Oracle RMU, the Oracle Rdb management utility, lets database
administrators manage Oracle Rdb databases. Oracle RMU commands
are executed at the operating system prompt. Oracle RMU command
syntax follows the rules and conventions of the DIGITAL Command
Language (DCL).
See also the RMUALTER help.
Oracle RMU commands allow you to display the contents of database
files, control the Oracle Rdb monitor process, verify data
structures, perform maintenance tasks (such as backup and restore
operations), and list information about current database users
and database activity statistics.
Oracle RMU commands consist of words, generally verbs, that
have parameters and qualifiers to define the action to be
performed.
1 – Command Parameters
One or more spaces separate command parameters and their
qualifiers from the command keyword. Command parameters define
the file, index, or entity on which the command will act. In most
cases, you can omit the parameter from the command line and enter
it in response to a prompt.
In the following sample command, RMU/DUMP is the command keyword
and MF_PERSONNEL is the command parameter:
$ RMU/DUMP MF_PERSONNEL
When a storage area is a command parameter in an Oracle RMU
command, use the storage area name instead of the storage area
file specification. For example:
$ RMU/RESTORE/AREA MF_PERSONNEL.RBF EMPIDS_LOW/THRESHOLDS=(65,75,80)
Some commands, such as the RMU Backup command, require two or
more command parameters. If you provide all parameters (for
example, a root file specification and a backup file name),
there are no prompts. Other commands, such as the RMU Restore
command, have one required and one optional command parameter.
In this case, there are no prompts if you provide the backup
parameter but not the storage area parameter. However, if you do
not provide either parameter, Oracle RMU prompts for both.
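For example, the following Backup command supplies both the root file
specification and the backup file name, so Oracle RMU issues no
prompts:
$ RMU/BACKUP MF_PERSONNEL MF_PERS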
2 – Command Qualifiers
Command qualifiers modify the behavior of an Oracle RMU command.
Although similar in appearance, command qualifiers are distinct
from the Oracle RMU commands themselves. The first word (and
sometimes a subsequent word) that follows the RMU keyword is the
command itself. For instance, in the following example, /DUMP and
/AFTER_JOURNAL are part of the Oracle RMU command and thus must
appear in the order shown. /OPTION=STATISTICS and /LOG are command
qualifiers and can appear in any order after the Oracle RMU
command. You can determine which portions of an Oracle RMU command
are the command itself, and which are command qualifiers, by
noting the documented name of the command:
$ RMU/DUMP/AFTER_JOURNAL aij_one.aij /OPTION=STATISTICS/LOG
You can enter command qualifiers in uppercase, lowercase, or mixed
case. They always begin with a slash (/) followed by a qualifier
word.
In some cases, an equal sign (=) and a qualifier value follow
the qualifier word. A qualifier value can be simple (a number,
a string, or a keyword) or compound (a list of numbers, strings,
or keywords separated by commas, enclosed in parentheses) or an
indirect command file name. For information on using indirect
command files, see Indirect-Command-Files.
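For example, the Thresholds qualifier shown earlier takes a
compound value, a parenthesized list of numbers separated by
commas:
$ RMU/RESTORE/AREA MF_PERSONNEL.RBF EMPIDS_LOW/THRESHOLDS=(65,75,80)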
A default value for a qualifier indicates what value is used if
you omit the qualifier completely. Omitting a qualifier is not the
same as specifying the qualifier with its default argument.
Command qualifiers influence the overall action of a command.
Command qualifiers must be placed following the command keyword
but before any parameters.
In the following example, the command qualifier, Users,
immediately follows the Dump keyword and precedes the command
parameter, mf_personnel:
$ RMU/DUMP/USERS MF_PERSONNEL
Parameter qualifiers (also referred to as file qualifiers or area
qualifiers) affect the treatment of parameters in the command.
If the command includes multiple instances of a given type of
parameter, the placement of parameter qualifiers affects their
scope of influence as follows:
o If you position the parameter qualifier after a particular
parameter, the qualifier affects only that parameter. This is
local use of a parameter qualifier.
o If you position the parameter qualifier before the first
parameter, the qualifier applies to all instances of the
parameter. This is global use of a parameter qualifier. Not
all parameter qualifiers can be used globally. To identify
such qualifiers, read the description of the qualifier.
o Local parameter qualifiers take precedence over global
parameter qualifiers in most cases. Exceptions are documented
in the qualifier descriptions for each Oracle RMU command.
The following example demonstrates the local use of the area
qualifier, Thresholds, to change the threshold settings for the
EMPIDS_LOW area:
$ RMU/RESTORE MF_PERSONNEL EMPIDS_LOW/THRESHOLDS=(70,80,90)
Note that if you specify a qualifier in both the negative and
positive forms, the last occurrence of the qualifier is the one
that takes effect. For example, the Nolog qualifier takes effect
in this command:
$ RMU/BACKUP/LOG/NOLOG MF_PERSONNEL MF_PERS
This is consistent with DCL behavior for negative and positive
qualifiers.
3 – Indirect-Command-Files
Numerous Oracle RMU command operations accept lists of names
as values for certain qualifiers, such as the Areas= or Lareas=
qualifiers. The command syntax can easily exceed the maximum
length of 1024 characters accepted by DCL. To overcome the
problem of syntax that is too long, you can include the names
in an indirect command file and specify the indirect command
file following the qualifier. Throughout this manual, this is
commonly referred to as using an indirect file reference. Note
that indirect command files can be nested to any depth.
Each indirect command file (default file extension .opt) contains
a list of names with one name per line. A comment, preceded by
an exclamation point, can be appended to a name, or it can be
inserted between lines. A reference to an indirect command file
in the list must be preceded by an at sign (@) and enclosed in
quotation marks (""). For example: "@EMPIDS".
The following example shows the contents of an indirect command
file called empids.opt. It lists the EMPIDS_LOW, EMPIDS_MID, and
EMPIDS_OVER storage areas. The last line in the example shows how
you would reference the indirect command file in an Oracle RMU
command line with the required quotation marks.
$ TYPE EMPIDS.OPT
EMPIDS_LOW ! Employee Areas
EMPIDS_MID
EMPIDS_OVER
$ RMU/ANALYZE/AREA="@EMPIDS" MF_PERSONNEL ! ANALYZE EMPLOYEE AREAS
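Because indirect command files can be nested, an entry in one file
can itself reference another indirect command file. The following
hypothetical sketch assumes a second file, all_areas.opt, that
nests the empids.opt file shown above (the DEPARTMENTS area name
is illustrative):
$ TYPE ALL_AREAS.OPT
"@EMPIDS" ! Nested reference to empids.opt
DEPARTMENTS
$ RMU/ANALYZE/AREA="@ALL_AREAS" MF_PERSONNEL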
4 – Required Privileges
An access control list (ACL) is created by default on the root
file of each Oracle Rdb database. To be able to use a particular
Oracle RMU command for the database, you must be granted
the appropriate Oracle RMU privilege for that command in the
database's root file ACL. For some Oracle RMU commands, you must
have one or more OpenVMS privileges as well as the appropriate
Oracle RMU privilege to be able to use the command.
Note that the root file ACL created by default on each Oracle Rdb
database controls only your Oracle RMU access to the database (by
specifying privileges that will allow a user or group of users
access to specific Oracle RMU commands). Root file ACLs do not
control your access to the database with SQL (structured query
language) statements. See Show Privilege for information on how
to display your Oracle RMU access to the database.
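As a hedged sketch (see Show Privilege for the exact syntax), such
a display might be requested as follows:
$ RMU/SHOW PRIVILEGE MF_PERSONNEL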
Your access to a database with SQL statements is governed by
the privileges granted to you in the database ACL (the ACL that
is displayed when you use the SQL SHOW PROTECTION ON DATABASE
command).
Privileges Required for Oracle RMU Commands shows the Oracle RMU
privileges you must have to use each Oracle RMU command. When
more than one Oracle RMU privilege appears in the Required Oracle
RMU Privileges column, if you have any of the listed Oracle RMU
privileges, you will pass the Oracle RMU privilege check for the
specified Oracle RMU command.
If the Oracle RMU command requires a user to have one or more
OpenVMS privileges in addition to the appropriate Oracle RMU
privileges, the OpenVMS privileges are shown in the Required
OpenVMS Privileges column of Privileges Required for Oracle RMU
Commands. When more than one OpenVMS privilege is listed in the
Required OpenVMS Privileges column, you must have all of the
listed OpenVMS privileges to pass the OpenVMS privilege check for
the Oracle RMU command.
The OpenVMS Override Privileges column of Privileges Required
for Oracle RMU Commands shows one or more OpenVMS privileges
that allow a user without the appropriate required Oracle RMU and
OpenVMS privileges for an Oracle RMU command to use the command
anyway. When more than one OpenVMS privilege is listed in the
OpenVMS Override Privileges column, you can use the specified
Oracle RMU command if you have any of the listed privileges.
Table 1 Privileges Required for Oracle RMU Commands
Oracle RMU        Required Oracle    Required OpenVMS   OpenVMS Override
Command           RMU Privileges     Privileges         Privileges
Alter RMU$ALTER SYSPRV, BYPASS
Analyze Areas RMU$ANALYZE SYSPRV, BYPASS
Analyze RMU$ANALYZE SYSPRV, BYPASS
Cardinality
Analyze Indexes RMU$ANALYZE SYSPRV, BYPASS
Analyze RMU$ANALYZE SYSPRV, BYPASS
Placement
Backup RMU$BACKUP SYSPRV, BYPASS
Backup After_ RMU$BACKUP SYSPRV, BYPASS
Journal
Backup Plan RMU$BACKUP SYSPRV, BYPASS
Checkpoint RMU$BACKUP, WORLD
RMU$OPEN
Close RMU$OPEN WORLD
Collect RMU$ANALYZE SYSPRV, BYPASS
Optimizer_
Statistics
Convert RMU$CONVERT, SYSPRV, BYPASS
RMU$RESTORE
Copy_Database RMU$COPY SYSPRV, BYPASS
Delete RMU$ANALYZE SYSPRV, BYPASS
Optimizer_
Statistics
Dump After_ RMU$DUMP SYSPRV, BYPASS
Journal
Dump Areas RMU$DUMP SYSPRV, BYPASS
Dump Backup_File RMU$DUMP, READ BYPASS
RMU$BACKUP,
RMU$RESTORE
Dump Export READ BYPASS
Dump Header RMU$DUMP, SYSPRV, BYPASS
RMU$BACKUP,
RMU$OPEN
Dump Lareas RMU$DUMP SYSPRV, BYPASS
Dump Recovery_ READ BYPASS
Journal
Dump Row Cache RMU$DUMP SYSPRV, BYPASS
Dump Snapshots RMU$DUMP SYSPRV, BYPASS
Dump Users RMU$DUMP, WORLD
RMU$BACKUP,
RMU$OPEN
Extract RMU$UNLOAD SYSPRV, BYPASS
Insert Optimizer_ RMU$ANALYZE SYSPRV, BYPASS
Statistics
Load RMU$LOAD SYSPRV, BYPASS
Load Audit RMU$SECURITY SECURITY, BYPASS
Load Plan RMU$LOAD SYSPRV, BYPASS
Monitor Reopen_ WORLD, SETPRV
Log CMKRNL,
DETACH,
PSWAPM,
ALTPRI,
SYSGBL,
SYSNAM,
SYSPRV,
BYPASS
Monitor Start WORLD, SETPRV
CMKRNL,
DETACH,
PSWAPM,
ALTPRI,
PRMMBX,
SYSGBL,
SYSNAM,
SYSPRV,
BYPASS
Monitor Stop WORLD, SETPRV
CMKRNL,
DETACH,
PSWAPM,
ALTPRI,
PRMMBX,
SYSGBL,
SYSNAM,
SYSPRV,
BYPASS
Move_Area RMU$MOVE SYSPRV, BYPASS
Open RMU$OPEN WORLD
Optimize After_ RMU$BACKUP, SYSPRV, BYPASS
Journal RMU$RESTORE
Reclaim RMU$ALTER SYSPRV, BYPASS
Recover RMU$RESTORE SYSPRV, BYPASS
Recover Resolve RMU$RESTORE SYSPRV, BYPASS
Repair RMU$ALTER SYSPRV, BYPASS
Resolve RMU$RESTORE SYSPRV, BYPASS
Restore RMU$RESTORE SYSPRV, BYPASS
Restore Only_ RMU$RESTORE SYSPRV, BYPASS
Root
Server After_ RMU$OPEN WORLD
Journal Reopen_
Output
Server After_ RMU$OPEN WORLD
Journal Start
Server After_ RMU$OPEN WORLD
Journal Stop
Server Backup_ RMU$OPEN WORLD
Journal Resume
Server Backup_ RMU$OPEN WORLD
Journal Suspend
Server Record_ RMU$OPEN WORLD
Cache
Set After_ RMU$ALTER, SYSPRV, BYPASS
Journal RMU$BACKUP,
RMU$RESTORE
Set AIP RMU$DUMP SYSPRV, BYPASS
Set Audit RMU$SECURITY SECURITY, BYPASS
Set Buffer RMU$ALTER SYSPRV, BYPASS
Object
Set Corrupt_ RMU$ALTER, SYSPRV, BYPASS
Pages RMU$BACKUP,
RMU$RESTORE
Set Galaxy RMU$ALTER SYSPRV, BYPASS
Set Global RMU$ALTER SYSPRV, BYPASS
Buffers
Set Logminer RMU$ALTER, SYSPRV, BYPASS
RMU$BACKUP,
RMU$RESTORE
Set Privilege RMU$SECURITY SECURITY, BYPASS
Set Row_Cache RMU$ALTER SYSPRV, BYPASS
Set Shared RMU$ALTER SYSPRV, BYPASS
Memory
Show After_ RMU$BACKUP, SYSPRV, BYPASS
Journal RMU$RESTORE,
RMU$VERIFY
Show AIP RMU$DUMP SYSPRV, BYPASS
Show Audit RMU$SECURITY SECURITY, BYPASS
Show Corrupt_ RMU$BACKUP, SYSPRV, BYPASS
Pages RMU$RESTORE,
RMU$VERIFY
Show Locks WORLD
Show Optimizer_ RMU$ANALYZE, SYSPRV, BYPASS
Statistics RMU$SHOW
Show Privilege RMU$SECURITY SECURITY, BYPASS
Show Statistics RMU$SHOW SYSPRV, BYPASS, WORLD
Show System WORLD
Show Users RMU$SHOW, WORLD
RMU$BACKUP,
RMU$OPEN
Show Version
Unload RMU$UNLOAD SYSPRV, BYPASS
Unload After_ RMU$DUMP SYSPRV, BYPASS
Journal
Verify RMU$VERIFY SYSPRV, BYPASS
5 – Alter
Invokes the RdbALTER utility for Oracle Rdb.
NOTE
Oracle Corporation recommends that the RdbALTER utility be
used only as a last resort to provide a temporary patch to a
corrupt database. The RdbALTER utility should not be used as
a routine database management tool.
Use the RdbALTER utility only after you fully understand the
internal data structure, know the information the database
should contain, and know the full effects of the command.
Because of the power of the RdbALTER utility and the
cascading effects it can have, Oracle Corporation recommends
that you experiment on a copy of the damaged database before
applying the RdbALTER utility to a production database.
To invoke the RdbALTER utility, enter the RMU Alter command in
the following format:
$ RMU/ALTER [root-file-spec]
The optional root file parameter identifies the database you want
to alter. If you specify this parameter, you automatically attach
to the specified database. If you do not specify this parameter,
you must use the RdbALTER ATTACH command. See the RdbALTER Help
for more information on the ATTACH command.
The RMU Alter command responds with the following prompt:
RdbALTER>
This prompt indicates that the system expects RdbALTER command
input.
To access the RdbALTER Help file, enter the following:
RdbALTER> HELP
To use the RMU Alter command for a database, you must have the
RMU$ALTER privilege in the root file ACL for the database or the
OpenVMS SYSPRV or BYPASS privilege. You must have the OpenVMS
SYSPRV or BYPASS privilege if you are using an RMU Alter command
to change a file name.
6 – Analyze
Displays information about stored and actual cardinality values
for tables and indexes, database space utilization in the
database, index structures for the database, or the accessibility
through indexes of data records in the database.
6.1 – Database
Gathers and displays statistics on how the database uses storage,
logical area, or page space.
6.1.1 – Description
The RMU Analyze command provides a maintenance tool for database
administrators. It generates a formatted display of statistical
information that describes storage utilization in the database.
Information is displayed selectively for storage areas and
logical areas, or for a range of pages in a storage area. You
can use the RMU Analyze command to analyze the following:
o Space utilization for database pages
o Space utilization for storage areas
o Space utilization for logical areas
6.1.2 – Format
RMU/Analyze root-file-spec

Command Qualifiers                        Defaults

/Areas[=storage-area-list]                /Areas
/[No]Binary_Output=file-option-list       /Nobinary_Output
/End=integer                              /End=last-page
/Exclude=(options)                        No logical areas excluded
/[No]Lareas[=logical-area-list]           /Lareas
/Option={Normal | Full | Debug}           /Option=Normal
/Output=file-name                         /Output=SYS$OUTPUT
/Start=integer                            /Start=first-page
6.1.3 – Parameters
6.1.3.1 – root-file-spec
The file specification for the database root file to be analyzed.
The default file extension is .rdb.
6.1.4 – Command Qualifiers
6.1.4.1 – Areas
Areas[=storage-area-list]
Areas=*
Specifies the storage areas to be analyzed. You can specify each
storage area by name or by the area's ID number.
The default, the Areas qualifier, results in analysis of all
storage areas. You can also specify the Areas=* qualifier to
analyze all storage areas. If you specify more than one storage
area, separate the storage area names or ID numbers in the
storage-area-list parameter with a comma and enclose the list
in parentheses. If you omit the Areas qualifier, information for
all the storage areas is displayed.
You can use the Start and End qualifiers with the Areas qualifier
to analyze specific pages. If you use the Start and End qualifiers
when you specify more than one storage area in the
storage-area-list parameter, the same range of pages is analyzed
in each specified storage area.
The Areas qualifier can be used with an indirect command file.
See the Indirect-Command-Files help entry for more information.
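For example, the following command analyzes the same range of
pages, 1 through 100, in each of two storage areas (the page range
here is illustrative):
$ RMU/ANALYZE/AREAS=(EMPIDS_LOW,EMPIDS_MID)/START=1/END=100 MF_PERSONNEL.RDB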
6.1.4.2 – Binary Output
Binary_Output=file-option-list
Nobinary_Output
Allows you to direct the summary results to a binary file, and
to create a record definition file that is compatible with the
data dictionary for the binary output file. The binary output
file can be loaded into an Oracle Rdb database by using the RMU
Load command with the Record_Definition qualifier for use by a
user-written management application or procedure. The binary
output can also be used directly by the user-written application
or procedure.
The valid file options are:
o File=file-spec
The File option causes the Analyze command data to be stored
in an RMS file that contains a fixed-length binary record for
each storage area and logical area analyzed. The default file
extension for the binary output file is .unl. The following
command creates the binary output file analyze_out.unl:
$ RMU/ANALYZE/BINARY_OUTPUT=FILE=ANALYZE_OUT MF_PERSONNEL.RDB
o Record_Definition=file-spec
The Record_Definition option causes the Analyze command
data record definition to be stored in an RMS file. The
output file contains the definition in a subset of the data
dictionary command format, a format very similar to RDO field
and relation definitions. The default file extension for the
record definition output file is .rrd. The following command
creates the output file analyze_out.rrd:
$ RMU/ANALYZE/BINARY_OUTPUT=RECORD_DEFINITION=ANALYZE_OUT -
_$ MF_PERSONNEL.RDB
You can specify both file options in one command by separating
them with a comma and enclosing them within parentheses, for
example:
$ RMU/ANALYZE/BINARY_OUTPUT= -
_$ (FILE=ANALYZE_OUT,RECORD_DEFINITION=ANALYZE_OUT) -
_$ MF_PERSONNEL.RDB
If you specify the Binary_Output qualifier, you must specify
at least one of the options. The default is the Nobinary_Output
qualifier, which does not create an output file.
6.1.4.3 – End
End=integer
Specifies the ending page number for the analysis. The default is
the end of the storage area file.
6.1.4.4 – Exclude
Exclude=System_Records
Exclude=Metadata
Exclude=(System_Records, Metadata)
Excludes information from the RMU Analyze command output. You
can specify Exclude=System_Records or Exclude=Metadata, or both.
If you specify both options, enclose them within parentheses and
separate each option with a comma.
When you do not specify the Exclude qualifier, data is provided
for all the logical areas in the database.
The options are as follows:
o System_Records
Information on the RDB$SYSTEM_RECORDS logical areas is
excluded from the Analyze command output.
o Metadata
Information on all the Oracle Rdb logical areas (for example,
the RDB$SYSTEM_RECORDS and RDB$COLLATIONS_NDX logical areas)
is excluded from the RMU Analyze command output.
Data is accumulated for the logical areas excluded with the
Exclude qualifier, but the data is excluded from the Analyze
output.
You cannot use the Exclude qualifier and the Lareas qualifier in
the same RMU Analyze command.
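For example, the following command excludes both the
system-record and the metadata logical areas from the output:
$ RMU/ANALYZE/EXCLUDE=(SYSTEM_RECORDS,METADATA) MF_PERSONNEL.RDB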
6.1.4.5 – Lareas
Lareas[=logical-area-list]
Lareas=*
Nolareas
Specifies the logical areas to be analyzed. Each table in the
database is associated with a logical area name. The default, the
Lareas qualifier, results in analysis of all logical areas. You
can also specify the Lareas=* qualifier to analyze all logical
areas. If you specify more than one logical area name, separate
the logical area names in the logical-area-list with a comma and
enclose the list in parentheses.
The Lareas qualifier can be used with indirect command files. See
the Indirect-Command-Files help entry for more information.
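For example, the following command restricts the analysis to two
logical areas (the table names here are hypothetical):
$ RMU/ANALYZE/LAREAS=(EMPLOYEES,JOB_HISTORY) MF_PERSONNEL.RDB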
6.1.4.6 – Option
Option=Normal
Option=Full
Option=Debug
Specifies the type of information and level of detail the
analysis will include. Three types of output are available:
o Normal
Output includes only summary information. The Normal option is
the default.
o Full
Output includes histograms and summary information.
o Debug
Output includes internal information about the data, as well
as histograms and summary information. In general, use the
Debug option for diagnostic support purposes. You can also use
the Debug option to extract data and perform an independent
analysis.
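For example, the following command includes histograms as well as
summary information in the output:
$ RMU/ANALYZE/OPTION=FULL MF_PERSONNEL.RDB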
6.1.4.7 – Output
Output=file-name
Specifies the name of the file where output will be sent. The
default file extension is .lis. If you do not specify the Output
qualifier, the output is sent to SYS$OUTPUT.
6.1.4.8 – Start
Start=integer
Specifies the starting page number for the analysis. The default
is 1.
6.1.5 – Usage Notes
o To use the RMU Analyze command for a database, you must
have the RMU$ANALYZE privilege in the root file ACL for the
database or the OpenVMS SYSPRV or BYPASS privilege.
o When the RMU Analyze command is issued for a closed database,
the command executes without other users being able to attach
to the database.
o Detected asynchronous prefetch should be enabled to achieve
the best performance of this command. Beginning with Oracle
Rdb V7.0, by default, detected asynchronous prefetch is
enabled. You can determine the setting for your database by
issuing the RMU Dump command with the Header qualifier.
If detected asynchronous prefetch is disabled, and you do not
want to enable it for the database, you can enable it for your
Oracle RMU operations by defining the following logicals at
the process level:
$ DEFINE RDM$BIND_DAPF_ENABLED 1
$ DEFINE RDM$BIND_DAPF_DEPTH_BUF_CNT P1
P1 is a value between 10 and 20 percent of the user buffer
count.
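For example, assuming a user buffer count of 250 (a hypothetical
value), a depth in the 10 to 20 percent range would be 25 to 50:
$ DEFINE RDM$BIND_DAPF_ENABLED 1
$ DEFINE RDM$BIND_DAPF_DEPTH_BUF_CNT 25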
o The following RMU Analyze command directs the results into a
record definition file called db.rrd. This file is compatible
with the syntax for creating new columns and tables in the
data dictionary.
$ RMU/ANALYZE/BINARY_OUTPUT=RECORD_DEFINITION=DB.RRD MF_PERSONNEL
$! Display the db.rrd file created by the previous command:
$ TYPE DB.RRD
DEFINE FIELD RMU$DATE DATATYPE IS DATE.
DEFINE FIELD RMU$AREA_NAME DATATYPE IS TEXT SIZE IS 32.
DEFINE FIELD RMU$STORAGE_AREA_ID DATATYPE IS SIGNED WORD.
DEFINE FIELD RMU$FLAGS DATATYPE IS SIGNED WORD.
DEFINE FIELD RMU$TOTAL_BYTES DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$EXPANDED_BYTES DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$FRAGMENTED_BYTES DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$EXPANDED_FRAGMENT_BYTES DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$TOTAL_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$FRAGMENTED_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$FRAGMENT_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$PAGE_LENGTH DATATYPE IS SIGNED WORD.
DEFINE FIELD RMU$MAX_PAGE_NUMBER DATATYPE IS SIGNED LONGWORD.
DEFINE FIELD RMU$FREE_BYTES DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$OVERHEAD_BYTES DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$AIP_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$ABM_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$SPAM_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$INDEX_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$BTREE_NODE_BYTES DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$HASH_BYTES DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$DUPLICATES_BYTES DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$OVERFLOW_BYTES DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$LOGICAL_AREA_ID DATATYPE IS SIGNED WORD.
DEFINE FIELD RMU$RELATION_ID DATATYPE IS SIGNED WORD.
DEFINE FIELD RMU$RECORD_ALLOCATION_SIZE DATATYPE IS SIGNED WORD.
DEFINE FIELD RMU$TOTAL_SPACE DATATYPE IS F_FLOATING.
DEFINE RECORD RMU$ANALYZE_AREA.
.
.
.
o The following list describes each of the fields in the db.rrd
record definition:
- RMU$DATE
Contains the date that the Analyze operation was done
- RMU$AREA_NAME
Contains the name of the storage area that was analyzed
- RMU$STORAGE_AREA_ID
Contains the area ID of the storage area that was analyzed
- RMU$FLAGS
The three possible values in this field have the following
meanings:
* 0-Indicates that the record is a storage area record,
not a logical area record
* 1-Indicates that data compression is not enabled for the
logical area
* 3-Indicates that data compression is enabled for the
logical area
- RMU$TOTAL_BYTES
Contains the total size of the data stored in the logical
area
- RMU$EXPANDED_BYTES
Contains the total size of the stored data in the logical
area after decompression
- RMU$FRAGMENTED_BYTES
Contains the number of bytes in the stored fragments
- RMU$EXPANDED_FRAGMENT_BYTES
Contains the number of bytes in the stored fragments after
decompression
- RMU$TOTAL_COUNT
Contains the total number of records stored
- RMU$FRAGMENTED_COUNT
Contains the number of fragmented records
- RMU$FRAGMENT_COUNT
Contains the number of stored fragments
- RMU$PAGE_LENGTH
Contains the length in bytes of a database page in the
storage area
- RMU$MAX_PAGE_NUMBER
Contains the page number of the last initialized page in
the storage area
- RMU$FREE_BYTES
Contains the number of free bytes in the storage area
- RMU$OVERHEAD_BYTES
Contains the number of bytes used for overhead in the
storage area
- RMU$AIP_COUNT
Contains the number of the area inventory pages (AIPs) in
the storage area
- RMU$ABM_COUNT
Contains the number of area bit map (ABM) pages in the
storage area
- RMU$SPAM_COUNT
Contains the number of space area management (SPAM) pages
in the storage area
- RMU$INDEX_COUNT
Contains the number of index records in the storage area
- RMU$BTREE_NODE_BYTES
Contains the number of bytes for sorted indexes in the
storage area
- RMU$HASH_BYTES
Contains the number of bytes for hashed indexes in the
storage area
- RMU$DUPLICATES_BYTES
Contains the number of bytes for duplicate key values for
sorted indexes in the storage area
- RMU$OVERFLOW_BYTES
Contains the number of bytes for hash bucket overflow
records in the storage area
- RMU$LOGICAL_AREA_ID
Contains the logical area ID of the logical area that was
analyzed
- RMU$RELATION_ID
Contains the record type of the row in the logical area
that was analyzed
- RMU$RECORD_ALLOCATION_SIZE
Contains the size of a row when the table was initially
defined
- RMU$TOTAL_SPACE
Contains the number of bytes available for storing user
data in the logical area (used space + free space +
overhead)
6.1.6 – Examples
Example 1
The following command analyzes the EMPIDS_LOW and EMP_INFO
storage areas in the mf_personnel database:
$ RMU/ANALYZE/AREAS=(EMPIDS_LOW,EMP_INFO)/OUTPUT=EMP.OUT -
_$ MF_PERSONNEL.RDB
Example 2
Both of the following commands analyze the DEPARTMENTS and
SALARY_HISTORY storage areas in the mf_personnel database:
$! Using storage area names to specify storage areas
$ RMU/ANALYZE/AREAS=(DEPARTMENTS,SALARY_HISTORY) MF_PERSONNEL.RDB -
$ /OUTPUT=DEP_SAL.OUT
$!
$! Using storage area ID numbers to specify storage areas
$ RMU/ANALYZE/AREAS=(2,9) MF_PERSONNEL.RDB /OUTPUT=DEP_SAL.OUT
6.2 – Cardinality
Generates a formatted display of the actual and stored
cardinality values for specified tables and indexes. Also, if
the stored cardinality values are different from the actual
cardinality values, the RMU Analyze Cardinality command allows
you to update the stored cardinality values.
NOTE
Beginning in Oracle Rdb Version 7.0, the RMU Analyze
Cardinality command has been deprecated and might be removed
in future versions of Oracle Rdb. The features available
through this command are now available through the RMU
Collect Optimizer_Statistics command and the RMU Show
Optimizer_Statistics command.
In addition, updating cardinality information for indexes
using the RMU Analyze Cardinality command may cause poor
performance because the prefix cardinality information is
not collected.
Therefore, Oracle Corporation recommends that you use the
RMU Collect Optimizer_Statistics and RMU Show Optimizer_
Statistics commands instead of the RMU Analyze Cardinality
command.
See Collect_Optimizer_Statistics and Show Optimizer_
Statistics for information on the RMU Collect Optimizer_
Statistics and the RMU Show Optimizer_Statistics commands.
6.2.1 – Description
The actual cardinality values for tables and indexes can be
different from the stored cardinality values in your database's
RDB$SYSTEM storage area if RDB$SYSTEM has been set to read-
only access. When rows are added to or deleted from tables and
indexes after the RDB$SYSTEM storage area has been set to read-
only access, the cardinality values for these tables and indexes
are not updated.
For indexes, the cardinality value is the number of unique
entries for an index that allows duplicates. If the index is
unique, Oracle Rdb stores zero for the cardinality, and uses the
table cardinality instead. For tables, the cardinality value is
the number of rows in the table. Oracle Rdb uses the cardinality
values of indexes and tables to influence decisions made by the
optimizer. If the actual cardinality values of tables and indexes
are different from the stored cardinality values, the optimizer's
performance can be adversely affected.
When you use the SQL ALTER DATABASE statement to set the
RDB$SYSTEM storage area to read-only access for your database,
the Oracle Rdb system tables in the RDB$SYSTEM storage area are
also set to read-only access. When the Oracle Rdb system tables
are set to read-only access:
o Automatic updates to table and index cardinality are disabled.
o Manual changes made to the cardinalities to influence the
optimizer are not allowed.
o The I/O associated with the cardinality update is eliminated.
With the RMU Analyze Cardinality command, you can:
o Display the stored and actual cardinality values for the
specified tables and indexes.
o Update the stored cardinality value for a specified table
or index with either the actual value or an alternative
value of your own choosing. Oracle Corporation recommends
that you update the stored cardinality value with the actual
cardinality value. Specifying a value other than the actual
cardinality value can result in poor database performance.
6.2.2 – Format
RMU/Analyze/Cardinality root-file-spec [table-or-index-name[,...]]

Command Qualifiers                        Defaults

/[No]Confirm                              /Noconfirm
/Output=file-name                         /Output=SYS$OUTPUT
/Transaction_Type=option                  /Transaction_Type=Automatic
/[No]Update                               /Noupdate
6.2.3 – Parameters
6.2.3.1 – root-file-spec
The name of the database root file for which you want
information. The default file extension is .rdb. This parameter
is required.
6.2.3.2 – table-or-index-name
table-or-index-name[,...]
The name of the table or index for which you want information
about cardinality. The default is all tables and all enabled
indexes. If you want information about a disabled index, you must
specify it by name.
If you do not accept the default and instead specify a table
name, the RMU Analyze Cardinality command and any qualifiers
you specify will affect only the named table; the command will
not result in a display or update (if the Update qualifier is
specified) of the indexes associated with the table.
This parameter is optional. An indirect file reference can
be used. See the Indirect-Command-Files help entry for more
information.
6.2.4 – Command Qualifiers
6.2.4.1 – Confirm
Confirm
Noconfirm
Specify the Confirm qualifier with the Update qualifier to
gain more control over the update function. When you specify
the Confirm qualifier, you are asked whether the update should
be performed for each selected table or index whose stored
cardinality value is different from its actual cardinality value.
You can respond with YES, NO, QUIT, or an alternative value for
the stored cardinality.
Specifying YES means that you want to update the stored
cardinality with the actual cardinality value. Specifying NO
means that you do not want to update the stored cardinality
value. Specifying QUIT aborts the RMU Analyze Cardinality
command, rolls back any changes you made to stored cardinalities,
and returns you to the operating system prompt. Specifying an
alternative value updates the stored cardinality value with the
alternative value.
When you specify the Noconfirm qualifier, you are not given the
option of updating stored cardinality values with an alternative
value of your own choosing. Instead, the stored cardinality
values that differ from the actual cardinality values are
automatically updated with the actual cardinality values.
The default is the Noconfirm qualifier.
The Confirm and Noconfirm qualifiers are meaningless and are
ignored if they are specified without the Update qualifier.
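For example, a command of the following form (a sketch shown
against the sample mf_personnel database) asks for confirmation
before each stored cardinality value is updated:
$ RMU/ANALYZE/CARDINALITY/UPDATE/CONFIRM MF_PERSONNEL.RDB
For each table or index whose stored and actual cardinality
values differ, Oracle RMU prompts you to respond with YES, NO,
QUIT, or an alternative value.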
6.2.4.2 – Output
Output=file-name
Specifies the name of the file where output will be sent. The
default is SYS$OUTPUT. If you specify a file name, the default
output file type is .lis.
6.2.4.3 – Transaction Type
Transaction_Type=option
Allows you to specify the transaction mode for the transactions
used to perform the analyze operation. Valid options are:
o Automatic
o Read_Only
o Noread_Only
You must specify an option if you use this qualifier.
If you do not specify any form of this qualifier, the
Transaction_Type=Automatic qualifier is the default. This
qualifier specifies that Oracle RMU is to determine the
transaction mode used for the analyze operation. If any storage
area in the database (including those not accessed for the
analyze operation) has snapshots disabled, the transactions used
for the analyze operation are set to read/write mode. Otherwise,
the transactions are set to read-only mode.
The Transaction_Type=Read_Only qualifier specifies that the
transactions used to perform the analyze operation be set to
read-only mode. When you explicitly set the transaction type to
read-only, snapshots need not be enabled for all storage areas
in the database, but must be enabled for those storage areas that
are analyzed. Otherwise, you receive an error and the analyze
operation fails.
You might select this option if not all storage areas have
snapshots enabled and you are analyzing objects that are stored
only in storage areas with snapshots enabled. In this case, using
the Transaction_Type=Read_Only qualifier allows you to perform
the analyze operation and impose minimal locking on other users
of the database.
The Transaction_Type=Noread_Only qualifier specifies that the
transactions used for the analyze operation be set to read/write
mode. You might select this option if you want to avoid the
growth of snapshot files that occurs during a read-only
transaction and are willing to incur the cost of increased
locking that occurs during a read/write transaction.
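For example, the following command (a sketch shown against the
sample mf_personnel database; it succeeds only if snapshots are
enabled for the analyzed storage areas) forces the analyze
operation to use read-only transactions:
$ RMU/ANALYZE/CARDINALITY/TRANSACTION_TYPE=READ_ONLY MF_PERSONNEL.RDB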
6.2.4.4 – Update
Update
Noupdate
Specify the Update qualifier to update the stored cardinality
values of tables and indexes. You can perform an update only when
the stored cardinality values differ from the actual cardinality
values. When updating cardinality values, Oracle Corporation
recommends that you update the stored cardinality values with
the actual cardinality values, not with an alternative value
of your own choosing. Specifying a value other than the actual
cardinality value can result in poor database performance. The
default is the Noupdate qualifier.
Using the Update qualifier allows you to update the stored
cardinality values of the specified tables and indexes even when
the RDB$SYSTEM storage area is designated for read-only access.
If you have set the RDB$SYSTEM storage area to read-only access,
Oracle RMU sets it to read/write during execution of the RMU
Analyze Cardinality command with the Update qualifier. Oracle RMU
resets the area to read-only when the operation completes.
If you are updating the stored cardinality for a table or index,
and a system failure occurs before the RDB$SYSTEM storage area is
changed back to read-only access, use the SQL ALTER DATABASE
statement to manually change the database back to read-only
access.
However, note that if you have set the area to read-only, the
update operation specified with the Update qualifier commences
only if the database is offline or quiescent.
If you specify a table name parameter with an RMU Analyze
Cardinality command that includes the Update qualifier, the
associated indexes are not updated; you must specify each table
and index you want to be updated or accept the default (by not
specifying any table or index names) and have all items updated.
Oracle Corporation recommends that you use the Update qualifier
during offline operations or during a period of low update
activity. If you update a cardinality while it is changing
(as a result of current database activity), the end result is
unpredictable.
Specify the Noupdate qualifier when you want to display the
stored and actual cardinality values only for the specified
tables and indexes.
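For example, a command of the following form (a sketch shown
against the sample mf_personnel database) updates the stored
cardinality value for the EMPLOYEES table only; because a table
name is specified, the indexes associated with EMPLOYEES are not
updated unless you name them as well:
$ RMU/ANALYZE/CARDINALITY/UPDATE MF_PERSONNEL.RDB EMPLOYEES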
6.2.5 – Usage Notes
o To use the RMU Analyze Cardinality command for a database, you
must have the RMU$ANALYZE privilege in the root file ACL for
the database or the OpenVMS SYSPRV or BYPASS privilege.
o You must have the SQL ALTER privilege for the database to
update a read-only RDB$SYSTEM storage area.
o If you specify a name for the table-or-index-name parameter
that is both an index name and a table name, the RMU Analyze
Cardinality command performs the requested operation for both
the table and index.
o Although you can alter the cardinality of a unique index
using the RMU Analyze Cardinality command, doing so has no
effect.
(A unique index has only unique keys and does not have any
duplicate keys.) Because the cardinality of a unique index and
the table it indexes are the same, Oracle Rdb uses the table
cardinality value when performing operations that involve
the cardinality of a unique index. Oracle Rdb does not use
the cardinality value stored for a unique index, nor does it
attempt to update this value as rows are stored or deleted.
o When the RMU Analyze Cardinality command is issued for a
closed database, the command executes without other users
being able to attach to the database.
6.2.6 – Examples
Example 1
The following command provides information on the cardinality for
all indexes and tables in the sample mf_personnel database:
$ RMU/ANALYZE/CARDINALITY/NOUPDATE MF_PERSONNEL.RDB /OUTPUT=CARD.LIS
Example 2
The following command provides information on the cardinality for
the EMPLOYEES table in the mf_personnel database:
$ RMU/ANALYZE/CARDINALITY/NOUPDATE MF_PERSONNEL.RDB EMPLOYEES -
_$ /OUTPUT=EMP.LIS
6.3 – Indexes
Generates a formatted display of statistical information that
describes the index structures for the database.
6.3.1 – Description
The RMU Analyze Indexes command provides a maintenance tool for
analyzing index structures and generates a formatted display
of this statistical information. Information is displayed
selectively for storage areas and logical areas, or for a range
of pages in a storage area. You can use the RMU Analyze Indexes
command to analyze the structures of both sorted (including
ranked sorted) and hashed indexes. The following shows sample
output from the RMU Analyze Index command:
$ RMU/ANALYZE/INDEXES MF_PERSONNEL.RDB JH_EMPLOYEE_ID_RANKED
----------------------------------------------------------------------------
Indices for database - RDBVMS_DISK1:[DB]MF_PERSONNEL.RDB;
----------------------------------------------------------------------------
Index JH_EMPLOYEE_ID_RANKED for relation JOB_HISTORY duplicates allowed
Max Level: 3, Nodes: 34, Used/Avail: 8693/13532 (64%), Keys: 133, Records: 0
Duplicate nodes:0, Used/Avail: 0/0 (0%), Keys: 100, Maps: 100, Records:4113
Total Comp/Uncomp IKEY Size: 600/798, Compression Ratio: .75
----------------------------------------------------------------------------
The statistics display includes the following information:
o The first line of output identifies the database in which the
analyzed index resides.
o The second line of output shows:
- Whether the index is a hashed index. In the example,
the index is not hashed, so the term hashed does not
appear.
- The index name.
- Whether duplicates are allowed.
o The third line of output:
- Max Level
The maximum number of levels in the index.
- Nodes
The total number of nodes in the index.
- Used/Avail (%)
The number of bytes used by the index/the number of bytes
available. (The percentage of space used by the index.)
- Keys
The sum of the dbkeys that point directly to data records
plus those that point to duplicate nodes.
- Records
The number of data records to which the Keys (in the
previous list item) point directly.
o The fourth line of output:
- Duplicate nodes
For hashed and nonranked sorted indexes, this is the number
of duplicate nodes in the index. For a ranked sorted index,
this is the number of overflow nodes. With ranked sorted
indexes, Oracle Rdb compresses duplicates using a byte-
aligned bitmap compression. It compresses the list of
dbkeys that point to duplicates and stores that list in
the index key node. Oracle Rdb creates overflow nodes when
the compressed list of duplicates does not fit in one index
key node. This overflow node contains a bitmap compressed
list of dbkeys and pointers to the next overflow node.
Therefore, for ranked sorted indexes, the duplicate nodes
count (overflow nodes) can be zero (0) if the compressed
list of dbkeys that point to duplicates fits into one node.
- Used/Avail (%)
The number of bytes used by duplicate nodes/number of bytes
available in the duplicate nodes. (The percentage of space
used within the duplicate nodes of the index.) This value
can be zero (0) for a ranked sorted index if the number of
duplicate nodes is zero.
- Keys
The total number of dbkeys that point to a duplicate node
or that point to the beginning of a duplicate node chain in
the index.
- Maps (appears only if the index is a ranked sorted index)
The number of duplicate key data record bit maps used by
ranked sorted indexes to represent the duplicate index key
data record dbkeys.
- Records
The total number of data records pointed to by duplicate
nodes. If the index is a ranked sorted index, Records
refers to the number of data records pointed to by
duplicate bit maps.
o The fifth line of output (appears only if the index is
compressed):
- Total Comp/Uncomp IKEY Size
The total byte count of the compressed leaf index keys
(level 1 nodes only)/the total byte count that would be
consumed if the index were not compressed
- Compression Ratio
The calculated ratio of Total Comp/Uncomp. A compression
ratio greater than 1.0 indicates that the compressed index
keys occupy more space than the uncompressed index keys.
For more information on RMU Analyze Indexes and the display
of index keys, refer to the Oracle Rdb7 Guide to Database
Performance and Tuning.
6.3.2 – Format
RMU/Analyze/Indexes root-file-spec [index-name[,...]]

Command Qualifiers                            Defaults

/[No]Binary_Output[=file-option-list]         /Nobinary_Output
/Exclude=Metadata                             All index data displayed
/Option={Normal | Full | Debug}               /Option=Normal
/Output=file-name                             /Output=SYS$OUTPUT
/Transaction_Type=option                      /Transaction_Type=Automatic
6.3.3 – Parameters
6.3.3.1 – root-file-spec
The file specification for the database root file for which
you want information. The default file extension is .rdb. This
parameter is required.
6.3.3.2 – index-name
index-name[,...]
The name of the index for which you want information. The default
is all enabled indexes. If you want information about a disabled
index, you must specify it by name. This parameter is optional.
An indirect file reference can be used. See the Indirect-Command-
Files help entry for more information.
The wildcard characters "%" and "*" can be used in the index
name specification. The following examples demonstrate various
uses of the wildcard characters:
$ RMU /ANALYZE /INDEX MF_PERSONNEL EMP*
$ RMU /ANALYZE /INDEX MF_PERSONNEL *LAST%NAME
$ RMU /ANALYZE /INDEX MF_PERSONNEL EMP%LAST%NAME
$ RMU /ANALYZE /INDEX MF_PERSONNEL *HASH, *LAST*
6.3.4 – Command Qualifiers
6.3.4.1 – Binary Output
Binary_output=file-option-list
Nobinary_Output
Specifying the Binary_Output qualifier allows you to store
the summary results in a binary file, and to create a record
definition file that is compatible with the data dictionary for
the binary output file. The binary output can be loaded into
an Oracle Rdb database by using the RMU Load command with the
Record_Definition qualifier for use by a user-written management
application or procedure. The binary output can also be used
directly by the user-written application or procedure.
The valid file options are:
o File=file-spec
The File option causes the RMU Analyze Indexes command data to
be stored in an RMS file that contains a fixed-length binary
record for each index analyzed.
The default file extension for the binary output file is .unl.
The following command creates the binary output file analyze_
out.unl:
$ RMU/ANALYZE/INDEXES -
_$ /BINARY_OUTPUT=FILE=ANALYZE_OUT MF_PERSONNEL.RDB
o Record_Definition=file-spec
The Record_Definition option causes the RMU Analyze Indexes
command data record definition to be stored in an RMS file.
The output file contains the record definition in a subset of
the data dictionary command format. The default file extension
for the record definition output file is .rrd. Refer to the
rrd_file_syntax help topic for a description of the .rrd
files. The following command creates the output file analyze_
out.rrd:
$ RMU/ANALYZE/INDEXES -
_$ /BINARY_OUTPUT=RECORD_DEFINITION=ANALYZE_OUT MF_PERSONNEL.RDB
You can specify both file options in one command by separating
them with a comma and enclosing them within parentheses, as
follows:
$ RMU/ANALYZE/INDEXES/BINARY_OUTPUT= -
_$ (FILE=ANALYZE_OUT,RECORD_DEFINITION=ANALYZE_OUT) -
_$ MF_PERSONNEL.RDB
If you specify the Binary_Output qualifier, you must specify
at least one of the options. The default is the Nobinary_Output
qualifier, which does not create an output file.
6.3.4.2 – Exclude
Exclude=Metadata
Excludes information from the RMU Analyze Indexes command output.
When you specify the Exclude=Metadata qualifier, information on
the Oracle Rdb indexes (for example, the RDB$NDX_REL_NAME_NDX and
RDB$COLLATIONS_NDX indexes) is excluded from the RMU Analyze
Indexes command output. When you do not specify the Exclude
qualifier, data is provided for all indexes in the database.
Data is accumulated for the indexes excluded with the Exclude
qualifier, but the data is excluded from the RMU Analyze Indexes
command output.
You cannot specify the Exclude qualifier and one or more index
names in the same RMU Analyze Indexes command.
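For example, the following command (a sketch shown against the
sample mf_personnel database) analyzes all indexes but omits the
Oracle Rdb system indexes from the display:
$ RMU/ANALYZE/INDEXES/EXCLUDE=METADATA MF_PERSONNEL.RDB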
6.3.4.3 – Option
Option=type
Specifies the type of information and the level of detail the
analysis will include. Three types of output are available:
o Normal
Output includes only summary information. The Normal option is
the default.
o Full
Output includes histograms and summary information. This
option displays a summary line for each sorted index level.
o Debug
Output includes internal information about the data,
histograms, and summary information. Note the following when
using this option to analyze compressed index keys:
- The key lengths are from the compressed index keys.
- The hexadecimal output for the keys is that of the
uncompressed index keys.
- The output includes summary statistics about the compressed
index keys.
In general, use the Debug option for diagnostic support
purposes. You can also use the Debug option to extract data
and perform an independent analysis.
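For example, the following command (a sketch shown against the
sample mf_personnel database; the index name is illustrative)
produces the Full display, which adds histograms and a summary
line for each sorted index level:
$ RMU/ANALYZE/INDEXES/OPTION=FULL MF_PERSONNEL.RDB EMP_LAST_NAME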
6.3.4.4 – Output
Output=file-name
Specifies the name of the file where output will be sent. The
default is SYS$OUTPUT. If you specify a file name, the default
output file extension is .lis.
6.3.4.5 – Transaction Type
Transaction_Type=option
Allows you to specify the transaction mode for the transactions
used to perform the analyze operation. Valid options are:
o Automatic
o Read_Only
o Noread_Only
You must specify an option if you use this qualifier.
If you do not use any form of this qualifier, the Transaction_
Type=Automatic qualifier is the default. This qualifier specifies
that Oracle RMU is to determine the transaction mode used for the
analyze operation. If any storage area in the database (including
those not accessed for the analyze operation) has snapshots
disabled, the transactions used for the analyze operation are
set to read/write mode. Otherwise, the transactions are set to
read-only mode.
The Transaction_Type=Read_Only qualifier specifies that the
transactions used to perform the analyze operation be set to
read-only mode. When you explicitly set the transaction type to
read-only, snapshots need not be enabled for all storage areas
in the database, but must be enabled for those storage areas that
are analyzed. Otherwise, you receive an error and the analyze
operation fails.
You might select this option if not all storage areas have
snapshots enabled and you are analyzing objects that are stored
only in storage areas with snapshots enabled. In this case, using
the Transaction_Type=Read_Only qualifier allows you to perform
the analyze operation and impose minimal locking on other users
of the database.
The Transaction_Type=Noread_Only qualifier specifies that the
transactions used for the analyze operation be set to read/write
mode. You might select this option if you want to avoid the
growth of snapshot files that occurs during a read-only
transaction and are willing to incur the cost of increased
locking that occurs during a read/write transaction.
6.3.5 – Usage Notes
o To use the RMU Analyze Indexes command for a database, you
must have the RMU$ANALYZE privilege in the root file access
control list (ACL) for the database or the OpenVMS SYSPRV or
BYPASS privilege.
o When the RMU Analyze Indexes command is issued for a closed
database, the command executes without other users being able
to attach to the database.
o The following RMU Analyze Indexes command produces an RMS
record definition file called index.rrd that can be read by
the RMU Load command and the data dictionary:
$ RMU/ANALYZE/INDEX/BINARY_OUTPUT=RECORD_DEFINITION=INDEX.RRD -
_$ MF_PERSONNEL
$!
$! Display the index.rrd file created by the previous command:
$ TYPE INDEX.RRD
DEFINE FIELD RMU$DATE DATATYPE IS DATE.
DEFINE FIELD RMU$INDEX_NAME DATATYPE IS TEXT SIZE IS 32.
DEFINE FIELD RMU$RELATION_NAME DATATYPE IS TEXT SIZE IS 32.
DEFINE FIELD RMU$LEVEL DATATYPE IS SIGNED WORD.
DEFINE FIELD RMU$FLAGS DATATYPE IS SIGNED WORD.
DEFINE FIELD RMU$COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$USED DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$AVAILABLE DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$DUPLICATE_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$DUPLICATE_USED DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$DUPLICATE_AVAILABLE DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$KEY_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$DATA_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$DUPLICATE_KEY_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$DUPLICATE_DATA_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$TOTAL_COMP_IKEY_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$TOTAL_IKEY_COUNT DATATYPE IS F_FLOATING.
DEFINE RECORD RMU$ANALYZE_INDEX.
o The following list describes each of the fields in the
index.rrd record definition:
- RMU$DATE
Contains the date that the analyze operation was done
- RMU$INDEX_NAME
Contains the name of the index that was analyzed
- RMU$RELATION_NAME
Contains the name of the table for which the index is
defined
- RMU$LEVEL
Contains the maximum number of index levels
- RMU$FLAGS
The twelve possible values in this field have the following
meanings:
* 0-Index is sorted and not unique. A full report is not
generated.
* 1-Index is sorted and unique. A full report is not
generated.
* 2-Index is hashed and not unique. A full report is not
generated.
* 3-Index is hashed and unique. A full report is not
generated.
* 4-Index is sorted and not unique. A full report is
generated.
* 5-Index is sorted and unique. A full report is
generated.
* 6-Index is hashed and not unique. A full report is
generated.
* 7-Index is hashed and unique. A full report is
generated.
* 8-Index is sorted ranked and not unique. A full report
is not generated.
* 9-Index is sorted ranked and unique. A full report is
not generated.
* 12-Index is sorted ranked and not unique. A full report
is generated.
* 13-Index is sorted ranked and unique. A full report is
generated.
The RMU Analyze Indexes command uses the RMU$FLAGS bits shown
in Table 2, RMU$FLAGS Bits Used by the RMU Analyze Indexes
Command, to describe specific index information.
Table 2 RMU$FLAGS Bits Used by the RMU Analyze Indexes Command

Bit Offset   Meaning
0            Unique index if true
1            Hashed index if true
2            Full report record if true
3            Ranked index if true
When bit 2 of RMU$FLAGS is set, a full report is generated. A
full report has records for each level of the index.
- RMU$COUNT
Contains the number of index nodes
- RMU$USED
Contains the amount of available space that is used
- RMU$AVAILABLE
Contains the amount of space available in the index records
initially
- RMU$DUPLICATE_COUNT
Contains the number of duplicate records
- RMU$DUPLICATE_USED
Contains the amount of available space used in the
duplicate records
- RMU$DUPLICATE_AVAILABLE
Contains the amount of space available in the duplicate
records initially
- RMU$KEY_COUNT
Contains the number of keys
- RMU$DATA_COUNT
Contains the number of records
- RMU$DUPLICATE_KEY_COUNT
Contains the number of duplicate keys
- RMU$DUPLICATE_DATA_COUNT
Contains the number of duplicate records
- RMU$TOTAL_COMP_IKEY_COUNT
Contains the number of compressed index key bytes
- RMU$TOTAL_IKEY_COUNT
Contains the number of bytes that would be used by index
keys, had they not been compressed
6.3.6 – Examples
Example 1
The following command analyzes the JH_EMPLOYEE_ID and SH_
EMPLOYEE_ID indexes in the mf_personnel database:
$ RMU/ANALYZE/INDEXES MF_PERSONNEL.RDB JH_EMPLOYEE_ID,SH_EMPLOYEE_ID -
_$ /OUTPUT=EMP_ID_INDEX.LIS
Example 2
The following commands demonstrate the differences you see
when you analyze a nonranked sorted index and a ranked sorted
index. Note the differences in the values for the Duplicate
nodes. The nonranked sorted index displays 80 duplicate nodes.
The ranked sorted index (before more duplicates are added)
displays 0 duplicate nodes for the same data. After hundreds
more duplicates are added, the ranked sorted index shows only
3 duplicate nodes. These differences result from the different
ways duplicate records are stored for nonranked sorted indexes
and ranked sorted indexes. See the Description help entry
under this command for details on these differences.
$ ! Analyze a nonranked sorted index:
$ !
$ RMU/ANALYZE/INDEXES MF_PERSONNEL.RDB JH_EMPLOYEE_ID
----------------------------------------------------------------------------
Indices for database - USER1:[DB]MF_PERSONNEL.RDB;1
----------------------------------------------------------------------------
Index JH_EMPLOYEE_ID for relation JOB_HISTORY duplicates allowed
Max Level: 2, Nodes: 4, Used/Avail: 768/1592 (48%), Keys: 103, Records: 20
Duplicate nodes: 80, Used/Avail: 2032/4696 (43%), Keys: 80, Records: 254
----------------------------------------------------------------------------
$ ! Analyze a ranked sorted index defined on the same column as the
$ ! nonranked sorted index:
$ RMU/ANALYZE/INDEXES MF_PERSONNEL.RDB JH_EMPLOYEE_ID_RANKED
----------------------------------------------------------------------------
Indices for database - USER1:[DB]MF_PERSONNEL.RDB;1
----------------------------------------------------------------------------
Index JH_EMPLOYEE_ID_RANKED for relation JOB_HISTORY duplicates allowed
Max Level: 2, Nodes: 11, Used/Avail: 2318/4378 (53%), Keys: 110, Records: 20
Duplicate nodes: 0, Used/Avail: 0/0 (0%), Keys: 80, Maps: 80, Records: 254
----------------------------------------------------------------------------
$ !
$ ! Insert many duplicates and analyze the ranked sorted index again:
$ !
$ RMU/ANALYZE/INDEXES MF_PERSONNEL.RDB JH_EMPLOYEE_ID_RANKED
----------------------------------------------------------------------------
Indices for database - USER1:[DB]MF_PERSONNEL.RDB;1
----------------------------------------------------------------------------
Index JH_EMPLOYEE_ID_RANKED for relation JOB_HISTORY duplicates allowed
Max Level: 2, Nodes: 13, Used/Avail: 2705/5174 (52%), Keys: 112, Records: 20
Duplicate nodes:3, Used/Avail:850/1194 (71%), Keys:80, Maps: 83, Records:2964
----------------------------------------------------------------------------
6.4 – Placement
Generates a formatted display of statistical information
describing the row placement relative to the index structures
for the database.
6.4.1 – Description
The RMU Analyze Placement command provides a maintenance tool
for analyzing row placement relative to index structures and
generates a formatted display of this statistical information.
Information is displayed selectively for any specified storage
area.
You can use the RMU Analyze Placement command to determine:
o The maximum and average path length to a data record. (The
maximum and average number of records touched to reach a data
record.)
o The estimated maximum I/O path length to a data record.
o The estimated minimum I/O path length to a data record.
o The frequency distributions for the database key (dbkey)
path lengths, maximum I/O path lengths, and minimum I/O path
lengths for specified indexes.
o The distribution of data records on data pages in a storage
area by logical area identifier (ID) and dbkey, the number
of dbkeys needed to reach each data record, the maximum and
minimum I/O path lengths needed to reach the data record, and
the specific dbkey for the data record.
6.4.2 – Format
RMU/Analyze/Placement root-file-spec [index-name[,...]]

Command Qualifiers                            Defaults

/Areas[=storage-area-list]                    /Areas
/[No]Binary_Output[=file-option-list]         /Nobinary_Output
/Exclude=Metadata                             All index data displayed
/Option={Normal | Full | Debug}               /Option=Normal
/Output=file-name                             /Output=SYS$OUTPUT
/Transaction_Type=option                      /Transaction_Type=Automatic
6.4.3 – Parameters
6.4.3.1 – root-file-spec
The file specification for the database root file to be analyzed.
The default file extension is .rdb.
6.4.3.2 – index-name
index-name[,...]
The name of the index for which you want information. The default
is all enabled indexes. If you want information about a disabled
index, you must specify it by name. This parameter is optional.
An indirect file reference can be used.
6.4.4 – Command Qualifiers
6.4.4.1 – Areas
Areas[=storage-area-list]
Areas=*
Specifies the storage areas to be analyzed. You can specify each
storage area by name or by the area's ID number.
If you are interested in the placement information for a
particular index, specify the area where the data resides, not
where the index resides. For example, if you are interested in
the placement information for the SH_EMPLOYEE_ID index of the
mf_personnel database, you should specify SALARY_HISTORY as the
storage area (which is where the data resides), not RDB$SYSTEM
(which is where the index resides).
If you do not specify the Areas qualifier, or if you specify
the Areas qualifier but do not provide a storage-area-list,
information for all the storage areas is displayed.
If you specify more than one storage area, separate the storage
area names or ID numbers in the storage-area-list with a comma
and enclose the list within parentheses.
If you specify more than one storage area with the Areas
qualifier, the analysis Oracle RMU provides is a summary for
all the specified areas. The analysis is not broken out into
separate sections for each specified storage area. To get index
information for a specific storage area, issue the RMU Analyze
Placement command, specifying only that area with the Areas
qualifier.
The Areas qualifier can be used with an indirect file reference.
See the Indirect-Command-Files help entry for more information.
The Areas qualifier (without a storage-area-list) is the default.
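For example, the following command (a sketch using the storage
area and index names from the sample mf_personnel database)
restricts the placement analysis for the SH_EMPLOYEE_ID index
to the SALARY_HISTORY storage area, where the data resides:
$ RMU/ANALYZE/PLACEMENT/AREAS=SALARY_HISTORY MF_PERSONNEL.RDB -
_$ SH_EMPLOYEE_ID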
6.4.4.2 – Binary Output
Binary_Output[=file-option-list]
Nobinary_Output
Specifying the Binary_Output qualifier allows you to store
the summary results in a binary file, and to create a record
definition file that is compatible with the data dictionary for
the binary output file. The binary output file can be loaded
into an Oracle Rdb database by using the RMU Load command with
the Record_Definition qualifier for use by a user-written
management application or procedure. The binary output can also
be used directly by the user-written application or procedure.
The valid file options are:
o File=file-spec
The File option causes the RMU Analyze Placement command data
to be stored in an RMS file that contains a fixed-length
binary record for each index analyzed. The default file
extension for the binary output file is .unl. The following
command creates the binary output file analyze_out.unl:
$ RMU/ANALYZE/PLACEMENT -
_$ /BINARY_OUTPUT=FILE=ANALYZE_OUT MF_PERSONNEL.RDB
o Record_Definition=file-spec
The Record_Definition option causes the RMU Analyze Placement
command data record definition to be stored in an RMS file.
The output file contains the record definition in a subset of
the data dictionary command format. The default file extension
for the record definition output file is .rrd. Refer to the
rrd_file_syntax help topic for a description of .rrd files.
The following command creates the output file analyze_out.rrd:
$ RMU/ANALYZE/PLACEMENT -
_$ /BINARY_OUTPUT=RECORD_DEFINITION=ANALYZE_OUT MF_PERSONNEL.RDB
You can specify both file options in one command by separating
them with a comma and enclosing them within parentheses, as
follows:
$ RMU/ANALYZE/PLACEMENT/BINARY_OUTPUT= -
_$ (FILE=ANALYZE_OUT,RECORD_DEFINITION=ANALYZE_OUT) -
_$ MF_PERSONNEL.RDB
The default is the Nobinary_Output qualifier, which does not
create an output file.
6.4.4.3 – Exclude
Exclude=Metadata
Excludes information from the RMU Analyze Placement command data.
When you specify the Exclude=Metadata qualifier, information on
all the Oracle Rdb indexes (for example, the RDB$NDX_REL_NAME_NDX
and RDB$COLLATIONS_NDX indexes) is excluded from the RMU Analyze
Placement command output. When you do not specify the Exclude
qualifier, data is provided for all indexes in the database.
Data is accumulated for the indexes excluded with the Exclude
qualifier, but the data is excluded from the RMU Analyze
Placement command output.
You cannot specify the Exclude qualifier and one or more index
names in the same RMU Analyze Placement command.
6.4.4.4 – Option
Option=type
Specifies the type of information and level of detail the
analysis will include. Three types of output are available:
o Normal
Output includes only summary information. Normal is the
default.
o Full
Output includes histograms and summary information.
o Debug
Output includes internal information about the data,
histograms, and summary information. Output also displays
uncompressed index keys from compressed indexes. The
hexadecimal output is that of the uncompressed index key.
However, the lengths shown are of the compressed index key.
For more information on RMU Analyze Placement and the display
of index keys, refer to the Oracle Rdb7 Guide to Database
Performance and Tuning.
6.4.4.5 – Output
Output=file-name
Specifies the name of the file where output will be sent. The
default file type is .lis. If you do not specify the Output
qualifier, the default output is SYS$OUTPUT.
6.4.4.6 – Transaction Type
Transaction_Type=option
Allows you to specify the transaction mode for the transactions
used to perform the analyze operation. Valid options are:
o Automatic
o Read_Only
o Noread_Only
You must specify an option if you use this qualifier.
If you do not use any form of this qualifier, the Transaction_
Type=Automatic qualifier is the default. This qualifier specifies
that Oracle RMU is to determine the transaction mode used for the
analyze operation. If any storage area in the database (including
those not accessed for the analyze operation) has snapshots
disabled, the transactions used for the analyze operation are
set to read/write mode. Otherwise, the transactions are set to
read-only mode.
The Transaction_Type=Read_Only qualifier specifies the
transactions used to perform the analyze operation be set to
read-only mode. When you explicitly set the transaction type to
read-only, snapshots need not be enabled for all storage areas
in the database, but must be enabled for those storage areas that
are analyzed. Otherwise, you receive an error and the analyze
operation fails.
You might select this option if not all storage areas have
snapshots enabled and you are analyzing objects that are stored
only in storage areas with snapshots enabled. In this case, using
the Transaction_Type=Read_Only qualifier allows you to perform
the analyze operation and impose minimal locking on other users
of the database.
The Transaction_Type=Noread_Only qualifier specifies that the
transactions used for the analyze operation be set to read/write
mode. You might select this option if you want to avoid the
growth of snapshot files that occurs during a read-only
transaction and are willing to incur the cost of increased
locking that occurs during a read/write transaction.
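For example, the following command (a sketch using the sample
MF_PERSONNEL database) forces the analyze operation to use
read-only transactions; it fails with an error if snapshots are
disabled for any storage area being analyzed:
$ RMU/ANALYZE/PLACEMENT/TRANSACTION_TYPE=READ_ONLY MF_PERSONNEL.RDB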
6.4.5 – Usage Notes
o To use the RMU Analyze Placement command for a database, you
must have the RMU$ANALYZE privilege in the root file ACL for
the database or the OpenVMS SYSPRV or BYPASS privilege.
o When the RMU Analyze Placement command is issued for a closed
database, the command executes without other users being able
to attach to the database.
o The following RMU Analyze Placement command directs
the results into an RMS record definition file called
placement.rrd that is compatible with the data dictionary:
$ RMU/ANALYZE/PLACEMENT/BINARY_OUTPUT=RECORD_DEFINITION=PLACEMENT.RRD -
_$ MF_PERSONNEL
$!
$! Display the placement.rrd file created by the previous command:
$ TYPE PLACEMENT.RRD
DEFINE FIELD RMU$DATE DATATYPE IS DATE.
DEFINE FIELD RMU$INDEX_NAME DATATYPE IS TEXT SIZE IS 32.
DEFINE FIELD RMU$RELATION_NAME DATATYPE IS TEXT SIZE IS 32.
DEFINE FIELD RMU$LEVEL DATATYPE IS SIGNED WORD.
DEFINE FIELD RMU$FLAGS DATATYPE IS SIGNED WORD.
DEFINE FIELD RMU$COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$DUPLICATE_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$KEY_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$DUPLICATE_KEY_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$DATA_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$DUPLICATE_DATA_COUNT DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$TOTAL_KEY_PATH DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$TOTAL_PAGE_PATH DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$TOTAL_BUFFER_PATH DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$MAX_KEY_PATH DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$MAX_PAGE_PATH DATATYPE IS F_FLOATING.
DEFINE FIELD RMU$MIN_BUF_PATH DATATYPE IS F_FLOATING.
DEFINE RECORD RMU$ANALYZE_PLACEMENT.
o The following list describes each of the fields in the
placement.rrd record definition:
- RMU$DATE
Contains the date that the analyze operation was done
- RMU$INDEX_NAME
Contains the name of the index that was analyzed
- RMU$RELATION_NAME
Contains the name of the table for which the index is
defined
- RMU$LEVEL
Contains the maximum number of index levels
- RMU$FLAGS
The six possible values in this field have the following
meanings:
* 0-Index is a sorted and not unique index
* 1-Index is sorted and unique
* 2-Index is hashed and not unique
* 3-Index is hashed and unique
* 4-Index is ranked sorted and not unique
* 5-Index is ranked sorted and unique
The RMU Analyze Placement command uses the RMU$FLAGS bits
shown in RMU$FLAGS Bits Used by the RMU Analyze Placement
Command for describing specific index information.
Table 3 RMU$FLAGS Bits Used by the RMU Analyze Placement Command
Bit Offset Meaning
0 Unique index if true
1 Hashed index if true
2 Ranked sorted index if true
- RMU$COUNT
Contains the number of index nodes
- RMU$DUPLICATE_COUNT
Contains the number of duplicate records
- RMU$KEY_COUNT
Contains the number of keys
- RMU$DUPLICATE_KEY_COUNT
Contains the number of duplicate keys
- RMU$DATA_COUNT
Contains the number of records
- RMU$DUPLICATE_DATA_COUNT
Contains the number of duplicate records
- RMU$TOTAL_KEY_PATH
Contains the total number of keys touched to access all the
records
- RMU$TOTAL_PAGE_PATH
Contains the total number of pages touched to access all
the records
- RMU$TOTAL_BUFFER_PATH
Contains the total number of buffers touched to access all
the records
- RMU$MAX_KEY_PATH
Contains the largest number of keys touched to access any
of the records
- RMU$MAX_PAGE_PATH
Contains the largest number of pages touched to access any
of the records
- RMU$MIN_BUF_PATH
Contains the smallest number of buffers touched to access
any of the records
6.4.6 – Examples
Example 1
The following command provides information on row storage
relative to the DEPARTMENTS_INDEX index of the sample personnel
database:
$ RMU/ANALYZE/PLACEMENT MF_PERSONNEL.RDB DEPARTMENTS_INDEX
7 – Backup
There are three RMU Backup commands, as follows:
o An RMU Backup command without the After_Journal qualifier
creates a database backup file.
o An RMU Backup command with the After_Journal qualifier creates
a backup of the after-image journal (.aij) file. The .aij file
can reside on disk or on tape. The RMU Backup command with
the After_Journal qualifier supports a two-stage journaling
technique that saves disk space and creates a backup journal
on tape.
o An RMU Backup command with the Plan qualifier allows you to
execute a List_Plan previously created with a parallel backup
operation. This form of the Backup command does not accept a
database name as a parameter. Instead, it requires the name of
a list plan.
7.1 – Database
Creates a backup copy of the database and places it in a file. If
necessary, you can later use the RMU Restore command to restore
the database to the condition it was in at the time of the backup
operation.
7.1.1 – Description
The RMU Backup command copies information contained in a database
to a file. It provides a number of options that allow you to
determine the following:
o Whether to perform a parallel backup operation.
When you specify a parallel backup operation, you must back up
to tape or multiple disks. The Parallel Backup Monitor allows
you to monitor the progress of a parallel backup operation.
o Whether to back up the database to disk or tape.
o The extent (how much of the database) to back up.
The backup operation uses a multithreaded process to optimize
the performance of the backup operation. See the Oracle Rdb
Guide to Database Maintenance for a complete description of how
multithreading works.
A parallel backup operation, in addition to using multithreaded
processes, uses a coordinator executor and multiple worker
executors (subprocesses) to enhance the speed of the backup
operation. You can also direct each worker executor to run on
a different node within a cluster to further enhance the speed
of the operation. You must have Oracle SQL/Services installed and
running to perform a parallel backup operation.
See the Oracle Rdb Guide to Database Maintenance for information
on when a parallel backup operation is most useful.
Use the Parallel qualifier to indicate to Oracle RMU that you
want to perform a parallel backup operation. Use the Noexecute
and List_Plan qualifiers to generate a Backup plan file. A Backup
plan file records the backup options and specifications you enter
on the command line in a text file. You can edit this text file
to fine-tune your parallel backup operation and execute it, as
needed, with the RMU Backup Plan command. Use the Statistics
option to the Parallel qualifier if you want to monitor the
progress of the parallel backup operation with the Parallel
Backup Monitor. See the description of the Parallel, List_Plan,
and Noexecute qualifiers, and the RMU Backup Plan command for
details.
You cannot use the Parallel Backup Monitor to monitor the
progress of a non-parallel backup operation. However, you can
achieve a close approximation of this by specifying the Executor_
Count=1 and the Statistics options with the Parallel qualifier.
This results in a parallel backup operation with one executor
and one controller that you can monitor with the Parallel Backup
Monitor.
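For example, a command similar to the following (a sketch; the
Parallel qualifier and its options are described later in this
entry, and MFP_BACKUP.RBF is a hypothetical backup file name)
produces a single-executor parallel backup operation that the
Parallel Backup Monitor can track:
$ RMU/BACKUP/PARALLEL=(EXECUTOR_COUNT=1,STATISTICS) -
_$ MF_PERSONNEL.RDB MFP_BACKUP.RBF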
Both parallel and non-parallel backup operations allow you to
perform different types of backup operations with respect to the
portions of the database to be backed up, as described in RMU
Backup Options.
Table 4 RMU Backup Options
o Full backup operation
- Complete (all areas): Copies the database root (.rdb) file
and all the database pages in all the storage areas in the
database. This is the default backup operation; because it
is the default, no qualifiers are needed to specify it.
Note that you must use this type of backup prior to
upgrading to a newer version of Oracle Rdb.
- By-area (selected areas): Copies the database root (.rdb)
file and backs up only the database pages in the storage
areas that you specify on the backup command line. All the
storage areas in the database are backed up only if you
specify them all (or perform a full and complete backup
operation). Use the Include or Exclude qualifier to specify
the storage areas for a full by-area backup operation.
o Incremental backup operation
- Complete (all areas): Copies the database root file and all
database pages that have been updated since the latest full
backup operation. Use the Incremental (or
Incremental=Complete) qualifier to specify an incremental
and complete backup operation.
- By-area (selected areas): Copies the database root (.rdb)
file and only the database pages for the specified storage
areas that have changed since the latest full backup
operation. Use the Include or Exclude qualifier along with
the Incremental=By_Area qualifier to specify an
incremental, by-area, backup operation.
Oracle Corporation recommends that you use a full backup
operation to back up a database if you have made changes in the
physical or logical design. Performing an incremental backup
operation under these circumstances can lead to the inability to
recover the database properly.
If you choose to perform a by-area backup operation, your
database can be fully recovered after a system failure only
if after-image journaling is enabled on the database. If your
database has both read/write and read-only storage areas but does
not have after-image journaling enabled, you should do complete
backup operations (backup operations on all the storage areas
in the database) at all times. Doing complete backup operations
when after-image journaling is not enabled ensures that you can
recover the entire database to its condition at the time of the
previous backup operation.
When a full backup file is created for one or more storage
areas, the date and time of the last full backup file created
for those storage areas (as recorded in the backup (.rbf) file)
is updated. You can display the date and time of the last full
backup operation on each of the storage areas in a database by
executing an RMU Dump command with the Header qualifier on the
latest backup (.rbf) file for the database. The date and time
displayed by this command is the date and time of the last full
backup operation performed for the area.
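For example, a command similar to the following (a sketch;
MFP_FULL.RBF is a hypothetical backup file name) displays the
header of a backup file, including the date and time of the last
full backup operation for each area:
$ RMU/DUMP/HEADER MFP_FULL.RBF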
Note that an incremental backup operation on a storage area does
not update the date and time for the last full backup operation
performed on the storage area that is recorded in the backup
file.
In the event of subsequent damage to the database, you can
specify backup files in an RMU Restore command to restore the
database to the condition it was in when you backed it up.
The RMU Backup command writes backup files in compressed format
to save space. Available or free space in the database root
(.rdb) file and on each database page in a storage area (.rda)
file is not written to the backup file.
NOTE
Use only the RMU Backup command to back up all Oracle Rdb
databases. Do not back up a database by using any other
method (such as the DCL BACKUP command). The database root
of a database is updated only when the RMU Backup command is
used.
For detailed information on backing up a database to tape, see
the Oracle Rdb Guide to Database Maintenance.
7.1.2 – Format
RMU/Backup root-file-spec backup-file-spec
Command Qualifiers                        x Defaults
/[No]Accept_Label x /Noaccept_Label
/[No]Acl x /Acl
/Active_IO=max-writes x /Active_IO=3
/Allocation=blocks x None
/Block_Size=integer x See description
/[No]Checksum_Verification x /Checksum_Verification
/[No]Compression[=options] x /Nocompression
/Crc[=Autodin_II] x See description
/Crc=Checksum x See description
/Nocrc x See description
/[No]Database_Verification x /Database_Verification
/Density=(density-value,[No]Compaction) x See description
/Disk_File[=options] x None
/Encrypt=({Value=|Name=}[,Algorithm=]) x See description
/Exclude[=storage-area[,...] ] x See description
/[No]Execute x See description
/Extend_Quantity=number-blocks x /Extend_Quantity=2048
/[No]Group_Size=interval x See description
/Include[=storage-area[,...] ] x See description
/[No]Incremental x /Noincremental
/Incremental={By_area|Complete} x None
/Journal=file-name x See description
/Label=(label-name-list) x See description
/Librarian[=options] x None
/List_Plan=output-file x See description
/Loader_Synchronization[=Fixed] x See description
/Lock_Timeout=seconds x See description
/[No]Log[=Brief|Full] x Current DCL verify switch
/Master x See description
/[No]Media_Loader x See description
/No_Read_Only x See description
/[No]Record x Record
/[No]Online x /Noonline
/Owner=user-id x See description
/Page_Buffers=number-buffers x /Page_Buffers=2
/Parallel=(Executor_Count=n[,options]) x See description
/Prompt={Automatic|Operator|Client} x See description
/Protection[=file-protection] x See description
/[No]Quiet_Point x /Quiet_Point
/Reader_Thread_Ratio=integer x See description
/Restore_Options=file-name x None
/[No]Rewind x /Norewind
/[No]Scan_Optimization x See description
/Tape_Expiration=date-time x The current time
/Threads=n x See description
7.1.3 – Parameters
7.1.3.1 – root-file-spec
The name of the database root file. The root file name is also
the name of the database. The default file extension is .rdb.
7.1.3.2 – backup-file-spec
The file specification for the backup file. The default file
extension is .rbf. Depending on whether you are performing a
backup operation to magnetic tape, disk, or multiple disks, the
backup file specification should be specified as follows:
o If you are backing up to magnetic tape
- Oracle Corporation recommends that you supply a backup
file name that is 17 or fewer characters in length. File
names longer than 17 characters might be truncated. See
the Usage_Notes help entry under this command for more
information about backup file names that are longer than 17
characters.
- If you use multiple tape drives, the backup-file-spec
parameter must be provided with (and only with) the first
tape drive name. Additional tape drive names must be
separated from the first and subsequent tape drive names
with commas.
See the Oracle Rdb Guide to Database Maintenance for more
information about using multiple tape drives.
o If you are backing up to multiple or single disk files
- It is good practice to write backup files to a device other
than the devices where the database root, storage area, and
snapshot files of the database are located. This way, if
there is a problem with the database disks, you can still
restore the database from a backup file.
- If you use multiple disk files, the backup-file-spec
parameter must be provided with (and only with) the first
disk device name. Additional disk device names must be
separated from the first and subsequent disk device names
with commas. You must include the Disk_File qualifier. For
example:
$ RMU/BACKUP/DISK_FILE MF_PERSONNEL.RDB -
_$ DEVICE1:[DIRECTORY1]MFP.RBF,DEVICE2:[DIRECTORY2]
As an alternative to listing the disk device names on
the command line (which, if you use several devices, can
exceed the line-limit length for a command line), you can
specify an options file in place of the backup-file-spec.
For example:
$ RMU/BACKUP/DISK_FILE LARGE_DB "@DEVICES.OPT"
The contents of devices.opt might appear as follows:
DEVICE1:[DIRECTORY1]LARGE_DB.RBF
DEVICE2:[DIRECTORY2]
The resulting backup files created from such an options
file would be:
DISK1:[DIRECTORY1]LARGE_DB.RBF
DISK2:[DIRECTORY2]LARGE_DB01.RBF
Note that the same directory must exist on each device
before you issue the command. Also, if you forget to
specify the Disk_File qualifier, you receive an error
message similar to the following:
$ RMU/BACKUP MF_PERSONNEL DEVICE1:[DIRECTORY1]MFP.RBF, -
_$ DEVICE2:[DIRECTORY2]
%RMU-F-NOTBACFIL, DEVICE1:[DIRECTORY1]MFP.RBF; is not a valid
backup file
%RMU-F-FTL_BCK,Fatal error for BACKUP operation at 2-MAY-2001
09:44:57.04
7.1.4 – Command Qualifiers
7.1.4.1 – Accept Label
Accept_Label
Specifies that RMU Backup should keep the current tape label it
finds on a tape during a backup operation even if that label
does not match the default label or that specified with the
Label qualifier. Operator notification does not occur unless
the tape's protection, owner, or expiration date prohibit writing
to the tape. However, a message is logged (assuming logging is
enabled) and written to the backup journal file (assuming you
have specified the Journal qualifier) to indicate that a label is
being preserved and which drive currently holds that tape.
This qualifier is particularly useful when your backup operation
employs numerous previously used (and thus labeled) tapes and
you want to preserve the labels currently on the tapes. However,
you are responsible for remembering the order in which tapes were
written. For this reason, it is a good idea to use the Journal
qualifier when you use the Accept_Label qualifier.
If you do not specify this qualifier, the default behavior of RMU
Backup is to notify the operator each time it finds a mismatch
between the current label on the tape and the default label (or
the label you specify with the Label qualifier).
See the description of the Labels qualifier under this command
for information on default labels. See How Tapes are Relabeled
During a Backup Operation in the Usage_Notes help entry under
this command for a summary of which labels are applied under a
variety of circumstances.
7.1.4.2 – Acl
Acl
Noacl
Specifies whether to back up the root file access control list
(ACL) for a database when you back up the database. The root file
ACL controls users' privileges to issue Oracle RMU commands.
If you specify the Acl qualifier, the root file ACL will be
backed up with the database.
If you specify the Noacl qualifier, the root file ACL will not
be backed up with the database. The Noacl qualifier can be
useful if you plan to restore the database on a system where
the identifiers in the current root file ACL will not be valid.
The default is the Acl qualifier.
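For example, the following command (a sketch using the sample
MF_PERSONNEL database; the backup file name is hypothetical)
omits the root file ACL from the backup file, which can be useful
before restoring the database on a system where the current ACL
identifiers are not valid:
$ RMU/BACKUP/NOACL MF_PERSONNEL.RDB MFP_NOACL.RBF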
7.1.4.3 – Active IO
Active_IO=max-writes
Specifies the maximum number of write operations to a backup
device that the RMU Backup command will attempt simultaneously.
This is not the maximum number of write operations in progress;
that value is the product of active system I/O operations and the
number of devices being written to simultaneously.
The value of the Active_IO qualifier can range from 1 to 5. The
default value is 3. Values larger than 3 can improve performance
with some tape drives.
7.1.4.4 – Allocation
Allocation=blocks
Specifies the size, in blocks, to which the backup file is
initially allocated. The minimum value for the blocks parameter
is 1; the maximum value allowed is 2147483647. If you do not
specify the Allocation qualifier, the Extend_Quantity value
effectively controls the file's initial allocation.
This qualifier cannot be used with backup operations to tape.
7.1.4.5 – Block Size
Block_Size=integer
Specifies the maximum record size for the backup file. The size
can vary between 2048 and 65,024 bytes. The default value is
device dependent. The appropriate block size is a compromise
between tape capacity and error rate. The block size you specify
must be larger than the largest page length in the database.
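For example, the following command (a sketch; TAPE1 is a
hypothetical tape device, and the value chosen must suit your
drive and exceed the largest database page length) sets a
32,256-byte maximum record size:
$ RMU/BACKUP/BLOCK_SIZE=32256 MF_PERSONNEL.RDB TAPE1:MFP.RBF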
7.1.4.6 – Checksum Verification
Checksum_Verification
Nochecksum_Verification
The Checksum_Verification qualifier requests that the RMU Backup
command verify the checksum stored on each database page before
the backup operation is applied, thereby providing end-to-end
error detection on the database I/O. The default value is
Checksum_Verification.
Oracle Corporation recommends that you accept this default
behavior for your applications. The default behavior prevents
you from including corrupt database pages in backup files
and optimized .aij files. Without the checksum verifications,
corrupt data pages in these files are not detected when the files
are restored. The corruptions on the restored page may not be
detected until weeks or months after the backup file is created,
or it is possible the corruption may not be detected at all.
The Checksum_Verification qualifier uses additional CPU resources
but provides an extra measure of confidence in the quality of the
data that is backed up.
Note that if you specify the Nochecksum_Verification qualifier,
and undetected corruptions exist in your database, the corruptions
are included in your backup file and are restored when you restore
the backup
file. Such a corruption might be difficult to recover from,
especially if it is not detected until long after the restore
operation is performed.
7.1.4.7 – Compression
Compression=LZSS
Compression=Huffman
Compression=ZLIB=level
Nocompression
Allows you to specify the compression method to use before
writing data to the backup file. This reduces performance, but
may be justified when the backup file is a disk file, or is being
backed up over a busy network, or is being backed up to a tape
drive that does not do its own compression. You probably do not
want to specify the Compression qualifier when you are backing up
a database to a tape drive that does its own compression; in some
cases doing so can actually result in a larger file.
If you specify the Compression qualifier without a value, the
default is COMPRESSION=ZLIB=6.
The level value (ZLIB=level) is an integer between 1 and 9
specifying the relative compression level with one being the
least amount of compression and nine being the greatest amount
of compression. Higher levels of the compression use increased
CPU time while generally providing better compression. The
default compression level of 6 is a balance between compression
effectiveness and CPU consumption.
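For example, the following command (a sketch using the sample
MF_PERSONNEL database; the backup file name is hypothetical)
requests the maximum ZLIB compression level at the cost of
additional CPU time:
$ RMU/BACKUP/COMPRESSION=ZLIB=9 MF_PERSONNEL.RDB MFP_COMP.RBF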
OLDER ORACLE RDB 7.2 RELEASES AND COMPRESSED RBF FILES
Releases of Oracle Rdb prior to V7.2.1 are unable to read RBF
files compressed with the ZLIB algorithm. For those releases
to read a compressed backup, the backup must be made with
/COMPRESSION=LZSS or /COMPRESSION=HUFFMAN explicitly specified
(because the default compression algorithm has changed from
LZSS to ZLIB). Oracle Rdb Version 7.2.1 is able to read
compressed backups made with prior releases using the LZSS or
HUFFMAN algorithms.
7.1.4.8 – Crc[=Autodin II]
CRC[=AUTODIN_II]
Uses the AUTODIN-II polynomial for the 32-bit cyclic redundancy
check (CRC) calculation and provides the most reliable
end-to-end error detection. This is the default for NRZ/PE
(800/1600 bits/inch) tape drives.
If you enter only Crc as the qualifier, RMU Backup assumes you
are specifying Crc=Autodin_II.
7.1.4.9 – Crc=Checksum
Crc=Checksum
Uses one's complement addition, which is the same computation
used to do a checksum of the database pages on disk. This is the
default for GCR (6250 bits/inch) tape drives and for TA78, TA79,
and TA81 tape drives.
The Crc=Checksum qualifier allows detection of data errors.
7.1.4.10 – Nocrc
Nocrc
Disables end-to-end error detection. This is the default for TA90
(IBM 3480 class) drives.
NOTE
The overall effect of the Crc=Autodin_II, Crc=Checksum, and
Nocrc qualifier defaults is to make tape reliability equal
to that of a disk. If you retain your tapes longer than 1
year, the Nocrc default might not be adequate. For tapes
retained longer than 1 year, use the Crc=Checksum qualifier.
If you retain your tapes longer than 3 years, you should
always use the Crc=Autodin_II qualifier.
Tapes retained longer than 5 years could be deteriorating
and should be copied to fresh media.
See the Oracle Rdb Guide to Database Maintenance for details
on using the Crc qualifiers to avoid underrun errors.
7.1.4.11 – Database Verification
Database_Verification
Nodatabase_Verification
The RMU /BACKUP command performs a limited database root
file verification at the start of the backup operation. This
verification is intended to help prevent backing up a database
with various detectable corruptions or inconsistencies of the
root file or associated database structures. However, in some
limited cases, it can be desirable to avoid these checks.
The qualifier /NODATABASE_VERIFICATION may be specified to avoid
the database root file verification at the start of the backup.
The default behavior is /DATABASE_VERIFICATION. Oracle strongly
recommends accepting the default of /DATABASE_VERIFICATION.
7.1.4.12 – Density
Density=(density-value,[No]Compaction)
Specifies the density at which the output volume is to be
written. The default value is the format of the first volume (the
first tape you mount). You do not need to specify this qualifier
unless your tape drives support data compression or more than one
recording density.
The Density qualifier is applicable only to tape drives. RMU
Backup returns an error message if this qualifier is used and the
target device is not a tape drive.
If you specify a density value, RMU Backup assumes that all tape
drives can accept that value.
If your systems are running OpenVMS versions prior to 7.2-1,
specify the Density qualifier as follows:
o For TA90E, TA91, and TA92 tape drives, specify the number in
bits per inch as follows:
- Density = 70000 to initialize and write tapes in the
compacted format.
- Density = 39872 or Density = 40000 for the noncompacted
format.
o For SCSI (Small Computer System Interface) tape drives,
specify Density = 1 to initialize and write tapes by using
the drive's hardware data compression scheme.
o For other types of tape drives you can specify a supported
density value between 800 and 160000 bits per inch.
o For all tape drives, specify Density = 0 to initialize and
write tapes at the drive's standard density.
Do not use the Compaction or NoCompaction keyword for systems
running OpenVMS versions prior to 7.2-1. On these systems,
compression is determined by the density value and cannot be
specified.
Oracle RMU supports the OpenVMS tape density and compression
values introduced in OpenVMS Version 7.2-1. The following table
lists the added density values supported by Oracle RMU.
DEFAULT 800 833 1600
6250 3480 3490E TK50
TK70 TK85 TK86 TK87
TK88 TK89 QIC 8200
8500 8900 DLT8000
SDLT SDLT320 SDLT600
DDS1 DDS2 DDS3 DDS4
AIT1 AIT2 AIT3 AIT4
LTO2 LTO3 COMPACTION NOCOMPACTION
If the OpenVMS Version 7.2-1 density values and the previous
density values are the same (for example, 800, 833, 1600, 6250),
the specified value is interpreted as an OpenVMS Version 7.2-1
value if the tape device driver accepts them, and as a previous
value if the tape device driver accepts previous values only.
For the OpenVMS Version 7.2-1 values that accept tape compression
you can use the following syntax:
/DENSITY = (new_density_value,[No]Compaction)
In order to use the Compaction or NoCompaction keyword, you must
use one of the following density values that accepts compression:
DEFAULT 3480 3490E 8200
8500 8900 TK87 TK88
TK89 DLT8000 SDLT SDLT320
AIT1 AIT2 AIT3 AIT4
DDS1 DDS2 DDS3 DDS4
SDLT600 LTO2 LTO3
Refer to the OpenVMS documentation for more information about
density values.
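For example, on a system running OpenVMS Version 7.2-1 or later,
a command similar to the following (a sketch; TAPE1 is a
hypothetical tape device) writes a compacted backup at TK89
density:
$ RMU/BACKUP/DENSITY=(TK89,COMPACTION) MF_PERSONNEL.RDB TAPE1:MFP.RBF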
7.1.4.13 – Disk File
Disk_File[=(options)]
Specifies that you want to perform a multithreaded backup
operation to one or more disk files. You can use the following
keywords with the Disk_File
qualifier:
o Writer_Threads
Specifies the number of threads that Oracle RMU should use
when performing a multithreaded backup operation to disk
files. You can specify no more than one writer thread per
device specified on the command line (or in the command
parameter options file). By default, one writer thread is
used.
This qualifier and all qualifiers that control tape operations
(Accept_Label, Density, Label, Loader_Synchronization, Master,
Media_Loader, Rewind, and Tape_Expiration) are mutually
exclusive.
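For example, a command similar to the following (a sketch; the
device and directory names are hypothetical) uses two writer
threads, one per output device:
$ RMU/BACKUP/DISK_FILE=(WRITER_THREADS=2) MF_PERSONNEL.RDB -
_$ DEVICE1:[DIRECTORY1]MFP.RBF,DEVICE2:[DIRECTORY2]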
7.1.4.14 – Encrypt
Encrypt=({Value=|Name=}[,Algorithm=])
The Encrypt qualifier encrypts the save set file of a database
backup.
Specify a key value as a string or the name of a predefined
key. If no algorithm name is specified, the default is DESCBC.
For details on the Value, Name, and Algorithm parameters, see
HELP ENCRYPT.
This feature requires the OpenVMS Encrypt product to be installed
and licensed on this system.
7.1.4.15 – Exclude
Exclude[=storage-area[,...]]
Specifies the storage areas that you want to exclude from the
backup file. If you specify neither the Exclude nor the Include
qualifier with the RMU Backup command, or if you specify the
Exclude qualifier but do not specify a list of storage area
names, a full and complete backup operation is performed on the
database. This is the default behavior.
If you specify a list of storage area names with the Exclude
qualifier, RMU Backup excludes those storage areas from the
backup file and includes all of the other storage areas. If
you specify more than one database storage area in the Exclude
qualifier, place a comma between each storage area name and
enclose the list of names within parentheses.
Use the Exclude=* qualifier to indicate that you want only the
database root file to be backed up. Note that a backup file
created with the Exclude=* qualifier can be restored only with
the RMU Restore Only_Root command.
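For example, the following command (a sketch using the sample
MF_PERSONNEL database; the backup file name is hypothetical)
backs up only the database root file:
$ RMU/BACKUP/EXCLUDE=* MF_PERSONNEL.RDB MFP_ROOT.RBF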
You can use an indirect command file as shown in the following
example:
$ RMU/BACKUP/EXCLUDE="@EXCLUDE_AREAS.OPT" -
_$ MF_PERSONNEL.RDB PARTIAL_MF_PERS.RBF
%RMU-I-NOTALLARE, Not all areas will be included in this backup file
See the Indirect-Command-Files help entry for more information on
indirect command files.
If you use the Exclude qualifier with a list of storage area
names, your backup file will be a by-area backup file because
the Exclude qualifier causes database storage areas to be
excluded from the backup file. The following example shows the
informational message you receive if you do not back up all of
the areas in the database:
%RMU-I-NOTALLARE, Not all areas will be included in this backup file
By using the RMU Backup and RMU Restore commands, you can back up
and restore selected storage areas of your database. This Oracle
RMU backup and restore by-area feature is designed to:
o Speed recovery when corruption occurs in some (not all) of the
storage areas of your database
o Reduce the time needed to perform backup operations because
some data (data in read-only storage areas, for example) does
not need to be backed up with every backup operation performed
on the database
If you plan to use the RMU Backup and RMU Restore commands to
back up and restore only selected storage areas for a database,
you should perform full and complete backup operations on the
database at regular intervals.
If you plan to back up and restore only selected storage areas of
a database, Oracle Corporation also strongly recommends that you
enable after-image journaling for the database. This ensures that
you can recover all of the storage areas in your database if a
system failure occurs.
If you do not have after-image journaling enabled and one or
more of the areas restored with the RMU Restore command are not
consistent with the unrestored storage areas, Oracle Rdb does
not allow any transaction to use the storage areas that are not
consistent in the restored database. In this situation, you can
return to a working database by restoring the database, using
the backup file from the last full and complete backup operation
of the database storage areas. However, any changes made to the
database since the last full and complete backup operation are
not recoverable.
If you do have after-image journaling enabled, use the
RMU Recover command (or the Restore command with the Recover
qualifier) to apply transactions from the .aij file to storage
areas that are not consistent after the RMU Restore command
completes; that is, storage areas that are not in the same state
as the rest of the restored database. You cannot use these areas
until you recover the database. When the RMU Recover command
completes, your database will be consistent and usable.
Using the Exclude or Include qualifier gives you greater
flexibility for your backup operations, along with increased
file management and recovery complexity. Users of large databases
might find the greater flexibility of the backup operation to
be worth the cost of increased file management and recovery
complexity.
You cannot specify the Exclude=area-list and Include=area-list
qualifiers in the same RMU Backup command.
7.1.4.16 – Execute
Execute
Noexecute
Use the Execute and Noexecute qualifiers with the Parallel and
List_Plan qualifiers to specify whether or not the backup plan
file is to be executed.
The following list describes the effects of the Execute and
Noexecute qualifiers:
o Execute
Creates, verifies, and executes a backup list plan
o Noexecute
Creates and verifies, but does not execute a backup list plan.
The verification determines such things as whether the storage
areas listed in the plan file exist in the database.
The Execute and Noexecute qualifiers are only valid when the
Parallel and List_Plan qualifiers are also specified.
If you specify the Execute or Noexecute qualifier without the
List_Plan and Parallel qualifiers, RMU Backup generates and
verifies a temporary backup list plan, but then deletes the
backup list plan and returns a fatal error message.
By default, the backup plan file is executed when you issue an
RMU Backup command with the Parallel and List_Plan qualifiers.
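For example, the following hypothetical command (the plan file
name MFP_PLAN.DAT is assumed) creates and verifies a backup plan
file without executing it:
$ RMU/BACKUP/PARALLEL=(EXECUTOR_COUNT=2)/LIST_PLAN=MFP_PLAN.DAT -
_$ /NOEXECUTE MF_PERSONNEL.RDB MF_PERSONNEL.RBF
The verified plan file can be edited and executed later with the
RMU Backup Plan command.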
7.1.4.17 – Extend Quantity
Extend_Quantity=number-blocks
Sets the size, in blocks, by which the backup file can be
extended. The minimum value for the number-blocks parameter is
1; the maximum value is 65535. If you do not specify the Extend_
Quantity qualifier, an on-disk backup file is extended in units
of 2048 blocks by default.
This qualifier cannot be used with backup operations to tape.
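For example, the following hypothetical command extends the
on-disk backup file in units of 4096 blocks instead of the
default 2048:
$ RMU/BACKUP/EXTEND_QUANTITY=4096 MF_PERSONNEL.RDB MF_PERSONNEL.RBF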
7.1.4.18 – Group Size
Group_Size=interval
Nogroup_Size
Specifies the frequency at which XOR recovery blocks are written
to tape. The group size can vary from 0 to 100. Specifying a
group size of zero or specifying the Nogroup_Size qualifier
results in no XOR recovery blocks being written. The Group_Size
qualifier is only applicable to tape, and its default value is
10. RMU Backup returns an error message if this qualifier is used
and the target device is not a tape device.
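For example, the following hypothetical command (MUA0: is an
assumed tape drive) sets the XOR recovery-block group size to 20
for a tape backup:
$ RMU/BACKUP/REWIND/GROUP_SIZE=20 MF_PERSONNEL.RDB MUA0:MFP.RBF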
7.1.4.19 – Include
Include[=storage-area[,...]]
Specifies storage areas that you want to include in the backup
file. If you specify neither the Include nor the Exclude
qualifier with the RMU Backup command, a full and complete
backup operation is performed on the database by default. You
can specify the Include=* qualifier to indicate that you want
all storage areas included in the backup file, but this is
unnecessary because this is the default behavior. The default
behavior also applies when you specify the Include qualifier
without a list of storage area names.
If you specify a list of storage area names with the Include
qualifier, Oracle RMU includes those storage areas in the backup
operation and excludes all of the other storage areas. If you
specify more than one database storage area in the Include
qualifier, place a comma between each storage area name and
enclose the list of names within parentheses.
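For example, the following hypothetical command backs up only two
storage areas of the mf_personnel database:
$ RMU/BACKUP/INCLUDE=(EMPIDS_LOW,EMPIDS_MID) -
_$ MF_PERSONNEL.RDB PARTIAL_MF_PERS.RBF
%RMU-I-NOTALLARE, Not all areas will be included in this backup file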
You cannot specify the Exclude=area-list and Include=area-list
qualifiers in the same RMU Backup command.
If you use the Include qualifier, your backup operation will be
a by-area backup operation because the areas not specified with
the Include qualifier are excluded from the backup file. If you
do not back up all of the areas in the database, you receive the
following informational message:
%RMU-I-NOTALLARE, Not all areas will be included in this backup file
By using the RMU Backup and RMU Restore commands, you can back up
and restore selected storage areas of your database. This Oracle
RMU backup and restore by-area feature is designed to:
o Speed recovery when corruption occurs in some (not all) of the
storage areas of your database
o Reduce the time needed to perform backup operations because
some data (data in read-only storage areas, for example) does
not need to be backed up with every backup operation performed
on the database
See the description of the Exclude qualifier for information on
the implications of using these commands to back up and restore
selected areas of your database.
The Include qualifier can be used with indirect file references.
See the Indirect-Command-Files help entry for more information.
7.1.4.20 – Incremental
Incremental[=By_Area or Complete]
Noincremental
Determines the extent of the backup operation to be performed.
The four possible options are:
o Noincremental
If you do not specify any of the possible Incremental
qualifier options, the default is the Noincremental qualifier.
With the Noincremental qualifier, a full backup operation is
performed on the database.
o Incremental
If you specify the Incremental qualifier, an incremental
backup of all the storage areas that have changed since the
last full and complete backup operation on the database is
performed.
o Incremental=By_Area
If you specify the Incremental=By_Area qualifier, an
incremental backup operation is performed. The Incremental=By_
Area qualifier backs up those database pages that have
changed in each selected storage area since the last full
backup operation was performed on the area. The last full
backup operation performed on the area is the later of the
following:
- The last full and complete backup operation performed on
the database
- The last full by-area backup operation performed on the
area
With an incremental by-area backup operation, each storage
area backed up might contain changes for a different time
interval, which can make restoring multiple storage areas more
complex.
o Incremental=Complete
If you specify the Incremental=Complete qualifier, an
incremental backup operation on all of the storage areas
that have changed since the last full and complete backup
operation on the database is performed. Selecting the
Incremental=Complete qualifier is the same as selecting the
Incremental qualifier.
Following a full database backup operation, each subsequent
incremental backup operation replaces all previous incremental
backup operations.
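For example, the following hypothetical command performs an
incremental backup of all storage areas that have changed since
the last full and complete backup operation:
$ RMU/BACKUP/INCREMENTAL MF_PERSONNEL.RDB MFP_INC.RBF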
The following two messages are meant to provide an aid for
designing more effective backup strategies. They are printed
as part of the per-area summary statistics, and they provide a
guide to the benefit of the incremental operation:
o "Est. cost to backup relative to a full backup is x.yy"
o "Est. cost to restore relative to a full restore is x.yy"
These estimates are only approximate and reflect the disk
input/output (I/O) cost for the backup or restore operations
of that area. Tape I/O, CPU, and all other costs are ignored.
The disk I/O costs take into account the number of I/O operations
needed and the requirement for a disk head seek to perform the
I/O. Each disk type has its own relative costs (transfer rate,
latency, seek time), and the cost of a given sequence of I/Os is
also affected by competition for the disk by other processes.
Consequently, the estimates do not translate directly into "clock
time." But they should nevertheless be useful for determining
the point at which the incremental operation is becoming less
productive.
The relative costs can vary widely, and can be much higher than
1.00. The actual cost depends on the number and location of the
pages backed up. An incremental restore operation must always
follow a full restore operation, so the true cost of restoring
the area is 1.00 higher than reported once that full restore
operation is accounted for. The guideline that
Oracle Corporation recommends is, "Perform full backup operations
when the estimated cost of a restore operation approaches 2.00."
7.1.4.21 – Journal
Journal=file-name
Allows you to specify a journal file to be used to improve
tape performance during a restore operation. (This is not to
be confused with an after-image journal file.)
As the backup operation progresses, RMU Backup creates the
journal file and writes to it a description of the backup
operation containing identification of the tape drive names and
the tape volumes and their contents. The default file extension
is .jnl.
The journal file must be written to disk; it cannot be written to
tape along with the backup file. (You can, however, copy the disk
file to tape after it is written.)
This journal file is used with the RMU Restore and the RMU Dump
Backup commands to optimize their tape utilization.
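For example, the following hypothetical command records the tape
volume information in the journal file MFP_BACKUP.JNL on disk
(MUA0: is an assumed tape drive):
$ RMU/BACKUP/REWIND/JOURNAL=MFP_BACKUP.JNL -
_$ MF_PERSONNEL.RDB MUA0:MFP.RBF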
7.1.4.22 – Label
Label=(label-name-list)
Specifies the 1- to 6-character string with which the volumes
of the backup file are to be labeled. The Label qualifier is
applicable only to tape volumes. You must specify one or more
label names when you use the Label qualifier.
If you do not specify the Label (or Accept_Label) qualifier,
RMU Backup labels the first tape used for a backup operation
with the first 6 characters of the backup file name. Subsequent
default labels are the first 4 characters of the backup file name
appended with a sequential number. For example, if your backup
file is my_backup.rbf, the default tape labels are my_bac, my_
b01, my_b02, and so on.
When you reuse tapes, RMU Backup compares the label currently
on the tape to the label or labels you specify with the Label
qualifier. If there is a mismatch between the existing label and
a label you specify, RMU Backup sends a message to the operator
asking if the mismatch is acceptable (unless you also specify the
Accept_Label qualifier).
If desired, you can explicitly specify the list of tape labels
for multiple tapes. If you list multiple tape label names,
separate the names with commas and enclose the list of names
within parentheses. If you are reusing tapes, be certain that
you load the tapes so that the label RMU Backup expects and the
label on each tape will match, or be prepared for a high level
of operator intervention. Alternatively, you can specify the
Accept_Label qualifier. In this case, the labels you specify with
the Label qualifier are ignored if they do not match the labels
currently on the tapes and no operator intervention occurs.
If you specify fewer labels than are needed, RMU Backup generates
labels based on the format you have specified. For example, if
you specify Label=TAPE01, RMU Backup labels subsequent tapes as
TAPE02, TAPE03, and so on up to TAPE99. Thus, many volumes can
be preloaded in the cartridge stacker of a tape drive. The order
is not important because RMU Backup relabels the volumes. An
unattended backup operation is more likely to be successful if
all the tapes used do not have to be mounted in a specific order.
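For example, the following hypothetical command (MUA0: is an
assumed tape drive) supplies a single label and lets RMU Backup
generate labels for any additional volumes:
$ RMU/BACKUP/REWIND/LABEL=TAPE01 MF_PERSONNEL.RDB MUA0:MFP.RBF
If a second and third volume are needed, they are labeled TAPE02
and TAPE03.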
Once the backup operation is complete, externally mark the tapes
with the appropriate label so that the order can be maintained
for the restore operation. Be particularly careful if you are
allowing RMU Backup to implicitly label second and subsequent
tapes and you are performing an unattended backup operation.
Remove the tapes from the drives in the order in which they
were written. Apply labels to the volumes following the logic
of implicit labeling (for example, TAPE02, TAPE03, and so on).
Oracle recommends you use the Journal qualifier when you employ
implicit labeling in a multidrive, unattended backup operation.
The journal file records the volume labels that were written
to each tape drive. The order in which the labels were written
is preserved in the journal. Use the RMU Dump Backup command to
display a listing of the volumes written by each tape drive.
You can use an indirect file reference with the Label qualifier.
See the Indirect-command-files help entry for more information.
See How Tapes are Relabeled During a Backup Operation in the
Usage_Notes help entry under this command for a summary of which
labels are applied under a variety of circumstances.
7.1.4.23 – Librarian
Librarian=options
Use the Librarian qualifier to back up files to data archiving
software applications that support the Oracle Media Management
interface. The backup file name specified on the command line
identifies the stream of data to be stored in the Librarian
utility. If you supply a device specification or a version
number, it is ignored.
You can use the Librarian qualifier for parallel backup
operations. The Librarian utility should be installed and
available on all nodes on which the parallel backup operation
executes.
The Librarian qualifier accepts the following options:
o Writer_Threads=n
Use the Writer_Threads option to specify the number of backup
data streams to write to the Librarian utility. The value of n
can be from 1 to 99. The default is one writer thread.
Each writer thread for a backup operation manages its own
stream of data. Therefore, each thread uses a unique backup
file name. The unique names are generated by incrementing the
number added to the end of the backup file name. For example,
if you specify the following Oracle RMU Backup command:
$ RMU/BACKUP/LIBRARIAN=(WRITER_THREADS=3)/LOG DB FILENAME.RBF
The following backup file data stream names are generated:
FILENAME.RBF
FILENAME.RBF02
FILENAME.RBF03
Because each data stream must contain at least one database
storage area, and a single storage area must be completely
contained in one data stream, if the number of writer threads
specified is greater than the number of storage areas, it is
set equal to the number of storage areas.
o Trace_file=file-specification
The Librarian utility writes trace data to the specified file.
o Level_Trace=n
Use this option as a debugging tool to specify the level of
trace data written by the Librarian utility. You can use a
pre-determined value of 0, 1, or 2, or a higher value defined
by the Librarian utility. The pre-determined values are:
- Level 0 traces all error conditions. This is the default.
- Level 1 traces the entry and exit from each Librarian
function.
- Level 2 traces the entry and exit from each Librarian
function, the value of all function parameters, and the
first 32 bytes of each read/write buffer, in hexadecimal.
o Logical_Names=(logical_name=equivalence-value,...)
You can use this option to specify a list of process logical
names that the Librarian utility can use to specify catalogs
or archives where Oracle Rdb backup files are stored,
Librarian debug logical names, and so on. See the specific
Librarian documentation for the definition of logical names.
The list of process logical names is defined by Oracle RMU
prior to the start of any Oracle RMU command that accesses the
Librarian utility.
The following OpenVMS logical names must be defined for use with
a Librarian utility before you execute an Oracle RMU backup or
restore operation. Do not use the Logical_Names option provided
with the Librarian qualifier to define these logical names.
o RMU$LIBRARIAN_PATH
This logical name must be defined so that the shareable
Librarian image can be loaded and called by Oracle RMU backup
and restore operations. The translation must include the file
type (for example, .exe), and must not include a version
number. The shareable Librarian image must be an installed
(known) image. See the Librarian utility documentation for
the name and location of this image and how it should be
installed. For a parallel RMU backup, define RMU$LIBRARIAN_
PATH as a system-wide logical name so that the multiple
processes created by a parallel backup can all translate the
logical.
$ DEFINE /SYSTEM /EXECUTIVE_MODE -
_$ RMU$LIBRARIAN_PATH librarian_shareable_image.exe
o RMU$DEBUG_SBT
This logical name is not required. If it is defined, Oracle
RMU will display debug tracing information messages from
modules that make calls to the Librarian shareable image.
For a parallel RMU backup, the RMU$DEBUG_SBT logical should
be defined as a system logical so that the multiple processes
created by a parallel backup can all translate the logical.
The following lines are from a backup plan file created by the
RMU Backup/Parallel/Librarian command:
Backup File = MF_PERSONNEL.RBF
Style = Librarian
Librarian_trace_level = #
Librarian_logical_names = (-
logical_name_1=equivalence_value_1, -
logical_name_2=equivalence_value_2)
Writer_threads = #
The "Style = Librarian" entry specifies that the backup is going
to a Librarian utility. The "Librarian_logical_names" entry is
a list of logical names and their equivalence values. This is an
optional parameter provided so that any logical names used by a
particular Librarian utility can be defined as process logical
names before the backup or restore operation begins. For example,
some Librarian utilities provide support for logical names for
specifying catalogs or debugging.
You cannot use device-specific qualifiers such as Rewind,
Density, or Label with the Librarian qualifier because the
Librarian utility handles the storage media, not Oracle RMU.
7.1.4.24 – List Plan
List_Plan=output-file
Specifies that RMU Backup should generate a backup plan file for
a parallel backup operation and write it to the specified output
file. A backup plan file is a text file that contains qualifiers
that can be specified on the RMU Backup command line. Qualifiers
that you do not specify on the command line appear as comments
in the backup list plan file. In addition, the backup plan file
specifies the worker executor names along with the system node,
storage areas, and tape drives assigned to each worker executor.
You can use the generated backup plan file as a starting point
for building a parallel backup operation to tape that is tuned
for your particular configuration. The output file can be
customized and then used with the RMU Backup Plan command. See
Backup Plan for details.
If you specify the Execute qualifier with the List_Plan
qualifier, the backup plan file is generated, verified, and
executed. If you specify the Noexecute qualifier with the List_
Plan qualifier, the backup plan file is generated and verified,
but not executed.
By default, the backup plan file is executed.
The List_Plan qualifier is only valid when the Parallel qualifier
is also specified.
7.1.4.25 – Loader Synchronization
Loader_Synchronization[=Fixed]
Allows you to preload tapes and preserve tape order to minimize
the need for operator support. When you specify the Loader_
Synchronization qualifier and specify multiple tape drives,
the backup operation writes to the first set of tape volumes
concurrently, then waits until each tape in the set is finished
before assigning the next set of tape volumes. This ensures
that the tape order can be preserved in the event that a restore
operation from these tapes becomes necessary.
One disadvantage of using the Loader_Synchronization qualifier
with the Label qualifier is that, because not all tape threads
back up equal amounts of data, some threads may not need a
subsequent tape to back up their assigned data. To preserve the
tape order, operator intervention may be needed to load the tapes
in stages as backup threads become inactive.
Use the keyword Fixed to force the assignment of tape labels to
the drives regardless of how many tapes each drive actually uses.
The Loader_Synchronization qualifier does result in reduced
performance. For maximum performance, no drive should remain
idle, and the next identified volume should be placed on the
first drive that becomes idle. However, because the order in
which the drives become idle depends on many uncontrollable
factors and cannot be predetermined, without the Loader_
Synchronization qualifier, the drives cannot be preloaded with
tapes. (If you do not want to relabel tapes, you might find that
the Accept_Label qualifier is a good alternative to using the
Loader_Synchronization qualifier. See the description of the
Accept_Label qualifier for details.)
Because the cost of using the Loader_Synchronization qualifier is
dependent on the hardware configuration and the system load, the
cost is unpredictable. A 5% to 20% additional elapsed time for
the operation is typical. You must determine whether the benefit
of a lower level of operator support compensates for the loss of
performance. The Loader_Synchronization qualifier is most useful
for large backup operations.
See the Oracle Rdb Guide to Database Maintenance for more
information on using the Loader_Synchronization qualifier,
including information on when this qualifier might lead to
unexpected results, and details on how this qualifier interacts
with other RMU Backup command qualifiers.
For very large backup operations requiring many tape volumes,
managing the physical marking of tape volumes can be difficult.
In such a case, you might consider using a library or archiving
to automatically manage tape labeling for you.
7.1.4.26 – Lock Timeout
Lock_Timeout=seconds
Determines the maximum time the backup operation will wait for
the quiet-point lock and any other locks needed during online
backup operations. When you specify the Lock_Timeout=seconds
qualifier, you must specify the number of seconds to wait for the
quiet-point lock. If the time limit expires, an error is signaled
and the backup operation fails.
When the Lock_Timeout=seconds qualifier is not specified, the
backup operation will wait indefinitely for the quiet-point lock
and any other locks needed during an online backup operation.
The Lock_Timeout=seconds qualifier is ignored for offline backup
operations.
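For example, the following hypothetical command waits at most
120 seconds for the quiet-point lock before the online backup
operation fails:
$ RMU/BACKUP/ONLINE/LOCK_TIMEOUT=120 MF_PERSONNEL.RDB MFP.RBF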
7.1.4.27 – Log
Log
Log=Brief
Log=Full
Nolog
Specifies whether the processing of the command is reported
to SYS$OUTPUT. Specify the Log qualifier to request that the
progress of the backup operation be written to SYS$OUTPUT,
or the Nolog qualifier to suppress this report. If you specify
the Log=Brief option, which is the default when you specify the
Log qualifier without an option, the log contains the start and
completion time of each storage area. If you specify the Log=Full
option, the log also contains thread assignment and storage area
statistics messages.
If you do not specify the Log or the Nolog qualifier, the default
is the current setting of the DCL verify switch. (The DCL SET
VERIFY command controls the DCL verify switch.)
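For example, the following hypothetical command writes thread
assignment and storage area statistics messages to SYS$OUTPUT:
$ RMU/BACKUP/LOG=FULL MF_PERSONNEL.RDB MFP.RBF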
7.1.4.28 – Master
Master
Controls the assignment of tape drives to output threads by
allowing you to specify a tape drive as a master tape drive. This
is a positional qualifier specified with a tape drive. When the
Master qualifier is used, it must be used on the first tape drive
specified. When the Master qualifier is specified, all additional
tape drives become slaves for that tape drive until the end of
the command line, or until the next Master qualifier, whichever
comes first.
If you specify the Master qualifier (without also specifying the
Loader_Synchronization qualifier) on sets of tape drives, each
master/slave set of tape drives will operate independently of
other master/slave sets. If the Master qualifier is used on a
tape drive that is not physically a master tape drive, the output
performance of the backup operation will decrease.
See the Oracle Rdb Guide to Database Maintenance for complete
details on the behavior of the Master qualifier.
7.1.4.29 – Media Loader
Media_Loader
Nomedia_Loader
Use the Media_Loader qualifier to specify that the tape device
receiving the backup file has a loader or stacker. Use the
Nomedia_Loader qualifier to specify that the tape device does
not have a loader or stacker.
By default, if a tape device has a loader or stacker, RMU Backup
should recognize this fact. However, occasionally RMU Backup
does not recognize that a tape device has a loader or stacker.
Therefore, when the first backup tape fills, RMU Backup issues a
request to the operator for the next tape, instead of requesting
the next tape from the loader or stacker. Similarly, sometimes
RMU Backup behaves as though a tape device has a loader or
stacker when actually it does not.
If you find that RMU Backup is not recognizing that your tape
device has a loader or stacker, specify the Media_Loader
qualifier. If you find that RMU Backup expects a loader or
stacker when it should not, specify the Nomedia_Loader qualifier.
7.1.4.30 – No Read Only
No_Read_Only
Allows you to specify that you do not want any of the read-only
storage areas in your database to be backed up when you back up
the database.
If you do not specify the No_Read_Only qualifier, any read-only
storage area not specified with the Exclude qualifier will be
included in the backup file. The No_Read_Only qualifier allows
you to back up a database with many read-only storage areas
without having to type a long list of read-only storage area
names with the Exclude qualifier.
If you specify the No_Read_Only qualifier, read-only storage
areas are not backed up even if they are explicitly listed by the
Include qualifier.
There is no Read_Only qualifier.
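For example, the following hypothetical command backs up every
storage area except the read-only storage areas:
$ RMU/BACKUP/NO_READ_ONLY MF_PERSONNEL.RDB MFP_RW.RBF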
7.1.4.31 – Record
Record
Norecord
The Record qualifier is set by default. Use the Norecord
qualifier to prevent the database from being updated with recent
backup information; the database then appears as if it had not
been backed up at this time.
The main purpose of this qualifier is to allow a backup of a Hot
Standby database without modifying the database files.
The Norecord qualifier can be negated with the Record qualifier.
7.1.4.32 – Online
Online
Noonline
Specifying the Online qualifier permits users running active
transactions at the time the command is entered to continue
without interruption (unless the Noquiet_Point qualifier is also
specified).
Any subsequent transactions that start during the online backup
operation are permitted as long as the transactions do not
require exclusive access to the database, a table, or any index
structure currently being backed up.
To perform an online database backup operation, snapshots (either
immediate or deferred) must be enabled. You can use the Online
qualifier with the Incremental or Noincremental qualifiers.
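For example, the following hypothetical command backs up the
database while users remain attached:
$ RMU/BACKUP/ONLINE MF_PERSONNEL.RDB MFP_ONLINE.RBF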
If you use the default, the Noonline qualifier, users cannot be
attached to the database. If a user has invoked the database and
the RMU Backup command is entered with the Noonline qualifier (or
without the Online qualifier), an Oracle RMU error results. For
example:
%RMU-I-FILACCERR, error opening database root file DB_DISK:MF_PERSONNEL.RDB;1
-SYSTEM-W-ACCONFLICT, file access conflict
The offline backup process (specified with the Noonline
qualifier) has exclusive access to the database and does not
require snapshot (.snp) files in order to work. The snapshot
files can be disabled when the Noonline qualifier is used.
Oracle Corporation recommends that you close the database (with
the RMU Close command) when you perform the offline backup
operation on a large database. If the database was opened with
the SQL OPEN IS MANUAL statement, the RMU Backup command will
fail unless the RMU Close command is used. If the database was
opened with the SQL OPEN IS AUTOMATIC statement, the RMU Backup
command might fail if the activity level is high (that is, users
might access the database before the database is taken off line).
Issuing the RMU Close command can force the users out of the
database and give the RMU Backup command a chance to start;
however, although recommended, issuing the RMU Close command
is not required in this case.
7.1.4.33 – Owner
Owner=user-id
Specifies the owner of the tape volume set. The owner is the
user who will be permitted to restore the database. The user-id
parameter must be one of the following types of identifier:
o A user identification code (UIC) in [group-name,member-name]
alphanumeric format
o A user identification code (UIC) in [group-number,member-
number] numeric format
o A general identifier, such as SECRETARIES
o A system-defined identifier, such as DIALUP
The Owner qualifier cannot be used with a backup operation to
disk. When used with tapes, the Owner qualifier applies to all
continuation volumes. The Owner qualifier applies to the first
volume only if the Rewind qualifier is also specified.
If the Rewind qualifier is not specified, the backup operation
appends the file to a previously labeled tape, so the first
volume can have a protection different from the continuation
volumes.
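For example, the following hypothetical command (the UIC and tape
drive are assumed) permits only the named user to restore the
database from the tape volume set:
$ RMU/BACKUP/REWIND/OWNER=[DBADMIN,SMITH] -
_$ MF_PERSONNEL.RDB MUA0:MFP.RBF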
See the Oracle Rdb Guide to Database Maintenance for information
on tape label processing.
7.1.4.34 – Page Buffers
Page_Buffers=number-buffers
Specifies the number of disk buffers assigned to each storage
area thread.
The range is 2 to 5 with a default of 2.
The higher values speed up scans for changed pages during an
incremental backup operation, but they exact a cost in memory
usage and larger working set requirements.
7.1.4.35 – Parallel
Parallel=(Executor_Count=n[,options])
Specifies that you want to perform a parallel backup operation.
When you issue an RMU Backup command with the Parallel qualifier,
RMU Backup generates a plan file. This plan file describes how
the parallel backup operation should be executed. If you specify
the Noexecute qualifier, the plan file is generated, but not
executed. If you specify the Execute qualifier (or accept the
default), the plan file is executed immediately after RMU Backup
creates it.
The Executor_Count specifies the number of worker executors you
want to use for the parallel backup operation. The number of
worker executors must be equal to or less than the number of tape
drives you intend to use. If you specify Executor_Count=1, the
result is a non-parallel backup operation that is executed using
the parallel backup procedure, including creation of the plan
file and a dbserver process.
You can specify one, both, or none of the following options:
o Nodes=(node-list)
The Nodes=(node-list) option specifies the names of the nodes
in the cluster where the worker executors are to run. If more
than one node is specified, all nodes must be in the same
cluster and the database must be accessible from all nodes in
the cluster.
In addition, for a backup operation across nodes in a cluster
to be successful, whoever starts SQL/Services must have
proxy access among all nodes involved in the backup operation
(assuming you are using DECnet). For example, if you specify
Nodes=(NODE1, NODE2, NODE3) as an option to the Parallel
qualifier, whoever started SQL/Services must have access
from NODE1 to NODE2, NODE1 to NODE3, NODE2 to NODE1, NODE2 to
NODE3, NODE3 to NODE1, and NODE3 to NODE2.
Separate node names in the node-list with commas. If you do
not specify the Nodes option, all worker executors run on the
node from which the parallel backup plan file is executed.
o Server_Transport=(DECnet|TCP)
To execute a parallel backup operation, SQL/Services must
be installed on your system. By default, the RMU Backup
command uses DECnet to access SQL/Services; if DECnet is
not available, RMU Backup tries to use TCP/IP. Use the
Server_Transport option to set the default behavior such
that RMU Backup tries TCP/IP first. You can also use the
SQL_NETWORK_TRANSPORT_TYPE configuration parameter to modify
the default behavior. See the Oracle Rdb Installation and
Configuration Guide for details on setting the SQL_NETWORK_
TRANSPORT_TYPE configuration parameter.
o Statistics
Specifies that you want RMU Backup to gather statistics
on the parallel backup operation for use with the Parallel
Backup Monitor. You must invoke the Parallel Backup Monitor, a
windowing interface, to view these statistics.
Note that during a parallel backup operation, all tape requests
are sent to the operator; the parallel backup operation does not
send tape requests to the user who issues the Backup command.
Therefore, you should issue the DCL REPLY/ENABLE=TAPES command
from the operator's terminal before issuing the RMU Backup
command.
7.1.4.36 – Prompt
Prompt=Automatic
Prompt=Operator
Prompt=Client
Specifies where server prompts are to be sent. When you specify
Prompt=Automatic, prompts are sent to the standard input device,
and when you specify Prompt=Operator, prompts are sent to the
server console. When you specify Prompt=Client, prompts are sent
to the client system.
7.1.4.37 – Protection
Protection[=file-protection]
Specifies the system file protection for the backup file produced
by the RMU Backup command.
The default file protection varies, depending on whether you
back up the file to disk or to tape. This is because tapes do
not allow delete or execute access, and the SYSTEM account
always has both read and write access to tapes. In addition, a
more restrictive class accumulates the access rights of the less
restrictive classes.
If you do not specify the Protection qualifier, the default
protection is as follows:
o S:RWED,O:RE,G,W if the backup is to disk
o S:RW,O:R,G,W if the backup is to tape
If you specify the Protection qualifier explicitly, the tape
adjustments noted in the preceding paragraph are still applied.
Thus, if you specify Protection=(S,O,G:W,W:R), that protection
on tape becomes (S:RW,O:RW,G:RW,W:R).
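The accumulation rule and the tape adjustments can be sketched as
follows. This is an illustrative Python model based only on the
rules stated above, not actual RMU code; the function name and
data layout are invented.

```python
# Illustrative model of how a specified protection is adjusted
# for tape (an assumption based on the rules above, not RMU code).
TAPE_DISALLOWED = {"D", "E"}  # tapes allow no delete or execute access

def tape_protection(spec):
    """spec maps each class to its specified access set, e.g.
    Protection=(S,O,G:W,W:R) is
    {"S": set(), "O": set(), "G": {"W"}, "W": {"R"}}."""
    result = {}
    acc = set()
    # accumulate from the least restrictive class (W) up to the
    # most restrictive class (S)
    for cls in ("W", "G", "O", "S"):
        acc = acc | spec.get(cls, set())
        result[cls] = acc - TAPE_DISALLOWED
    # SYSTEM always has read and write access on tape
    result["S"] = result["S"] | {"R", "W"}
    return result

# Protection=(S,O,G:W,W:R) corresponds to (S:RW,O:RW,G:RW,W:R)
print(tape_protection({"S": set(), "O": set(),
                       "G": {"W"}, "W": {"R"}}))
```

The same model reproduces the default behavior: feeding it the
disk default S:RWED,O:RE,G,W yields the tape default S:RW,O:R,G,W.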
7.1.4.38 – Quiet Point
Quiet_Point
Noquiet_Point
Allows you to specify that the database backup operation is to
occur either immediately or when a quiet point for database
activity occurs. A quiet point is defined as a point where no
active update transactions are in progress in the database.
Therefore, this qualifier is used with the Online qualifier.
When you specify the Noquiet_Point qualifier, RMU Backup proceeds
with the backup operation as soon as the RMU Backup command is
issued, regardless of any update transaction activity in progress
in the database. Because RMU Backup must acquire concurrent-
read locks on all physical and logical areas, the backup
operation will fail if there are any active transactions with
exclusive locks on a storage area. However, once RMU Backup has
successfully acquired all concurrent-read storage area locks it
should not encounter any further lock conflicts. If a transaction
that causes Oracle Rdb to request exclusive locks is started
while the backup operation is proceeding, that transaction will
either wait or receive a lock conflict error, but the RMU Backup
command will continue unaffected.
See the Usage_Notes help entry under this command for
recommendations on using the Quiet_Point and Noquiet_Point
qualifiers.
The default is the Quiet_Point qualifier.
7.1.4.39 – Reader Thread Ratio
Reader_Thread_Ratio=integer
This qualifier has been deprecated. Use the Threads qualifier
instead.
7.1.4.40 – Restore Options
Restore_Options=file-name
Generates an options file designed to be used with the Options
qualifier of the RMU Restore command. If you specify a full
backup operation, all the storage areas will be represented in
the options file. If you specify a by-area backup operation, only
those areas included in the backup will be represented in the
options file.
The Restore_Options file is created at the end of the backup
operation.
By default, a Restore_Options file is not created. If you
specify the Restore_Options qualifier and a file, but not a file
extension, RMU Backup uses an extension of .opt by default.
7.1.4.41 – Rewind
Rewind
Norewind
Specifies that the magnetic tape that contains the backup file
will be rewound before processing begins. The tape will be
initialized according to the Label and Density qualifiers. The
Norewind qualifier is the default and causes the backup file to
be created starting at the current logical end-of-tape (EOT).
The Rewind and Norewind qualifiers are applicable only to tape
devices. RMU Backup returns an error message if these qualifiers
are used and the target device is not a tape device.
7.1.4.42 – Scan Optimization
Scan_Optimization
Noscan_Optimization
Specifies whether or not RMU Backup should employ scan
optimizations during incremental backup operations.
By default, RMU Backup optimizes incremental backup operations
by scanning regions of the database that have been updated since
the last full backup operation. The identity of these regions
is stored in the database. Only these regions need to be scanned
for updates during an incremental backup operation. This provides
a substantial performance improvement when database activity is
sufficiently low.
However, there is a cost in recording this information in the
database. In some circumstances the cost might be too high,
particularly if you do not intend to use incremental backup
operations.
The Scan_Optimization qualifier has different effects, depending
on the type of backup operation you perform. In brief, you can
enable or disable the scan optimization setting only when you
issue a full offline backup command, and you can specify whether
to use the data produced by a scan optimization only when you
issue an incremental backup command. The following list describes
this behavior in more detail:
o During an offline full backup operation, you can enable or
disable the scan optimization setting.
Specify the Scan_Optimization qualifier to enable recording
of the identities of areas that change after this backup
operation completes.
Specify the Noscan_Optimization qualifier to disable recording
of the identities of areas that change after this backup
operation completes.
By default, the recording state remains unchanged (from the
state it was in prior to execution of the Backup command)
during a full backup operation.
Note that specifying the Scan_Optimization or Noscan_
Optimization qualifier with an offline full backup operation
has no effect on the backup operation itself; it merely allows
you to change the recording state for scan optimization.
o During an online full backup operation, the qualifier is
ignored.
The recording state for scan optimization remains unchanged
(from the state it was in prior to execution of the Backup
command). If you execute an online full backup operation
and specify the Scan_Optimization or Noscan_Optimization
qualifier, RMU Backup returns an informational message to
indicate that the qualifier is being ignored.
o During an incremental backup operation, the qualifier directs
whether the scan optimization data (if recorded previously)
will be used during the operation.
If you specify the Scan_Optimization qualifier, RMU Backup
uses the optimization if Oracle Rdb has been recording the
regions updated since the last full backup operation.
If you specify the Noscan_Optimization qualifier, RMU Backup
does not use the optimization, regardless of whether Oracle
Rdb has been recording the identity of the regions updated
since the last full backup operation.
You cannot enable or disable the setting for scan
optimizations during an incremental backup operation.
By default, the Scan_Optimization qualifier is used during
incremental backup operations.
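The list above amounts to a small decision table. The following
Python sketch summarizes it; the function name and its return
strings are invented for illustration and are not part of RMU.

```python
def scan_optimization_effect(backup_type, qualifier):
    """backup_type: 'offline_full', 'online_full', or 'incremental'.
    qualifier: 'scan', 'noscan', or None (qualifier not given).
    Returns a short description of the effect (illustration only)."""
    if backup_type == "offline_full":
        # the only operation that can change the recording state
        if qualifier == "scan":
            return "enable recording of changed regions"
        if qualifier == "noscan":
            return "disable recording of changed regions"
        return "recording state unchanged"
    if backup_type == "online_full":
        # qualifier ignored; RMU issues an informational message
        return "recording state unchanged (qualifier ignored)"
    if backup_type == "incremental":
        # default is Scan_Optimization: use recorded data if any
        if qualifier == "noscan":
            return "do not use recorded scan data"
        return "use recorded scan data if available"
    raise ValueError("unknown backup type")
```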
7.1.4.43 – Tape Expiration
Tape_Expiration=date-time
Specifies the expiration date of the backup (.rbf) file. Note
that when RMU Backup reads a tape, it looks at the expiration
date in the file header of the first file on the tape and assumes
the date it finds in that file header is the expiration date for
the entire tape. Therefore, if you are backing up an .rbf file to
tape, specifying the Tape_Expiration qualifier only has meaning
if the .rbf is the first file on the tape. You can guarantee that
the .rbf file will be the first file on the tape by specifying
the Rewind qualifier and overwriting any existing files on the
tape.
When the first file on the tape contains an expiration date
in the file header, you cannot overwrite the tape before the
expiration date unless you have the OpenVMS SYSPRV or BYPASS
privilege.
Similarly, when you attempt to restore a .rbf file from tape,
you cannot perform the restore operation after the expiration
date recorded in the first file on the tape unless you have the
OpenVMS SYSPRV or BYPASS privilege.
By default, no expiration date is written to the .rbf file
header. In this case, if the .rbf file is the first file on the
tape, the tape can be overwritten immediately. If the .rbf file
is not the first file on the tape, the ability to overwrite the
tape is determined by the expiration date in the file header of
the first file on the tape.
You cannot explicitly set a tape expiration date for an entire
volume. The volume expiration date is always determined by the
expiration date of the first file on the tape.
The Tape_Expiration qualifier cannot be used with a backup file
written to disk.
See the Oracle Rdb Guide to Database Maintenance for information
on tape label processing.
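The overwrite rule can be summarized in a short sketch. The
function and parameter names are hypothetical; the logic follows
the text above: the expiration date in the first file's header
governs the whole tape, and the OpenVMS SYSPRV or BYPASS
privilege overrides it.

```python
import datetime

def can_overwrite_tape(first_file_expiration, today,
                       has_sysprv_or_bypass=False):
    """first_file_expiration: datetime.date, or None when no
    expiration date was written to the first file's header."""
    if first_file_expiration is None:
        # no expiration date: the tape can be overwritten at once
        return True
    if has_sysprv_or_bypass:
        # SYSPRV or BYPASS overrides the expiration date
        return True
    return today >= first_file_expiration

today = datetime.date(2024, 1, 15)
print(can_overwrite_tape(None, today))                       # True
print(can_overwrite_tape(datetime.date(2024, 6, 1), today))  # False
```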
7.1.4.44 – Threads=number
Threads=number
Specifies the number of reader threads to be used by the backup
process.
RMU creates internal threads of execution, each of which reads
data from one specific storage area. Threads run quasi-parallel
within the process executing the RMU image. Each thread generates
its own I/O load and consumes resources such as virtual address
space and process quotas (for example, FILLM and BYTLM). The more
threads there are, the more I/Os can be generated at one time and
the more resources are needed to accomplish the same task.
Performance increases with more threads because the parallel
activity keeps the disk drives busier. However, beyond a certain
number of threads, performance suffers because the disk I/O
subsystem becomes saturated and I/O queues build up for the disk
drives. The extra CPU time for additional thread scheduling
also reduces overall performance. Typically, 2 to 5 threads
per input disk drive are sufficient to drive the disk I/O
subsystem at its optimum. However, some controllers may be
able to handle the I/O load of more threads, for example, disk
controllers with RAID sets and extra cache memory.
In a backup operation, one writer thread is created per output
stream. An output stream can be a tape drive, a disk file, or
a media library manager stream. In addition, RMU creates a
number of reader threads; you can specify how many. RMU assigns
a subset of the reader threads to each writer thread,
calculating the assignment so that roughly the same amount of
data is assigned to each output stream. By default, five reader
threads are created for each writer thread. If you specify the
number of threads, that number is used to create the reader
thread pool. RMU always limits the number of reader threads
to the number of storage areas. A threads number of 0 causes
RMU to create one thread per storage area, all of which start
running in parallel immediately. Although this may sound like a
way to improve performance, it degrades performance for
databases with a larger number (more than about 10) of storage
areas. For a very large number of storage areas (more than
about 800), it fails because of hard limits on system resources
such as virtual address space.
For a backup operation, the smallest threads number you can
specify is the number of output streams. This guarantees that
each writer thread has at least one reader thread assigned to it
and does not produce an empty save set. Using a threads number
equal to the number of output streams generates the smallest
system load in terms of working set usage and disk I/O load.
Disk I/O subsystems most likely can handle higher I/O loads;
using a value slightly larger than the number of output streams
(that is, assigning more than one reader thread to each writer
thread) typically results in faster execution time.
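The sizing rules in this section can be summarized as follows.
This is a Python sketch of the stated rules, not RMU's
implementation; the function name and parameters are invented.

```python
def reader_pool_size(n_output_streams, n_storage_areas, threads=None):
    """threads: value of the Threads qualifier, or None when the
    qualifier is not specified."""
    if threads == 0:
        # one reader thread per storage area, all started at once
        return n_storage_areas
    if threads is None:
        # default: five reader threads per writer thread
        n = 5 * n_output_streams
    else:
        # the smallest allowed value is the number of output
        # streams, so that no save set is left empty
        n = max(threads, n_output_streams)
    # never more reader threads than storage areas
    return min(n, n_storage_areas)
```

For example, with 2 output streams and 20 storage areas the
default pool is 10 reader threads; with only 7 areas the pool is
capped at 7.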
7.1.5 – Usage Notes
o To use the RMU Backup command for a database, you must have
the RMU$BACKUP privilege in the root file access control
list (ACL) for the database or the OpenVMS SYSPRV or BYPASS
privilege.
o If you attempt to back up an area with detected corruptions
(or which has corrupt pages logged to the CPT), the backup
operation fails immediately. If you attempt to back up an area
that contains an undetected corruption (a corruption that
has not been logged to the CPT), the backup operation proceeds
until a corruption is found. These undetected corruptions
are found only if you specify the Checksum qualifier with the
Backup command.
o The following list provides usage information for parallel
backup operations:
- When performing a parallel backup operation, do not
allocate or mount any tapes manually; this is done
automatically by RMU Backup.
- You can use the Parallel Backup Monitor, a windowing
interface on your Windows system, to monitor the progress
of a parallel backup operation to tape. To monitor an
essentially non-parallel backup operation, specify the
Parallel qualifier with the Executor_Count=1 option.
Non-parallel backup operations (backup commands without
the Parallel qualifier) cannot be monitored with the
Parallel Backup Monitor.
- If a parallel backup operation is issued from a server
node, then RMU Backup communicates with SQL/Services to
start the Coordinator. SQL/Services creates a Coordinator
process.
- If a parallel backup operation is issued from a client node
(for example, using RMUwin), then the same SQL/Services
process that is created to execute client/server RMU Backup
commands is used as the Coordinator process.
- You cannot use the Storage Library System (SLS) for OpenVMS
with an RMU parallel backup.
o Logical area threshold information for storage areas with
uniform page format is recorded in the backup file. See
the Oracle Rdb SQL Reference Manual for more information on
logical area threshold information.
o See the Oracle Rdb Guide to Database Maintenance for
information on determining the working set requirements for
a non-parallel backup operation.
o The following list provides usage information for the Quiet_
Point and Noquiet_Point qualifiers:
- If the operation stalls when you attempt a quiet-point
Oracle RMU backup operation, it may be because another user
is holding the quiet-point lock. In some cases, there is
no way to avoid this stall. In other cases you may find
the stall is caused by a user who has previously issued
and completed a read/write transaction, and is currently
running a read-only transaction. When this user started
the read/write transaction, the process acquired the quiet-
point lock. Ordinarily, such a process retains this lock
until it detaches from the database.
You can set the RDM$BIND_SNAP_QUIET_POINT logical name to
control whether or not such a process retains the quiet-
point lock. Set the value of the logical name to "1" so
that all transactions hold the quiet point lock until a
backup process requests it. Read-only transactions will not
obtain the quiet point lock; only read/write transactions
will obtain the quiet point lock. Set the value of the
logical name to "0" so that read-only transactions always
release the quiet point lock at the beginning of the
transaction, regardless of the existence of a backup
process. All modified buffers in the buffer pool have
to be written to disk before the transaction proceeds.
Applications that utilize the fast commit feature and often
switch between read-only and read/write transactions within
a single attach may experience performance degradation if
the logical is defined to "0".
Oracle recommends that you do not define the RDM$BIND_SNAP_
QUIET_POINT logical for most applications.
- If you intend to use the Noquiet_Point qualifier with a
backup procedure that previously specified the Quiet_
Point qualifier (or did not specify either the Quiet_
Point or Noquiet_Point qualifier), you should examine any
applications that execute concurrently with the backup
operation. You might need to modify your applications or
your backup procedure to handle the lock conflicts that
might occur when you specify Noquiet_Point.
When you specify the Quiet_Point qualifier, the backup
operation begins when a quiet point is reached. Other
update transactions that are started after the database
backup operation begins are prevented from executing until
after the root file for the database has been backed up
(the backup operation on the database storage areas begins
after the root file is backed up).
- When devising your backup strategy for both the database
and the after-image journal files, keep in mind the trade-
offs between performing quiet-point backup operations and
noquiet-point backup operations. A noquiet-point backup
operation is quicker than a quiet-point backup operation,
but usually results in a longer recovery operation. Because
transactions can span .aij files when you perform noquiet-
point .aij backup operations, you might have to apply
numerous .aij files to recover the database. In a worst-
case scenario, this could extend back to your last quiet-
point .aij or database backup operation. If you rarely
perform quiet-point backup operations, recovery time could
be excessive.
One method you can use to balance these trade-offs is
to perform regularly scheduled quiet-point .aij backup
operations followed by noquiet-point database backup
operations. (You could do the converse, but a quiet-
point backup of the .aij file improves the performance
of the recovery operation should such an operation become
necessary.) Periodically performing a quiet-point .aij
backup operation helps to ensure that your recovery time
will not be excessive.
o Do not add new logical areas in the context of an exclusive
transaction during an online backup operation.
When new logical areas are added during an online backup
operation such that new records are physically placed in a
location that the backup operation has not processed yet,
Oracle Rdb returns the following error:
%RMU-F-CANTREADDBS, error reading pages !UL:!UL-!UL
Logical areas that cause this problem are created when you do
either of the following:
- Create a new table, start a transaction that reserves the
new table in exclusive mode, and load the table with rows.
- Create a new table, start a transaction that reserves the
new table in exclusive mode, and create an index for the
table.
Creating and populating new tables, or creating new indexes,
does not pose a problem if the table is not reserved in
exclusive mode.
o If you back up a database without its root file ACL (using
the Noacl qualifier of the RMU Backup command, for example), a
user who wants to restore the database must have the OpenVMS
SYSPRV or BYPASS privilege.
o You might receive the RMU-I-WAITOFF informational message
when you try to back up your database if the database was
manually opened with the RMU Open command and has not been
manually closed with the RMU Close command. You also receive
this message when you issue an RMU Close command with the
Nowait qualifier and users are still attached to the database.
To back up your database, you must have exclusive access to
the database root file. This error message usually indicates
that you do not have exclusive access to the database root
file because the operating system still has access to it. If
your database was manually opened with the RMU Open command,
you should be able to gain exclusive access to the database
root file by manually closing the database with an RMU Close
command.
You can also receive this error message when you attempt other
operations for which you must have exclusive access to the
database root file. The solution in those cases is to attempt
the operation again, later. Until you have exclusive access
to the database root file, meaning that no other user gained
access to the database between the time you issued the command
and the time the command takes effect, you cannot complete
those operations.
o Backup files are typically smaller in size than the actual
database. They exclude free space and redundant structural
information that can be reconstructed with a restore
operation. However, backup files also contain some overhead
to support the backup format. Compression factors range from
approximately 1.2 to 3 depending on the organization and
fullness of the database. The compression factor achieved
for a given database is generally quite stable and usually
only changes with structural or logical reorganization.
Do not use the size of the backup file as an indication of
the size of the database files. Use the RMU Analyze command to
determine the actual data content.
o Backup performance is strongly affected by the job priority
of the process running it. For best performance, a backup
operation should execute at interactive priority, even when it
is operating as a batch job.
o The following list contains information of interest if you are
performing a backup operation to tape:
- If you back up the database to tape, and you do not specify
the Parallel qualifier, you must mount the backup media by
using the DCL MOUNT command before you issue the RMU Backup
command. The tape must be mounted as a FOREIGN volume.
Like the OpenVMS Backup utility (BACKUP), the RMU Backup
command performs its own tape label processing. This does
not prohibit backing up an Oracle Rdb database to an RMS
file on a Files-11 disk.
When you specify the Parallel qualifier, you need not mount
the backup media because the parallel executors allocate
and mount the drive and labels for you.
- When RMU Backup creates a multivolume backup file, you can
only append data to the end of the last volume. You cannot
append data to the end of the first or any intermediate
volumes.
- The RMU Backup command uses asynchronous I/O. Tape
support provided includes support for multifile volumes,
multivolume files, and multithreaded concurrent tape
processing.
- If you allow RMU Backup to implicitly label tapes and you
are using a tape drive that has a display (for example, a
TA91 tape drive), the label displayed is the original label
on the tape, not the label generated by RMU Backup.
- Oracle Corporation recommends that you supply a name for
the backup file that is 17 or fewer characters in length.
File names longer than 17 characters can be truncated.
The system supports four file-header labels: HDR1, HDR2,
HDR3, and HDR4. In HDR1 labels, the file identifier field
contains the first 17 characters of the file name you
supply. The remainder of the file name is written into the
HDR4 label, provided that this label is supported. If no HDR4
label is supported, a file name longer than 17 characters
will be truncated.
The following Oracle RMU commands are valid. The terminating
period for the backup file name specifies a null file type, so
the default file type of .rbf is not applied. Therefore, the
system interprets the file name as wednesdays_backup, which is
17 characters in length:
$ RMU/BACKUP/REWIND/LABEL=TAPE MF_PERSONNEL MUA0:WEDNESDAYS_BACKUP.
$ RMU/RESTORE/REWIND/LABEL=TAPE MUA0:WEDNESDAYS_BACKUP.
The following Oracle RMU commands create a backup file
that cannot be restored. Because no terminating period is
supplied, the system supplies a period and a file type of
.rbf, and interprets the backup file name as wednesdays_
backup.rbf, which is 20 characters in length. RMU truncates
the backup file name to wednesdays_backup. When you attempt
to restore the backed up file, RMU assumes the default
extension of .rbf and returns an error when it cannot find
the file wednesdays_backup.rbf on tape.
$ RMU/BACKUP/REWIND/LABEL=TAPE MF_PERSONNEL MUA0:WEDNESDAYS_BACKUP
$ RMU/RESTORE/REWIND/LABEL=TAPE MUA0:WEDNESDAYS_BACKUP
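The 17-character limit can be illustrated with a minimal sketch,
assuming no HDR4 label is available to hold the remainder of the
name (the function name is invented for illustration):

```python
def hdr1_file_identifier(file_name):
    """The HDR1 file identifier field holds only the first 17
    characters of the file name."""
    return file_name[:17]

# "wednesdays_backup" fits exactly; "wednesdays_backup.rbf" is
# truncated back to "wednesdays_backup", so a restore that looks
# for wednesdays_backup.rbf cannot find the file on tape
print(hdr1_file_identifier("wednesdays_backup"))      # wednesdays_backup
print(hdr1_file_identifier("wednesdays_backup.rbf"))  # wednesdays_backup
```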
- See the Oracle Rdb Guide to Database Maintenance for
information on the steps RMU Backup follows in tape label
checking for the RMU Backup command.
- The RMU Backup command works correctly with unlabeled or
nonstandard formatted tapes when the Rewind qualifier is
specified. However, tapes that have never been used or
initialized, and nonstandard tapes sometimes produce errors
that make OpenVMS mount attempts fail repeatedly. In this
situation, RMU Backup cannot continue until you use the DCL
INITIALIZE command to correct the error.
- How Tapes are Relabeled During a Backup Operation
summarizes the tape labeling behavior of RMU Backup under
a variety of circumstances. For example, the last row
of the table describes what labels are applied when you
specify both the Label=back qualifier and the Accept_Label
qualifier and all the tapes (except the second) are already
labeled and used in the following order: aaaa, no label,
bbbb, dddd, cccc. The table shows that these tapes will
be relabeled in the following order, with no operator
notification occurring: aaaa, back02, bbbb, dddd, cccc.
How Tapes are Relabeled During a Backup Operation assumes
the backup file name is mf_personnel.rbf:
Table 5 How Tapes are Relabeled During a Backup Operation

Qualifiers         Current          Resulting        Operator
Specified          Labels           Labels           Notification

Neither Label      mf_per, mf_p05,  mf_per, mf_p05,  None
nor Accept_Label   mf_p06, mf_p02,  mf_p06, mf_p02,
                   mf_p03           mf_p03

Neither Label      aaaa, no label,  mf_per, mf_p02,  All tapes except
nor Accept_Label   bbbb, dddd,      mf_p03, mf_p04,  second tape
                   cccc             mf_p05

Label=back         aaaa, no label,  back, back02,    All tapes except
                   bbbb, dddd,      back03, back04,  second tape
                   cccc             back05

Label=(back01,     aaaa, no label,  back01, back02,  All tapes except
back02)            bbbb, dddd,      back03, back04,  second tape
                   cccc             back05

Accept_Label       aaaa, no label,  aaaa, mf_p02,    None
                   bbbb, dddd,      bbbb, dddd,
                   cccc             cccc

Accept_Label,      aaaa, no label,  aaaa, back02,    None
Label=back         bbbb, dddd,      bbbb, dddd,
                   cccc             cccc
o When you use more than one tape drive for a backup operation,
ensure that all of the tape drives are the same type (for
example, all of the tape drives must be TA90s or TZ87s or
TK50s). Using different tape drive types (for example, one
TK50 and one TA90) for a single database backup operation may
make database restoration difficult or impossible.
Oracle RMU attempts to prevent you from using different tape
drive densities during a backup operation but is not able to
detect all invalid cases and expects that all tape drives for
a backup are of the same type.
As long as all of the tapes used during a backup operation
can be read by the same type of tape drive during a restore
operation, the backup is likely to be valid. This may be the
case, for example, when you use a TA90 and a TA90E.
Oracle Corporation recommends that, on a regular basis, you
test your backup and recovery procedures and environment
using a test system. You should restore the database and then
recover using after-image journals (AIJs) to simulate failure
recovery of the production system.
Consult the Oracle Rdb Guide to Database Maintenance and
the Oracle Rdb Guide to Database Design and Definition for
additional information about Oracle Rdb backup and restore
operations.
o You should use the density values added in OpenVMS Version
7.2-1 for OpenVMS tape device drivers that accept them because
previously supported values may not work as expected. If
previously supported values are specified for drivers that
support the OpenVMS Version 7.2-1 density values, the older
values are translated to the Version 7.2-1 density values if
possible. If the value cannot be translated, a warning message
is generated, and the specified value is used.
If you use density values added in OpenVMS Version 7.2-1 for
tape device drivers that do not support them, the values are
translated to acceptable values if possible. If the value
cannot be translated, a warning message is generated and the
density value is translated to the existing default internal
density value (MT$K_DEFAULT).
One of the following density-related errors is generated if
there is a mismatch between the specified density value and
the values that the tape device driver accepts:
%RMU-E-DENSITY, TAPE_DEVICE:[000000]DATABASE.BCK; does not support specified
density
%RMU-E-POSITERR, error positioning TAPE_DEVICE:
%RMU-E-BADDENSITY, The specified tape density is invalid for this device
o If you want to use an unsupported density value, use the DCL
INITIALIZE and MOUNT commands to set the tape density. Do not
use the Density qualifier.
o The density syntax used on the command can also be used in the
plan file for the Parallel RMU backup to tape process.
o Oracle Rdb cannot continue a single .rda file across multiple
disks. This means that, during a multidisk backup operation,
each device must have enough free space to hold the largest
storage area in the database. If the storage areas are on
stripe sets and are larger than any actual single disk, then
the devices specified for the backup file must be striped
also.
It is not possible to indicate which storage area should be
backed up to a given device.
o Because data stream names representing the database are
generated based on the backup file name specified for the
Oracle RMU backup command, you must either use a different
backup file name to store the next backup of the database
to the Librarian utility or first delete the existing data
streams generated from the backup file name before the same
backup file name can be reused.
To delete the existing data streams stored in the Librarian
utility, you can use a Librarian management utility or the
Oracle RMU Librarian/Remove command.
o If you are backing up to multiple disk devices using thread
pools, the following algorithm to assign threads is used by
the backup operation:
- The size of each area is calculated as the product of the
page length in bytes and the highest page number used
(maximum page number) for that area.
- The area sizes are sorted by descending size and ascending
device name. For internal processing reasons, the system
area is placed as the first area in the first thread.
- Each of the remaining areas is added to the thread that has
the lowest byte count.
The same algorithm is used for tape devices, but the areas are
partitioned among writer threads, not disk devices.
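The three steps above can be sketched as follows. The data
layout and names are invented for illustration; the logic
follows the steps as stated.

```python
def assign_areas_to_threads(areas, n_threads):
    """areas: list of dicts with keys name, page_length, max_page,
    device, and is_system (True only for the system area).
    Returns a list of area-name lists, one per thread."""
    for a in areas:
        # size = page length in bytes * highest page number used
        a["size"] = a["page_length"] * a["max_page"]
    threads = [[] for _ in range(n_threads)]
    totals = [0] * n_threads
    rest = []
    for a in areas:
        if a["is_system"]:
            # the system area is placed first in the first thread
            threads[0].append(a["name"])
            totals[0] += a["size"]
        else:
            rest.append(a)
    # sort by descending size, then ascending device name
    rest.sort(key=lambda a: (-a["size"], a["device"]))
    for a in rest:
        # add each remaining area to the thread with the lowest
        # byte count
        i = totals.index(min(totals))
        threads[i].append(a["name"])
        totals[i] += a["size"]
    return threads
```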
o The partitioning for backup to multiple disk devices is done
by disk device, not by output thread, because there will
typically be more disk devices than output threads, and an
area cannot span a device.
7.1.6 – Examples
Example 1
The following command performs a full backup operation on the mf_
personnel database and displays a log of the session:
$ RMU/BACKUP MF_PERSONNEL -
_$ DISK2:[USER1]MF_PERS_FULL_BU.RBF /LOG
Example 2
To perform an incremental backup operation, include the
Incremental qualifier. Assume a full backup operation was done
late Monday night. The following command performs an incremental
backup operation on the database updates only for the following
day:
$ RMU/BACKUP/INCREMENTAL MF_PERSONNEL.RDB -
_$ $222$DUA20:[BCK]TUESDAY_PERS_BKUP/LOG
Example 3
To back up the database while there are active users, specify the
Online qualifier:
$ RMU/BACKUP/ONLINE MF_PERSONNEL.RDB -
_$ $222$DUA20:[BACKUPS]PERS_BU.RBF /LOG
Example 4
The following RMU Backup command includes only the EMPIDS_
LOW and EMPIDS_MID storage areas in the backup file of the
mf_personnel database. All the other storage areas in the mf_
personnel database are excluded from the backup file:
$ RMU/BACKUP/INCLUDE=(EMPIDS_LOW,EMPIDS_MID) -
_$ MF_PERSONNEL.RDB $222$DUA20:[BACKUPS]MF_PERS_BU.RBF
Example 5
The following command backs up the mf_personnel database but not
the root file ACL for the database:
$ RMU/BACKUP/NOACL MF_PERSONNEL MF_PERS_NOACL
Example 6
The following command backs up the mf_personnel database without
waiting for a quiet point in the database:
$ RMU/BACKUP/NOQUIET_POINT MF_PERSONNEL MF_PERS_NQP
Example 7
The following command creates a journal file, pers_journal.jnl,
and a backup file, pers_backup.rbf.
$ RMU/BACKUP/JOURNAL=PERS_JOURNAL MF_PERSONNEL PERS_BACKUP
Example 8
The following example backs up all the storage areas in the mf_
personnel database except for the read-only storage areas.
$ RMU/BACKUP/NO_READ_ONLY MF_PERSONNEL MF_PERSONNEL_BU
Example 9
The following example assumes that you are using multiple tape
drives to do a large backup operation. By specifying the Loader_
Synchronization qualifier, this command does not require you to
load tapes as each becomes full. Instead, you can load tapes on a
loader or stacker and RMU Backup will wait until all concurrent
tape operations have concluded for one set of tape volumes before
assigning the next set of tape volumes.
Using this example, you:
1. Verify the database.
2. Allocate each tape drive.
3. Manually place tapes BACK01 and BACK05 on the $111$MUA0:
drive.
4. Manually place tapes BACK02 and BACK06 on the $222$MUA1:
drive.
5. Manually place tapes BACK03 and BACK07 on the $333$MUA2:
drive.
6. Manually place tapes BACK04 and BACK08 on the $444$MUA3:
drive.
7. Mount the first volume.
8. Perform the backup operation.
9. Dismount the last tape mounted. (This example assumes it is on
the $444$MUA3: drive.)
10. Deallocate each tape drive.
$ RMU/VERIFY DB_DISK:[DATABASE]MF_PERSONNEL.RDB
$
$ ALLOCATE $111$MUA0:
$ ALLOCATE $222$MUA1:
$ ALLOCATE $333$MUA2:
$ ALLOCATE $444$MUA3:
$
$ MOUNT/FOREIGN $111$MUA0:
$
$ RMU/BACKUP /LOG/REWIND/LOADER_SYNCHRONIZATION -
_$ /LABEL=(BACK01, BACK02, BACK03, BACK04, BACK05, -
_$ BACK06, BACK07, BACK08) -
_$ DB_DISK:[MFPERS]MF_PERSONNEL.RDB -
_$ $111$MUA0:PERS_FULL_MAR30.RBF/MASTER, $222$MUA1:, -
_$ $333$MUA2:/MASTER, $444$MUA3:
$
$ DISMOUNT $444$MUA3:
$
$ DEALLOCATE $111$MUA0:
$ DEALLOCATE $222$MUA1:
$ DEALLOCATE $333$MUA2:
$ DEALLOCATE $444$MUA3:
Example 10
The following example generates a parallel backup plan file, but
does not execute it. The result is a backup plan file. See the
next example for a description of the plan file.
$ RMU/BACKUP/PARALLEL=(EXEC=4, NODE=(NODE1, NODE2)) -
_$ /LIST_PLAN=(PARTIAL.PLAN)/NOEXECUTE/INCLUDE=(RDB$SYSTEM, EMPIDS_LOW, -
_$ EMPIDS_MID, EMPIDS_OVER, SALARY_HISTORY, EMP_INFO) -
_$ /LABEL=(001, 002, 003, 004, 005, 006, 007, 008, 009) -
_$ /CHECKSUM_VERIFICATION -
_$ MF_PERSONNEL TAPE1:MF_PARTIAL.RBF, TAPE2:, TAPE3:, TAPE4:
Example 11
The following display shows the contents of the plan file,
PARTIAL.PLAN created in the preceding example. The following
callouts are keyed to this display:
1 The Plan Parameters include all the parameters specified
on the RMU BACKUP command line and all possible command
qualifiers.
2 Command qualifiers that are not specified on the command line
are represented as comments in the plan file. This allows you
to edit and adjust the plan file for future use.
3 Command qualifiers that are explicitly specified on the
command line are represented in the plan file as specified.
4 Executor parameters are listed for each executor involved in
the backup operation.
! Plan created on 28-JUN-1996 by RMU/BACKUP.
Plan Name = PARTIAL
Plan Type = BACKUP
Plan Parameters: 1
Database Root File = DISK1:[DB]MF_PERSONNEL;1
Backup File = PARTIAL.RBF
! Journal = specification for journal file 2
! Tape_Expiration = dd-mmm-yyyy
! Active_IO = number of buffers for each tape
! Protection = file system protection for backup file
! Block_Size = bytes per tape block
! Density = tape density
![No]Group_Size = number of blocks between XOR blocks
! Lock_Timeout = number of seconds to wait for locks
! Owner = identifier of owner of the backup file
!Page_Buffers = number of buffers to use for each storage area
Checksum_Verification 3
CRC = AUTODIN_II
NoIncremental
! Accept_labels preserves all tape labels
Log
! Loader_synchronization labels tapes in order across drives
! Media_loader forces support of a tape media loader
NoOnline
Quiet_Point
NoRewind
Statistics
ACL
![No]Scan_Optimization
Labels = (-
001 -
002 -
003 -
004 -
005 -
006 -
007 -
008 -
009 )
End Plan Parameters
Executor Parameters :
Executor Name = COORDINATOR
Executor Type = Coordinator
End Executor Parameters
Executor Parameters : 4
Executor Name = WORKER_001
Executor Type = Worker
Executor Node = NODE1
Start Storage Area List
EMPIDS_LOW,
SALARY_HISTORY
End Storage Area List
Tape Drive List
Tape Drive = TAPE1:
MASTER
End Tape Drive List
End Executor Parameters
Executor Parameters :
Executor Name = WORKER_002
Executor Type = Worker
Executor Node = NODE2
Start Storage Area List
EMPIDS_MID,
RDB$SYSTEM
End Storage Area List
Tape Drive List
Tape Drive = TAPE2:
MASTER
End Tape Drive List
End Executor Parameters
Executor Parameters :
Executor Name = WORKER_003
Executor Type = Worker
Executor Node = NODE1
Start Storage Area List
EMPIDS_OVER
End Storage Area List
Tape Drive List
Tape Drive = TAPE3:
MASTER
End Tape Drive List
End Executor Parameters
Executor Parameters :
Executor Name = WORKER_004
Executor Type = Worker
Executor Node = NODE2
Start Storage Area List
EMP_INFO
End Storage Area List
Tape Drive List
Tape Drive = TAPE4:
MASTER
End Tape Drive List
End Executor Parameters
Example 12
The following example demonstrates the use of the Restore_Options
qualifier. The first command backs up selected areas of the
mf_personnel database and creates an options file. The second
command shows the contents of the options file. The last command
demonstrates the use of the options file with the RMU Restore
command.
$ RMU/BACKUP MF_PERSONNEL.RDB MF_EMPIDS.RBF/INCLUDE=(EMPIDS_LOW, -
_$ EMPIDS_MID, EMPIDS_OVER) /RESTORE_OPTIONS=MF_EMPIDS.OPT
%RMU-I-NOTALLARE, Not all areas will be included in this backup file
$ !
$ !
$ TYPE MF_EMPIDS.OPT
! Options file for database USER1:[MFDB]MF_PERSONNEL.RDB;1
! Created 18-JUL-1995 10:31:08.82
! Created by BACKUP command
EMPIDS_LOW -
/file=USER2:[STOA]EMPIDS_LOW.RDA;1 -
/blocks_per_page=2 -
/extension=ENABLED -
/read_write -
/spams -
/thresholds=(70,85,95) -
/snapshot=(allocation=100, -
file=USER2:[SNP]EMPIDS_LOW.SNP;1)
EMPIDS_MID -
/file=USER3:[STOA]EMPIDS_MID.RDA;1 -
/blocks_per_page=2 -
/extension=ENABLED -
/read_write -
/spams -
/thresholds=(70,85,95) -
/snapshot=(allocation=100, -
file=USER3:[SNP]EMPIDS_MID.SNP;1)
EMPIDS_OVER -
/file=USER4:[STOA]EMPIDS_OVER.RDA;1 -
/blocks_per_page=2 -
/extension=ENABLED -
/read_write -
/spams -
/thresholds=(70,85,95) -
/snapshot=(allocation=100, -
file=USER4:[SNP]EMPIDS_OVER.SNP;1)
$ !
$ !
$ !
$ RMU/RESTORE MF_EMPIDS.RBF /AREA/OPTIONS=MF_EMPIDS.OPT
Example 13
The following example uses a density value with compression:
$RMU/BACKUP/DENSITY=(TK89,COMPACTION)/REWIND/LABEL=(LABEL1,LABEL2) -
_$ MF_PERSONNEL TAPE1:MFP.BCK, TAPE2:
Example 14
The following example shows how to perform a multidisk backup
operation.
$ RMU/BACKUP/DISK MF_PERSONNEL DEVICE1:[DIRECTORY1]MFP.RBF, -
_$ DEVICE2:[DIRECTORY2]
.
.
.
%RMU-I-COMPLETED, BACKUP operation completed at 1-MAY-2001 17:40:53.81
Example 15
The following example shows the use of the Librarian qualifier
with a plan file.
$RMU/BACKUP/PARALLEL=EXECUTOR=3/LIBRARIAN=WRITER_THREADS=3 -
_$ /LIST_PLAN=FILENAME.PLAN/NOEXECUTE/LOG DATABASE FILENAM.RBF
$RMU/BACKUP/PLAN FILENAME.PLAN
$RMU/RESTORE/LIBRARIAN=(READER_THREADS=9)/LOG FILENAME.RBF
The first backup command creates the plan file for a parallel
backup, but does not execute it. The second backup command
executes the parallel backup using the plan file. Three worker
processes are used; each process uses the three writer threads
specified with the Librarian qualifier. Each writer thread in
each process writes one stream of backup data to the Librarian
utility; a total of nine streams is written.
Example 16
This example shows the use of the Compression qualifier ZLIB.
$ RMU /BACKUP /COMPRESS=ZLIB:9 /LOG=FULL FOO BCK
.
.
.
BACKUP summary statistics:
Data compressed by 53% (9791 KB in/4650 KB out)
Example 17
The following example shows the use of the Norecord qualifier.
This can be used to back up a Hot Standby database without
modifying the database files.
$ RMU /BACKUP /NORECORD FOO BCK
7.2 – After Journal
Creates a backup file of the database after-image journal (.aij)
file or files.
Oracle Rdb supports two types of after-image journaling
mechanisms: one that employs a single, extensible .aij file and
another that employs multiple, fixed-size .aij files. The type of
journaling mechanism being used at the time the backup operation
starts can affect how you should specify the backup command.
Further information on how these two journaling mechanisms affect
the backup operation appears in the Description help entry under
this command.
The backup .aij file is an actual, usable .aij file that can
be applied to the appropriate Oracle Rdb database in a recovery
operation.
The RMU Backup After_Journal command can be used while users are
attached to the database.
7.2.1 – Description
The backup .aij file you create can be used with the RMU Recover
command to recover (roll forward) journaled transactions. In some
cases, you might have to issue additional Recover commands: one
for the backup .aij file and a second for the more recent .aij
files.
Oracle Rdb supports the following two types of .aij file
configurations:
o A configuration that uses a single, extensible .aij file
This is the method always used prior to Version 6.0 and is
also the default (for compatibility with versions of Oracle
Rdb prior to Version 6.0).
When an extensible .aij file is used, one .aij file is written
to and extended, as needed, by the number of blocks specified
when the .aij file was created. The .aij file continues to
be extended until it is backed up (or the device on which it
resides is full).
The RMU Backup After_Journal command copies transactions
recorded in the current .aij file (always on a disk device)
to the backup .aij file (which might be on a tape or disk
device). On completion, the current .aij file is truncated
and used again. During periods of high update activity, the
truncation of the active .aij file might not be performed
because of conflicting access to the .aij file by other users,
but the storage allocated to the active .aij file is still
used again when the backup operation completes.
o A configuration that uses two or more fixed-size .aij files
When fixed-size .aij files are used, the database maintains
multiple .aij files; however, only one .aij file is written to
at a time. This .aij file is considered the current journal.
When this .aij file is filled, a switchover occurs to allow
journaling to continue in another available .aij file.
The RMU Backup After_Journal command works as follows with
fixed-size .aij files:
- Backs up any full .aij files
The backup operation first backs up the .aij file with the
lowest AIJ sequence number that needs backing up; it then
continues to back up .aij files in ascending AIJ sequence
number order. If many .aij files need to be backed up when
the RMU Backup After_Journal command is issued, one backup
file might contain the contents of all the .aij files being
backed up.
- Backs up the current .aij file
Even if there are active transactions at the time of the
backup operation, the RMU Backup After_Journal command
will start to back up the current active .aij file. If
you have specified the Quiet_Point qualifier, the backup
operation stalls at some point waiting for all the current
transactions to complete.
- Switches to the next available .aij file
An available .aij file is one for which both of the
following are true:
* It is not currently being used to record transactions.
* It is not needed for a redo operation.
Such an .aij file might be one that has never been used, or
one that has already been backed up.
Once a specified .aij file has been completely backed up, it
is initialized and marked as available for reuse.
NOTE
The method employed, fixed-size .aij files or an extensible
.aij file, cannot be set explicitly by the user. Any event
that reduces the number of .aij files to one results in an
extensible .aij file being used. Any event that increases
the number of .aij files to two or more results in fixed-size
.aij files being used. An inaccessible .aij file is counted
in these totals. Therefore, if you have one accessible
.aij file and one inaccessible .aij file (perhaps because
it has been suppressed), fixed-size .aij journaling is still
used.
Because some of the RMU Backup After_Journal qualifiers are
valid only when one or the other journaling mechanism is
employed, you might need to issue an RMU Dump command to
determine which journaling mechanism is currently being
employed before you issue an RMU Backup After_Journal
command.
Also note that once a backup operation begins, .aij file
modification is not allowed until the backup operation is
complete. However, if the type of journaling changes between
the time you issue an RMU Dump command and the time you
issue the RMU Backup After_Journal command, you receive an
error message if you have specified qualifiers that are only
valid with a particular type of journaling mechanism. (The
Threshold qualifier, for example, is valid only when the
extensible journaling mechanism is being used.)
If you back up the .aij file or files to tape, you must mount
the backup media by using the DCL MOUNT command before you issue
the RMU Backup After_Journal command. If you specify the default,
Format=Old_File, the RMU Backup After_Journal command uses RMS
to write to the tape and the tape must be mounted as an OpenVMS
volume. (That is, do not specify the FOREIGN qualifier with the
MOUNT command.) If you specify the Format=New_Tape qualifier,
the RMU Backup After_Journal command writes backup files in a
format similar to that used by the RMU Backup command, and you
must mount the tape as a FOREIGN volume.
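As a sketch (device and file names are hypothetical), a backup of
the .aij file to tape in the new format might look like this:
$ MOUNT/FOREIGN $111$MUA0:
$ RMU/BACKUP/AFTER_JOURNAL/FORMAT=NEW_TAPE MF_PERSONNEL.RDB -
_$ $111$MUA0:PERS_AIJ.AIJ_RBF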
If you back up an .aij file to disk, you can then use the OpenVMS
Backup utility (BACKUP) to archive the .aij backup file.
The RMU Backup After_Journal command can be used in a batch job
to avoid occupying an interactive terminal for long periods of
time. The Continuous, Interval, Threshold, and Until qualifiers
control the duration and frequency of the backup process. When
you use the Continuous qualifier, the command can occupy a
terminal indefinitely. Therefore, it is good practice to issue
the command through a batch process when executing a continuous
.aij file backup operation. However, remember that the portion of
the command procedure that follows the RMU Backup After_Journal
command is not executed until after the time specified by the
Until qualifier.
When the RMU Backup After_Journal command completes, it records
information about the state of the backup files in the global
process symbols presented in the following list. You can use
these symbols in DCL command procedures to help automate the
backup operation.
These symbols are not set, however, if you have issued a DCL SET
SYMBOL/SCOPE=(NOLOCAL, NOGLOBAL) command.
o RDM$AIJ_SEQNO
Contains the sequence number of the last .aij backup file
written to tape. This symbol has a value identical to RDM$AIJ_
BACKUP_SEQNO. RDM$AIJ_SEQNO was created prior to Oracle Rdb
Version 6.0 and is maintained for compatibility with earlier
versions of Oracle Rdb.
o RDM$AIJ_CURRENT_SEQNO
Contains the sequence number of the currently active .aij
file. A value of -1 indicates that after-image journaling is
disabled.
o RDM$AIJ_NEXT_SEQNO
Contains the sequence number of the next .aij file that
needs to be backed up. This symbol always contains a
nonnegative integer value (which can be 0).
o RDM$AIJ_LAST_SEQNO
Contains the sequence number of the last .aij file ready for a
backup operation, which is different from the current sequence
number if fixed-size journaling is being used. A value of -1
indicates that no journal has ever been backed up.
If the value of the RDM$AIJ_NEXT_SEQNO symbol is greater than
the value of the RDM$AIJ_LAST_SEQNO symbol, no more .aij files
are currently available for the backup operation.
o RDM$AIJ_BACKUP_SEQNO
Contains the sequence number of the last .aij file backed up
by the backup operation. This symbol is set at the completion
of an .aij backup operation. A value of -1 indicates that this
process has not yet backed up an .aij file.
The RMU Backup After_Journal command provides an informational
message that describes the exact sequence number for each .aij
backup file operation.
o RDM$AIJ_COUNT
Contains the number of available .aij files.
o RDM$AIJ_ENDOFFILE
Contains the end of file block number for the current AIJ
journal.
o RDM$AIJ_FULLNESS
Contains the percent fullness of the current AIJ journal.
Note that these are string symbols, not integer symbols, even
though their equivalence values are numbers. Therefore,
performing arithmetic operations on them directly can produce
unexpected results.
If you need to perform arithmetic operations with these symbols,
first convert the string symbol values to numeric symbol values
using the OpenVMS F$INTEGER lexical function. For example:
$ SEQNO_RANGE = F$INTEGER(RDM$AIJ_LAST_SEQNO) - F$INTEGER(RDM$AIJ_NEXT_SEQNO)
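As a further sketch, a command procedure might test these symbols
after a backup command completes to determine whether any .aij
files remain to be backed up (this assumes the symbols have been
set by a completed backup operation; the message text is
illustrative):
$ IF F$INTEGER(RDM$AIJ_NEXT_SEQNO) .GT. F$INTEGER(RDM$AIJ_LAST_SEQNO)
$ THEN
$     WRITE SYS$OUTPUT "No more .aij files are available for backup"
$ ENDIF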
7.2.2 – Format
RMU/Backup/After_Journal root-file-spec {backup-file-spec | ""}
Command Qualifiers x Defaults
/[No]Accept_Label x /Accept_Label
/Active_IO=max-writes x /Active_IO=3
/Block_Size=integer x See description
/[No]Compression[=options] x /Nocompression
/[No]Continuous=(n) x /Nocontinuous
/[No]Crc x See description
/Crc[=Autodin_II] x See description
/Crc=Checksum x See description
/Density=(density-value, [No]Compaction) x See description
/[No]Edit_Filename=(options) x /Noedit_Filename
/Encrypt=({Value=|Name=}[,Algorithm=]) x See description
/Format={Old_File|New_Tape} x /Format=Old_File
/[No]Group_Size[=interval] x See description
/[No]Interval=number-seconds x /Nointerval
/Label=(label-name-list) x See description
/Librarian[=options] x None
/Lock_Timeout=seconds x See description
/[No]Log x Current DCL verify value
/[No]Media_Loader x See description
/Owner=user-id x See description
/Prompt={Automatic|Operator|Client} x See description
/Protection=openvms-file-protection x See description
/[No]Quiet_Point x /Quiet_Point
/[No]Rename x /Norename
/[No]Rewind x /Norewind
/[No]Sequence=(n,m) x /Nosequence
/Tape_Expiration=date-time x The current time
/[No]Threshold=disk-blocks x /Nothreshold
/Until=time x See description
/[No]Wait=n x See description
7.2.3 – Parameters
7.2.3.1 – root-file-spec
The name of the database root file. The root file name is also
the name of the database. An error results if you specify a
database that does not have after-image journaling enabled. The
default file extension is .rdb.
7.2.3.2 – backup-file-spec
A file specification for the .aij backup file. The default
file extension is .aij unless you specify the Format=New_Tape
qualifier. In this case, the default file extension is .aij_rbf.
7.2.3.3 – ""
Double quotes indicate to Oracle RMU that you want the default
.aij backup file specification to be used. The default .aij
backup file specification is defined with the SQL ALTER DATABASE
statement or the RMU Set After_Journal command.
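For example (this assumes a default .aij backup file specification
has already been defined for the database):
$ RMU/BACKUP/AFTER_JOURNAL MF_PERSONNEL.RDB ""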
7.2.4 – Command Qualifiers
7.2.4.1 – Accept Label
Accept_Label
Specifies that Oracle RMU should keep the current tape label it
finds on a tape during a backup operation even if that label
does not match the default label or that specified with the
Label qualifier. Operator notification does not occur unless
the tape's protection, owner, or expiration date prohibit writing
to the tape. However, a message is logged (assuming logging is
enabled) and written to the backup journal file (assuming you
have specified the Journal qualifier) to indicate that a label is
being preserved and which drive currently holds that tape.
This qualifier is particularly useful when your backup operation
employs numerous previously used (and thus labeled) tapes and you
want to preserve the labels currently on the tapes.
If you do not specify this qualifier, the default behavior
of Oracle RMU is to notify the operator each time it finds a
mismatch between the current label on the tape and the default
label (or the label you specify with the Label qualifier).
See the description of the Labels qualifier under this command
for information on default labels. See How Tapes are Relabeled
During a Backup Operation in the Usage_Notes help entry under
the Backup Database help entry for a summary of which labels are
applied under a variety of circumstances.
7.2.4.2 – Active IO
Active_IO=max-writes
Specifies the maximum number of write operations to a backup
device that the RMU Backup After_Journal command attempts
simultaneously. This is not the maximum number of write
operations in progress; that value is the product of active
system I/O operations and the number of devices being written
to simultaneously.
The value of the Active_IO qualifier can range from 1 to 5. The
default value is 3. Values larger than 3 can improve performance
with some tape drives.
7.2.4.3 – Block Size
Block_Size=integer
Specifies the maximum record size for the backup file. The size
can vary between 2048 and 65,024 bytes. The default value is
device dependent. The appropriate block size is a compromise
between tape capacity and error rate.
7.2.4.4 – Compression
Compression=LZSS
Compression=Huffman
Compression=ZLIB=level
Nocompression
Allows you to specify the compression method to use before
writing data to the AIJ backup file. This reduces performance,
but may be justified when the AIJ backup file is a disk file,
or is being backed up over a busy network, or is being backed
up to a tape drive that does not do its own compression. You
probably do not want to specify the Compression qualifier when
you are backing up an .aij file to a tape drive that does its
own compression; in some cases doing so can actually result in a
larger file.
This feature works only with the new backup file format; you
must also specify /FORMAT=NEW_TAPE if you use /COMPRESSION.
If you specify the Compression qualifier without a value, the
default is COMPRESSION=ZLIB=6.
The level value (ZLIB=level) is an integer between 1 and 9
specifying the relative compression level with one being the
least amount of compression and nine being the greatest amount
of compression. Higher compression levels use more CPU time
while generally providing better compression. The
default compression level of 6 is a balance between compression
effectiveness and CPU consumption.
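As a sketch (device and file names are hypothetical, and the
target drive is assumed not to perform its own compression), an
.aij backup using ZLIB compression at level 9 might be specified
as:
$ MOUNT/FOREIGN $111$MUA0:
$ RMU/BACKUP/AFTER_JOURNAL/FORMAT=NEW_TAPE/COMPRESSION=ZLIB=9 -
_$ MF_PERSONNEL.RDB $111$MUA0:PERS_AIJ.AIJ_RBF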
OLDER ORACLE RDB 7.2 RELEASES AND COMPRESSED RBF FILES
Prior releases of Oracle Rdb are unable to read RBF files
compressed with the ZLIB algorithm. In order to read
compressed backups with Oracle Rdb 7.2 Releases prior
to V7.2.1, they must be made with /COMPRESSION=LZSS or
/COMPRESSION=HUFFMAN explicitly specified (because the
default compression algorithm has been changed from LZSS to
ZLIB). Oracle Rdb Version 7.2.1 is able to read compressed
backups using the LZSS or HUFFMAN algorithms made with prior
releases.
7.2.4.5 – Continuous
Continuous=(n)
Nocontinuous
Specifies whether the .aij backup process operates continuously.
You specify termination conditions by specifying one or both of
the following:
o The Until qualifier
Specifies the time and date to stop the continuous backup
process.
o The value for n
Specifies the number of iterations Oracle RMU should make
through the set of active .aij files before terminating the
backup operation.
When you use the Continuous qualifier, you must use either the
Until or the Interval qualifier or provide a value for n (or
both) to specify when the backup process should stop. You can
also stop the backup process by using the DCL STOP command when
backing up to disk.
If you specify the Continuous qualifier, Oracle Rdb does not
terminate the backup process after truncating the current .aij
file (when an extensible journal is used) or after switching to
a new journal (when fixed-size journals are used). Instead, the
backup process waits for the period of time that you specify in
the argument to the Interval qualifier. After that time interval,
the backup process tests to determine if the threshold has been
reached (for an extensible journal) or if the journal is full
(for fixed-size journals). It then performs backup operations
as needed and then waits again until the next interval break,
unless the number of iterations or the condition specified with
the Until qualifier has been reached.
If you specify the Continuous qualifier, the backup process
occupies the terminal (that is, no system prompt occurs) until
the process terminates. Therefore, you should usually enter the
command through a batch process.
If you specify the default, the Nocontinuous qualifier, the
backup process stops as soon as it completely backs up the .aij
file or files. The default value for the number of iterations (n)
is 1.
If you specify both the Until qualifier and the Continuous=n
qualifier, the backup operation stops after whichever completes
first. If you specify the Until=12:00 qualifier and the
Continuous=5 qualifier, the backup operation terminates at 12:00
even if only four iterations have completed. Likewise, if five
iterations are completed prior to 12:00, the backup operation
terminates after the five iterations are completed.
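For example (the times and file names are hypothetical), the
following command checks the journals every ten minutes, backing
them up as needed, until 11:00 P.M. or until ten iterations have
completed, whichever comes first:
$ RMU/BACKUP/AFTER_JOURNAL/CONTINUOUS=10/INTERVAL=600 -
_$ /UNTIL="31-DEC-2001 23:00:00" MF_PERSONNEL.RDB PERS_AIJ_BU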
The Continuous qualifier is not recommended when you are backing
up to tape, particularly when the Format=New_Tape qualifier is
used. If your tape operations complete successfully, you do not
want the backup operation to continue in an infinite loop.
Using the DCL STOP command to terminate a backup operation to
tape might result in an incomplete or corrupt backup file.
However, do not delete this backup file; it is extremely
important that you preserve all .aij backup files, even
those produced by failed or terminated backup processes. If
the resultant .aij backup file is discarded, the next .aij
backup file could contain a "gap" in transactions, so that no
transactions would ever be rolled forward from that point on.
7.2.4.6 – Crc[=Autodin II]
Crc[=Autodin_II]
Uses the AUTODIN-II polynomial for the 32-bit CRC calculation and
provides the most reliable end-to-end error detection. This is
the default for NRZ/PE (800/1600 bits/inch) tape drives.
Typing Crc is sufficient to select the Crc=Autodin_II qualifier.
It is not necessary to type the entire qualifier.
7.2.4.7 – Crc=Checksum
Crc=Checksum
Uses one's complement addition, which is the same computation
used to do a checksum of the database pages on disk. This is the
default for GCR (6250 bits/inch) tape drives and for TA78, TA79,
and TA81 tape drives.
The Crc=Checksum qualifier allows detection of errors.
7.2.4.8 – Nocrc
Nocrc
Disables end-to-end error detection. This is the default for TA90
(IBM 3480 class) drives.
NOTE
The overall effect of the Crc=Autodin_II, Crc=Checksum, and
Nocrc qualifier defaults is to improve tape reliability so
that it is equal to that of a disk. If you retain your tapes
longer than 1 year, the Nocrc default might not be adequate.
For tapes retained longer than 1 year, use the Crc=Checksum
qualifier.
If you retain your tapes longer than 3 years, you should
always use the Crc=Autodin_II qualifier.
Tapes retained longer than 5 years could be deteriorating
and should be copied to fresh media.
See the Oracle Rdb Guide to Database Maintenance for details
on using the Crc qualifiers to avoid underrun errors.
7.2.4.9 – Density
Density=(density-value,[No]Compaction)
Specifies the density at which the output volume is to be
written. The default value is the format of the first volume (the
first tape you mount). You do not need to specify this qualifier
unless your tape drives support data compression or more than one
recording density.
The Density qualifier is applicable only to tape drives. Oracle
RMU returns an error message if this qualifier is used and the
target device is not a tape drive.
If your systems are running OpenVMS versions prior to 7.2-1,
specify the Density qualifier as follows:
o For TA90E, TA91, and TA92 tape drives, specify the number in
bits per inch as follows:
- Density = 70000 to initialize and write tapes in the
compacted format.
- Density = 39872 or Density = 40000 for the noncompacted
format.
o For SCSI (Small Computer System Interface) tape drives,
specify Density = 1 to initialize and write tapes using the
drive's hardware data compression scheme.
o For other types of tape drives, you can specify a supported
Density value between 800 and 160000 bits per inch.
o For all tape drives, specify Density = 0 to initialize and
write tapes at the drive's standard density.
Do not use the Compaction or NoCompaction keyword for systems
running OpenVMS versions prior to 7.2-1. On these systems,
compression is determined by the density value and cannot be
specified.
Oracle RMU supports the OpenVMS tape density and compression
values introduced in OpenVMS Version 7.2-1. The following table
lists the added density values supported by Oracle RMU.
DEFAULT 800 833 1600
6250 3480 3490E TK50
TK70 TK85 TK86 TK87
TK88 TK89 QIC 8200
8500 8900 DLT8000
SDLT SDLT320 SDLT600
DDS1 DDS2 DDS3 DDS4
AIT1 AIT2 AIT3 AIT4
LTO2 LTO3 COMPACTION NOCOMPACTION
If the OpenVMS Version 7.2-1 density values and the previous
density values are the same (for example, 800, 833, 1600, 6250),
the specified value is interpreted as an OpenVMS Version 7.2-1
value if the tape device driver accepts them, and as a previous
value if the tape device driver accepts previous values only.
For the OpenVMS Version 7.2-1 values that accept tape compression
you can use the following syntax:
/DENSITY = (new_density_value,[No]Compaction)
In order to use the Compaction or NoCompaction keyword, you must
use one of the following density values that accepts compression:
DEFAULT 3480 3490E 8200
8500 8900 TK87 TK88
TK89 DLT8000 SDLT SDLT320
AIT1 AIT2 AIT3 AIT4
DDS1 DDS2 DDS3 DDS4
SDLT600 LTO2 LTO3
Refer to the OpenVMS documentation for more information about
density values.
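As a sketch (the drive, label, and file names are hypothetical),
an .aij backup to a TK89 drive using compaction might be specified
as:
$ RMU/BACKUP/AFTER_JOURNAL/FORMAT=NEW_TAPE/REWIND -
_$ /DENSITY=(TK89,COMPACTION)/LABEL=(AIJ001) -
_$ MF_PERSONNEL.RDB $111$MUA0:PERS_AIJ.AIJ_RBF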
7.2.4.10 – Edit Filename
Edit_Filename=(Options)
Noedit_Filename
When the Edit_Filename=(options) qualifier is used, the specified
backup file name is edited by appending any or all of the values
specified by the following options to the backup file name:
o Day_Of_Week
The current day of the week expressed as a 1-digit integer (1
to 7). Sunday is expressed as 1; Saturday is expressed as 7.
o Day_Of_Year
The current day of the year expressed as a 3-digit integer
(001 to 366).
o Hour
The current hour of the day expressed as a 2-digit integer (00
to 23).
o Julian_Date
The number of days elapsed since 17-Nov-1858 (the OpenVMS
base date).
o Minute
The current minute of the hour expressed as a 2-digit integer
(00 to 59).
o Month
The current month expressed as a 2-digit integer (01 to 12).
o Sequence
The journal sequence number of the first journal in the backup
operation.
o Vno
Synonymous with the Sequence option. See the description of
the Sequence option.
o Year
The current year (A.D.) expressed as a 4-digit integer.
If you specify more than one option, place a comma between each
option.
The edit is performed in the order specified. For example, the
file backup.aij and the qualifier /EDIT_FILENAME=(HOUR, MINUTE,
MONTH, DAY_OF_MONTH, SEQUENCE) creates a file with the name
backup_160504233.aij when journal 3 is backed up at 4:05 P.M.
on April 23rd.
You can make the name more readable by inserting quoted strings
between each Edit_Filename option. For example, the following
qualifier adds the string "$30_0155-2" to the .aij file name
if the day of the month is the 30th, the time is 1:55 and the
version number is 2:
/EDIT_FILENAME=("$",DAY_OF_MONTH,"_",HOUR,MINUTE,"-",SEQUENCE)
This qualifier is useful for creating meaningful file names for
your backup files and makes file management easier.
The default is the Noedit_Filename qualifier.
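For example, assuming the sample database MF_PERSONNEL (the file
names here are illustrative), the following command appends the
year, month, day of the month, and journal sequence number to the
backup file name:
$ RMU/BACKUP/AFTER_JOURNAL/EDIT_FILENAME=(YEAR,MONTH,DAY_OF_MONTH,"_",SEQUENCE) MF_PERSONNEL BACKUP.AIJ
A backup of journal 7 on April 23, 2024 would then produce a file
with a name similar to backup20240423_7.aij.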
7.2.4.11 – Encrypt
Encrypt=({Value=|Name=}[,Algorithm=])
The Encrypt qualifier encrypts the backup file of the after image
journal.
Specify a key value as a string or the name of a predefined key.
If no algorithm name is specified, the default is DESCBC. For
details on the Value, Name, and Algorithm parameters, see HELP
ENCRYPT.
This feature requires the OpenVMS Encrypt product to be installed
and licensed on this system.
This feature only works for a newer format backup file which
has been created using /FORMAT=NEW_TAPE. Therefore you have
to specify /FORMAT=NEW_TAPE with this command if you also use
/ENCRYPT.
See also the descriptions of the Format=Old_File and
Format=New_Tape qualifiers.
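As a hypothetical example (the key string and tape device name are
illustrative), the following command encrypts the .aij backup file
written to tape; note that the Format=New_Tape qualifier is
required:
$ RMU/BACKUP/AFTER_JOURNAL/FORMAT=NEW_TAPE/ENCRYPT=(VALUE="MySecretKey",ALGORITHM=DESCBC) MF_PERSONNEL MUA0:BACKUP.AIJ_RBF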
7.2.4.12 – Format
Format=Old_File
Format=New_Tape
Specifies the format in which the backup file is to be written.
Oracle Corporation recommends that you specify the Format=Old_
File qualifier (or accept the default) when you back up your .aij
file to disk and that you specify the Format=New_Tape qualifier
when you back up your .aij file to tape.
If you specify the default, the Format=Old_File qualifier, the
RMU Backup command writes the file in a format that is optimized
for a file structured disk. If you specify the Format=New_Tape
qualifier, the Oracle RMU command writes the file in a format
that is optimized for tape storage, including ANSI/ISO labeling
and end-to-end error detection and correction. When you specify
the Format=New_Tape qualifier and back up the .aij file to tape,
you must mount the backup media by using the DCL MOUNT command
before you issue the RMU Backup After_Journal command. The tape
must be mounted as a FOREIGN volume. If you mount the tape as an
OpenVMS volume (that is, you do not mount it as a FOREIGN volume)
and you specify the Format=New_Tape qualifier, you receive an
RMU-F-MOUNTFOR error.
When you back up your .aij file to tape and specify the
Format=New_Tape qualifier you can create a backup copy of the
database (using the RMU Backup command) and a backup of the
.aij file (using the RMU Backup After_Journal command) without
dismounting your tape.
The following tape qualifiers have meaning only when used in
conjunction with the Format=New_Tape qualifier:
Active_IO
Block_Size
Crc
Density
Group_Size
Label
Owner
Protection
Rewind
Tape_Expiration
The Format=New_Tape and the Noquiet_Point qualifiers cannot be
used on the same Oracle RMU command line. See the Usage Notes
Help entry for an explanation.
The default file type is .aij_rbf when you specify the
Format=New_Tape qualifier, and .aij when you specify the
Format=Old_File qualifier.
Although Oracle Corporation recommends that you specify the
Format=New_Tape qualifier for .aij backup operations to tape
and the Format=Old_File qualifier for .aij backup operations to
disk, Oracle RMU does not enforce this recommendation. This is to
provide compatibility with prior versions of Oracle Rdb. See the
Usage Notes Help entry for issues and problems you can encounter
when you do not follow this recommendation.
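For example, assuming a tape drive named MUA0: (the device name is
illustrative), a tape backup using the recommended format might
look as follows; the tape must first be mounted as a FOREIGN
volume:
$ MOUNT/FOREIGN MUA0:
$ RMU/BACKUP/AFTER_JOURNAL/FORMAT=NEW_TAPE MF_PERSONNEL MUA0:BACKUP.AIJ_RBF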
7.2.4.13 – Group Size
Group_Size[=interval]
Nogroup_Size
Specifies the frequency at which XOR recovery blocks are written
to tape. The group size can vary from 0 to 100. Specifying a
group size of zero or specifying the Nogroup_Size qualifier
results in no XOR recovery blocks being written. The Group_Size
qualifier is only applicable to tape, and its default value is
device dependent. Oracle RMU returns an error message if this
qualifier is used and the target device is not a tape device.
7.2.4.14 – Interval
Interval=number-seconds
Nointerval
Specifies the number of seconds for which the backup process
waits. Use this qualifier in conjunction with the Continuous
qualifier and the extensible journaling method. The interval
determines how often to test the active .aij file to determine
if it contains more blocks than the value of the Threshold
qualifier.
If you specify the Interval qualifier without specifying the
number of seconds, or if you omit this qualifier, the default
number of seconds is 60.
Oracle Corporation recommends using the default (Interval=60)
initially because the interval that you choose can affect the
performance of the database. In general, you can arrive at a
good interval time on a given database only by judgment and
experimentation.
If you specify the Nointerval qualifier, the active .aij file is
tested repeatedly with no interval between finishing one cycle
and beginning the next.
You must specify the Continuous qualifier if you specify either
the Interval or Nointerval qualifier.
If you specify both the Interval and Nocontinuous qualifiers, the
Interval qualifier is ignored.
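For example, the following hypothetical command starts a
continuous backup process that checks the active .aij file every
120 seconds against a 512-block threshold (the interval and
threshold values are illustrative):
$ RMU/BACKUP/AFTER_JOURNAL/CONTINUOUS/INTERVAL=120/THRESHOLD=512 MF_PERSONNEL BACKUP.AIJ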
7.2.4.15 – Label
Label=(label-name-list)
Specifies the 1- to 6-character string with which the volumes
of the backup file are to be labeled. The Label qualifier is
applicable only to tape volumes. You must specify one or more
label names when you use the Label qualifier.
You can specify a list of tape labels for multiple tapes. If you
list multiple tape label names, separate the names with commas
and enclose the list of names within parentheses.
If you do not specify the Label (or Accept_Label) qualifier,
Oracle RMU labels the first tape used for a backup operation
with the first 6 characters of the backup file name. Subsequent
default labels are the first 4 characters of the backup file name
appended with a sequential number. For example, if your backup
file is my_backup.rbf, the default tape labels are my_bac, my_b01,
my_b02, and so on.
When you reuse tapes, Oracle RMU compares the label currently
on the tape to the label or labels you specify with the Label
qualifier. If there is a mismatch between the existing label and
a label you specify, Oracle RMU sends a message to the operator
asking if the mismatch is acceptable (unless you also specify the
Accept_Labels qualifier).
If you are reusing tapes, be certain that you load the tapes so
that the label Oracle RMU expects and the label on each tape will
match, or be prepared for a high level of operator intervention.
If you specify fewer labels than are needed, Oracle RMU generates
labels based on the format you have specified. For example, if
you specify Label=TAPE01, Oracle RMU labels subsequent tapes as
TAPE02, TAPE03, and so on up to TAPE99. Thus, many volumes can
be preloaded in the cartridge stacker of a tape drive. The order
is not important because Oracle RMU relabels the volumes. An
unattended backup operation is more likely to be successful if
all the tapes used do not have to be mounted in a specific order.
Once the backup operation is complete, externally mark the tapes
with the appropriate label so that the order can be maintained
for the restore operation. Be particularly careful if you are
allowing Oracle RMU to implicitly label second and subsequent
tapes and you are performing an unattended backup operation.
Remove the tapes from the drives in the order in which they
were written. Apply labels to the volumes following the logic
of implicit labeling (for example, TAPE02, TAPE03, and so on).
Oracle Corporation recommends you use the Journal qualifier when
you employ implicit labeling in a multidrive, unattended backup
operation. The journal file records the volume labels that were
written to each tape drive. The order in which the labels were
written is preserved in the journal. Use the RMU Dump Backup
command to display a listing of the volumes written by each tape
drive.
You can use an indirect file reference with the Label qualifier.
See the Indirect-command-files help entry for more information.
See How Tapes are Relabeled During a Backup Operation in the
Usage_Notes help entry under this command for a summary of which
labels are applied under a variety of circumstances.
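For example, the following hypothetical command writes the backup
to tape and labels the volumes TAPE01 and TAPE02; if more volumes
are needed, Oracle RMU continues the sequence with TAPE03, and so
on:
$ RMU/BACKUP/AFTER_JOURNAL/FORMAT=NEW_TAPE/LABEL=(TAPE01,TAPE02) MF_PERSONNEL MUA0:BACKUP.AIJ_RBF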
7.2.4.16 – Librarian
Librarian=options
Use the Librarian qualifier to back up files to data archiving
software applications that support the Oracle Media Management
interface. The backup file name specified on the command line
identifies the stream of data to be stored in the Librarian
utility. If you supply a device specification or a version number
it will be ignored.
The Librarian qualifier accepts the following options:
o Trace_file=file-specification
The Librarian utility writes trace data to the specified file.
o Level_Trace=n
Use this option as a debugging tool to specify the level of
trace data written by the Librarian utility. You can use a
pre-determined value of 0, 1, or 2, or a higher value defined
by the Librarian utility. The pre-determined values are:
- Level 0 traces all error conditions. This is the default.
- Level 1 traces the entry and exit from each Librarian
function.
- Level 2 traces the entry and exit from each Librarian
function, the value of all function parameters, and the
first 32 bytes of each read/write buffer, in hexadecimal.
o Logical_Names=(logical_name=equivalence-value,...)
You can use this option to specify a list of process logical
names that the Librarian utility can use to specify catalogs
or archives where Oracle Rdb backup files are stored,
Librarian debug logical names, and so on. See the specific
Librarian documentation for the definition of logical names.
The list of process logical names is defined by Oracle RMU
prior to the start of any Oracle RMU command that accesses the
Librarian utility.
The following OpenVMS logical names must be defined for use with
a Librarian utility before you execute an Oracle RMU backup or
restore operation. Do not use the Logical_Names option provided
with the Librarian qualifier to define these logical names.
o RMU$LIBRARIAN_PATH
This logical name must be defined so that the shareable
Librarian image can be loaded and called by Oracle RMU backup
and restore operations. The translation must include the file
type (for example, .exe), and must not include a version
number. The shareable Librarian image must be an installed
(known) image. See the Librarian utility documentation for
the name and location of this image and how it should be
installed.
o RMU$DEBUG_SBT
This logical name is not required. If it is defined, Oracle
RMU will display debug tracing information messages from
modules that make calls to the Librarian shareable image.
You cannot use device-specific qualifiers such as Rewind,
Density, or Label with the Librarian qualifier because the
Librarian utility, not Oracle RMU, handles the storage media.
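As a hypothetical example (the image name and trace file name are
illustrative; see your Librarian documentation for the actual
image name), you might define the required logical name and then
run the backup with entry/exit tracing enabled:
$ DEFINE/SYSTEM RMU$LIBRARIAN_PATH LIBRARIAN_SHR.EXE
$ RMU/BACKUP/AFTER_JOURNAL/LIBRARIAN=(TRACE_FILE=TRACE.LOG,LEVEL_TRACE=1) MF_PERSONNEL MFP_AIJ_STREAM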
7.2.4.17 – Lock Timeout
Lock_Timeout=seconds
Determines the maximum time the .aij file backup operation
will wait for the quiet-point lock and any other locks needed
during online backup operations. When you specify the Lock_
Timeout=seconds qualifier, you must specify the number of seconds
to wait for the quiet-point lock. If the time limit expires, an
error is signaled and the backup operation fails.
When the Lock_Timeout=seconds qualifier is not specified, or if
the value specified is 0, the .aij file backup operation waits
indefinitely for the quiet-point lock and any other locks needed
during an online operation.
The Lock_Timeout=seconds qualifier is ignored if the Noquiet_
Point qualifier is specified.
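For example, the following hypothetical command signals an error
and fails if the quiet-point lock cannot be acquired within 30
seconds:
$ RMU/BACKUP/AFTER_JOURNAL/QUIET_POINT/LOCK_TIMEOUT=30 MF_PERSONNEL BACKUP.AIJ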
7.2.4.18 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request log output and
the Nolog qualifier to prevent it. If you specify neither, the
default is the current setting of the DCL verify switch. (The DCL
SET VERIFY command controls the DCL verify switch.)
7.2.4.19 – Media Loader
Media_Loader
Nomedia_Loader
Use the Media_Loader qualifier to specify that the tape device
receiving the backup file has a loader or stacker. Use the
Nomedia_Loader qualifier to specify that the tape device does
not have a loader or stacker.
By default, if a tape device has a loader or stacker, Oracle
RMU should recognize this fact. However, occasionally Oracle RMU
does not recognize that a tape device has a loader or stacker.
Therefore, when the first backup tape fills, Oracle RMU issues a
request to the operator for the next tape, instead of requesting
the next tape from the loader or stacker. Similarly, sometimes
Oracle RMU behaves as though a tape device has a loader or
stacker when actually it does not.
If you find that Oracle RMU is not recognizing that your tape
device has a loader or stacker, specify the Media_Loader
qualifier. If you find that Oracle RMU expects a loader or
stacker when it should not, specify the Nomedia_Loader qualifier.
7.2.4.20 – Owner
Owner=user-id
Specifies the owner of the tape volume set. The owner is the
user who will be permitted to restore the database. The user-
id parameter must be one of the following types of OpenVMS
identifier:
o A user identification code (UIC) in [group-name,member-name]
alphanumeric format
o A UIC in [group-number,member-number] numeric format
o A general identifier, such as SECRETARIES
o A system-defined identifier, such as DIALUP
The Owner qualifier cannot be used with a backup operation to
disk. When used with tapes, the Owner qualifier applies to
all continuation volumes. Unless the Rewind qualifier is also
specified, the Owner qualifier is not applied to the first
volume. If the Rewind qualifier is not specified, the backup
operation appends the file to a previously labeled tape, so
the first volume can have a protection different from the
continuation volumes.
7.2.4.21 – Prompt
Prompt=Automatic
Prompt=Operator
Prompt=Client
Specifies where server prompts are to be sent. When you specify
Prompt=Automatic, prompts are sent to the standard input device,
and when you specify Prompt=Operator, prompts are sent to the
server console. When you specify Prompt=Client, prompts are sent
to the client system.
7.2.4.22 – Protection
Protection=file-protection
Specifies the system file protection for the backup file produced
by the RMU Backup After_Journal command.
The default file protection varies, depending on whether you
back up the file to disk or tape. This is because tapes do not
allow delete or execute access and the SYSTEM account always
has both read and write access to tapes. In addition, a more
restrictive class accumulates the access rights of the less
restrictive classes.
If you do not specify the Protection qualifier, the default
protection is as follows:
o S:RWED,O:RE,G,W if the backup is to disk
o S:RW,O:R,G,W if the backup is to tape
If you specify the Protection qualifier explicitly, the
differences in protection applied for backups to tape or disk
as noted in the preceding paragraph are applied. Thus, if you
specify Protection=(S,O,G:W,W:R), that protection on tape becomes
(S:RW,O:RW,G:RW,W:R).
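For example, the following hypothetical command explicitly grants
full access to the system and owner classes and read access to
the group:
$ RMU/BACKUP/AFTER_JOURNAL/PROTECTION=(S:RWED,O:RWED,G:R,W) MF_PERSONNEL BACKUP.AIJ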
7.2.4.23 – Quiet Point
Quiet_Point
Noquiet_Point
Specifies whether the quiet-point lock will be acquired when an
.aij backup operation is performed. The default is the Quiet_
Point qualifier. Use of the Quiet_Point qualifier is meaningful
only for a full backup operation; that is, a backup operation
that makes a complete pass through all .aij files ready for
backup, as opposed to one that is done by sequence (specified
with the Sequence qualifier). A full .aij backup operation can
be performed regardless of whether an extensible or a fixed-size
.aij journaling mechanism is being employed.
Each .aij backup operation is assigned an .aij sequence number.
This labeling distinguishes each .aij backup file from previous
.aij backup files. During a recovery operation, it is important
to apply the .aij backup files in the proper sequence. The RMU
Recover command checks the database root file structure and
displays a message telling you the .aij sequence number with
which to begin the recovery operation.
The quiet point is a state where all write transactions
have either been committed or rolled back and no read/write
transactions are in progress. This ensures that the recording
of transactions does not extend into a subsequent .aij backup file.
This backup file can then be used to produce a recovered database
that is in the same state as when the quiet point was reached.
When fixed-size journaling is employed, the Quiet_Point qualifier
is only relevant when the active .aij file is being backed up. In
this case, a quiet point is acquired only once, regardless of the
number of .aij files being backed up.
There is no natural quiet point if someone is writing or waiting
to write to the database at any given time. (A natural quiet
point is one that is not instigated by the use of the QP (quiet
point) Lock.) The .aij backup operation may never be able to
capture a state that does not have uncommitted data in the
database. As a result, the Noquiet_Point qualifier creates .aij
backup files that are not independent of one another. If you
apply one .aij backup file to the database without applying the
next .aij backup file in sequence, the recovery operation will
not be applied completely.
See the Usage_Notes help entry under this command for
recommendations on using the Quiet_Point and Noquiet_Point
qualifiers.
The following combinations of qualifiers on the same command line
are invalid:
o Quiet_Point and Sequence
o Quiet_Point and Wait
o Noquiet_Point and Format=New_Tape
7.2.4.24 – Rename
Rename
Norename
The Rename qualifier creates and initializes a new .aij file and
creates the backup file by renaming the original .aij file. The
effect is that the original .aij file has a new name and the new
.aij file has the same name as the original .aij file.
The Rename qualifier sets the protection on the renamed backup
file so that you can work with it as you would any backup
file. You can specify the new name by using the Edit_Filename
qualifier.
When the Rename qualifier is used, the backup operation is faster
(than when Norename, the default, is specified) because the
duration of the backup operation is the total time required to
rename and initialize the .aij file; the data copy portion of
the backup (reading and writing) is eliminated. However, the disk
containing the .aij file must have sufficient space for both the
new and original .aij files. Note also that the .aij backup file
name must not include a device specification.
NOTE
If there is insufficient space for both the new and original
.aij files when the Rename qualifier is specified, after-
image journaling shutdown is invoked, resulting in a
complete database shutdown.
The Rename qualifier can be used with both fixed-size and
extensible journaling files.
The Norename qualifier copies the contents of the .aij file on
tape or disk and initializes the original .aij file for reuse.
The Norename qualifier results in a slower backup operation (than
when Rename is specified), but it does not require space on the
journal disk for both new and original .aij files.
The default is Norename.
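For example, the following hypothetical command backs up the
journal by renaming it, appending the journal sequence number to
the renamed backup file:
$ RMU/BACKUP/AFTER_JOURNAL/RENAME/EDIT_FILENAME=(SEQUENCE) MF_PERSONNEL BACKUP.AIJ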
7.2.4.25 – Rewind
Rewind
Norewind
Specifies that the magnetic tape that contains the backup file
will be rewound before processing begins. The tape is initialized
according to the Label and Density qualifiers. The Norewind
qualifier is the default and causes the backup file to be created
starting at the current logical end-of-tape (EOT).
These qualifiers are applicable only to tape devices.
7.2.4.26 – Sequence
Sequence=(n,m)
Nosequence
Specifies that the journals with sequence numbers from n to m
inclusive are to be backed up. The values n and m are interpreted
as follows:
o If Sequence = (33, 35) is specified, then the .aij files with
sequence numbers 33, 34, and 35 are backed up.
o If Sequence = (53, 53) is specified, then the .aij file with
sequence number 53 is backed up.
o If Sequence = (53) is specified, then the .aij files with
sequence numbers 53 and lower are backed up, if they have
not been backed up already. For example, if .aij files with
sequence numbers 51, 52, and 53 have not been backed up, then
Sequence = (53) results in these three .aij files being backed
up.
o If Sequence = (55, 53) is specified, then .aij files with
sequence numbers 53, 54, and 55 are backed up.
o If the Sequence qualifier is specified without a value list,
both n and m are set to the sequence number of the next
journal that needs to be backed up.
The default is the Nosequence qualifier. When the default is
accepted, the backup operation starts with the next journal that
needs to be backed up and stops when the termination condition
you have specified is reached.
The following qualifiers cannot be used or have no effect when
used with the Sequence qualifier:
Continuous
Format=New_Tape
Interval
Quiet_Point
Threshold
Until
Furthermore, fixed-size after-image journals must be in use when
this qualifier is specified.
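For example, the following hypothetical command backs up the
fixed-size journals with sequence numbers 33 through 35:
$ RMU/BACKUP/AFTER_JOURNAL/SEQUENCE=(33,35) MF_PERSONNEL BACKUP.AIJ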
7.2.4.27 – Tape Expiration
Tape_Expiration=date-time
Specifies the expiration date of the .aij backup file. Note that
when Oracle RMU reads a tape, it looks at the expiration date
in the file header of the first file on the tape and assumes
the date it finds in that file header is the expiration date
for the entire tape. Therefore, if you are backing up an .aij
file to tape, specifying the Tape_Expiration qualifier only has
meaning if the .aij file is the first file on the tape. You can
guarantee that the .aij file will be the first file on the tape
by specifying the Rewind qualifier and overwriting any existing
files on the tape.
When the first file on the tape contains an expiration date
in the file header, you cannot overwrite the tape before the
expiration date unless you have the OpenVMS SYSPRV or BYPASS
privilege.
Similarly, when you attempt to perform a recover operation with
an .aij file on tape, you cannot perform the recover operation
after the expiration date recorded in the first file on the tape
unless you have the OpenVMS SYSPRV or BYPASS privilege.
By default, no expiration date is written to the .aij file
header. In this case, if the .aij file is the first file on the
tape, the tape can be overwritten immediately. If the .aij file
is not the first file on the tape, the ability to overwrite the
tape is determined by the expiration date in the file header of
the first file on the tape.
You cannot explicitly set a tape expiration date for an entire
volume. The volume expiration date is always determined by
the expiration date of the first file on the tape. The Tape_
Expiration qualifier cannot be used with a backup operation to
disk.
See the Oracle Rdb Guide to Database Maintenance for information
on tape label processing.
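For example, the following hypothetical command rewinds the tape
so that the .aij backup file becomes the first file on the volume
and records an expiration date in its file header (the date and
device name are illustrative):
$ RMU/BACKUP/AFTER_JOURNAL/FORMAT=NEW_TAPE/REWIND/TAPE_EXPIRATION=31-DEC-2025 MF_PERSONNEL MUA0:BACKUP.AIJ_RBF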
7.2.4.28 – Threshold
Threshold=disk-blocks
Nothreshold
This qualifier can be used only when extensible journaling is
enabled. It cannot be used with fixed-size journaling.
The Threshold qualifier sets an approximate limit on the size
of the active .aij file. When the size of the active .aij file
exceeds the threshold, you cannot initiate new transactions
until the backup process finishes backing up and truncating
(resetting) the active .aij file. During the backup operation,
existing transactions can continue to write to the .aij file.
Before new transactions can start, all activity issuing from
existing transactions (including activity occurring after the
threshold is exceeded) must be moved from the active .aij disk
file to the .aij backup file. At that time, the active .aij file
will be completely truncated.
If you use the default, the Nothreshold qualifier, each backup
cycle will completely back up the active .aij file. Oracle
Corporation recommends using the Nothreshold qualifier.
An appropriate value for the Threshold qualifier depends on the
activity of your database, how much disk space you want to use,
whether backup operations will be continuous, and how long you
are willing to wait for a backup operation to complete.
See the Oracle Rdb7 Guide to Database Performance and Tuning for
more information on choosing an appropriate threshold value.
7.2.4.29 – Until
Until=time
Specifies the approximate future time and date to stop the
continuous backup process. There is no default.
7.2.4.30 – Wait
Wait=n
Nowait
Specifies whether the backup operation should wait (the Wait
qualifier) or terminate (the Nowait qualifier) when it encounters
a journal that is not ready to be backed up. The value specified
for the Wait qualifier is the time interval in seconds between
attempts to back up the journal that was not ready.
The Wait or Nowait qualifier can only be specified if the
Sequence qualifier is also specified. When the Wait qualifier is
specified, the default value for the time interval is 60 seconds.
The default is the Nowait qualifier.
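For example, the following hypothetical command backs up journals
10 through 12, retrying every 30 seconds if one of them is not
yet ready to be backed up:
$ RMU/BACKUP/AFTER_JOURNAL/SEQUENCE=(10,12)/WAIT=30 MF_PERSONNEL BACKUP.AIJ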
7.2.5 – Usage Notes
o To use the RMU Backup After_Journal command for a database,
you must have the RMU$BACKUP privilege in the root file access
control list (ACL) for the database or the OpenVMS SYSPRV or
BYPASS privilege.
o See the Oracle Rdb7 Guide to Database Performance and Tuning
for information on how to enhance the performance of the RMU
Backup After_Journal command.
NOTE
When fast commit is enabled and an extensible .aij file
configuration is used, the after-image journal backup
process compresses and retains some fraction of the
original .aij file (in a new version of the current .aij
file). This fraction can approach 100% of the original
size. Therefore, be sure to reserve enough space to
duplicate the maximum size .aij file before backing it
up.
Oracle Corporation recommends that you schedule .aij
backup operations with sufficient frequency and check the
free space and journal file size periodically; you need
to know when you are approaching a critical situation in
terms of free space. (This is good practice whether or
not you have fast commit enabled.)
However, if you issue the RMU Backup After_Journal
command with fast commit enabled and find that you
have insufficient space for the .aij file, you have the
following options:
o Delete unneeded files to create sufficient space on
the disk where the .aij file is located.
o Temporarily disable fast commit and back up the .aij
file.
o Close the database, disable after-image journaling,
enable a new after-image journal file, and perform a
backup operation. (The database can be opened either
before or after the backup operation.)
o Close the database. Create a bound volume set or
stripe set that is large enough for the .aij file
and copy the .aij file there. Use the RMU Set After_
Journal command to change the .aij file name (or
redefine the logical name if one was used to locate
the journal), and then open the database again.
o Note the following issues and problems you can encounter when
you specify the Format=Old_File qualifier for an .aij backup
operation to tape or the Format=New_Tape qualifier for an .aij
backup operation to disk:
- If you use the Format=Old_File qualifier for an .aij
backup operation to tape and the tape is mounted as a
FOREIGN volume, the result is an unlabeled tape that can
be difficult to use for recovery operations.
Therefore, if you use the Format=Old_File qualifier with
an .aij backup operation to tape, you must mount the tape
as an OpenVMS volume (that is, do not specify the /FOREIGN
qualifier with the DCL MOUNT command).
- You must remember (or record) the format you use when you
back up your .aij file and specify that same format when
you issue an RMU Dump After_Journal, RMU Optimize After_
Journal, or RMU Recover command for the .aij backup file.
If you always follow the guidelines of specifying
Format=New_Tape for tape backups and Format=Old_File for
disk backups, you do not need to track the format you
specified for the .aij backup operation for future use
with the other Oracle RMU .aij commands.
- If you specify Format=Old_File for a backup operation
to tape and the .aij spans tape volumes, you might have
problems recovering the .aij file.
o You can use the RMU Backup After_Journal command to save disk
space by spooling the .aij file to tape.
o When you use extensible .aij files, note that although a new
version of the .aij file might be created when the after-image
backup operation begins, the old .aij file continues to be
active and growing. Until the switch occurs (which could be
several hours after the creation of the new version of the
.aij file), the old .aij file is still being accessed. For
this and other reasons, you should never use the DCL DELETE or
DCL PURGE on .aij files (or any database files).
o The following list provides usage information for the Quiet_
Point and Noquiet_Point qualifiers:
- If the backup operation stalls when you attempt a quiet-
point Oracle RMU backup operation, it may be because
another user is holding the quiet-point lock. In some
cases, there is no way to avoid this stall. However, you
may find the stall is caused by a user who has previously
issued and completed a read-write transaction, and is
currently running a read-only transaction. When this user
started the read-write transaction his or her process
acquired the quiet-point lock. Ordinarily, such a process
retains this lock until it detaches from the database.
You can set the RDM$BIND_SNAP_QUIET_POINT logical name to
control whether or not such a process retains the quiet-
point lock. Set the value of the logical name to "1" to
allow such a process to hold the quiet-point lock until it
detaches from the database. Set the value of the logical name
to "0" to ensure that the process releases the quiet-point lock
prior to starting a read-only transaction.
- When devising your backup strategy for both the database
and the after-image journal files, keep in mind the trade-
offs between performing quiet-point backup operations and
noquiet-point backup operations. A noquiet-point backup
operation is quicker than a quiet-point backup operation,
but usually results in a longer recovery operation. Because
transactions can span .aij files when you perform noquiet-
point .aij backup operations, you might have to apply
numerous .aij files to recover the database. In a worst-
case scenario, this could extend back to your last quiet-
point .aij or database backup operation. If you rarely
perform quiet-point backup operations, recovery time could
be excessive.
One method you can use to balance these trade-offs is
to perform regularly scheduled quiet-point .aij backup
operations followed by noquiet-point database backup
operations. (You could do the converse, but a quiet-
point backup of the .aij file improves the performance
of the recovery operation should such an operation become
necessary.) Periodically performing a quiet-point .aij
backup operation helps to ensure that your recovery time
will not be excessive.
- You cannot specify the Noquiet_Point qualifier with the
Format=New_Tape qualifier because an .aij file created with
the Noquiet_Point qualifier does not end on a quiet point.
Some transactions can bridge several backup files. When
you recover from these backup files you frequently must
apply several backup files in the same RMU Recover command.
However, the RMU Recover command with the Format=New_Tape
qualifier can only process one backup file at a time, so it
cannot support backup files created with the Noquiet_Point
qualifier.
o Oracle RMU tape operations do not automatically allocate the
tape drives used. In an environment where many users compete
for a few tape drives, it is possible for another user to
seize a drive while Oracle RMU is waiting for you to load the
next tape volume.
To prevent this, issue a DCL ALLOCATE command for the drives
you will be using before you issue the Oracle RMU command,
and then issue a DCL DEALLOCATE command after you complete the
Oracle RMU command.
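For example, assuming a tape drive named MUA0: (the device name
is illustrative), the sequence might look as follows:
$ ALLOCATE MUA0:
$ MOUNT/FOREIGN MUA0:
$ RMU/BACKUP/AFTER_JOURNAL/FORMAT=NEW_TAPE MF_PERSONNEL MUA0:BACKUP.AIJ_RBF
$ DEALLOCATE MUA0: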
o The Label qualifier can be used with indirect file
reference. See the Indirect-Command-Files help entry for more
information.
o If an .aij backup process fails or is terminated prematurely,
the user might discard the resultant .aij backup file because
the backup operation was not completed. However, all .aij
backup files, including those produced by a failed backup
process, are necessary to recover a database. If an .aij
backup file of a failed backup process is discarded, the
database is not recoverable from that point forward. This
is especially important if you use magnetic tapes as the .aij
backup media; in this case, preserve this magnetic tape and do
not reuse it.
o When an .aij backup process, especially one running in
continuous mode (with the Continuous qualifier), writes to the
.aij backup file,
it is possible for the transferred data to be deleted from the
database .aij file. If the backup process subsequently fails
or is prematurely terminated (for example with Ctrl/Y or the
DCL STOP command), it might not be possible to retransfer the
data to the subsequent .aij backup file because the data was
deleted from the active database .aij file.
Therefore, it is extremely important that you preserve all
.aij backup files, even those produced by failed or terminated
backup processes. If the resultant .aij backup file is
discarded, the next .aij backup file could contain a "gap"
in transactions, so that no transactions would ever be rolled
forward from that point on.
This problem is more severe when backing up directly to tape.
Therefore, when backing up to tape, you should back up one
journal at a time, rather than using an open-ended or long-
duration backup operation.
NOTE
If this problem occurs, the database is not inconsistent
or corrupt. Rather, the database cannot be rolled forward
past the discarded .aij backup file.
The solution to this problem is to preserve all .aij backup
files to ensure that a database can be completely recovered.
If you have discarded an .aij backup file, perform a full and
complete database backup operation immediately to ensure that
the database can be restored up to the current transaction.
o When an AIJ backup operation completes, the after-image
journal files are initialized with a pattern of -1 (hex
FF) bytes. This initialization is designed to be as fast as
possible. It fully utilizes the I/O subsystem by performing
many large asynchronous I/O operations at once. However, this
speed can come at the cost of a high load on I/O components
during the initialization. This load could slow down other I/O
operations on the system.
You can use two logical names to control the relative I/O load
that the AIJ initialization operation places on the system.
If you define these logical names in the system logical
name table, they are translated each time an AIJ file is
initialized.
The RDM$BIND_AIJ_INITIALIZE_IO_COUNT logical name specifies
the number of asynchronous I/O operations that are queued at
once to the AIJ file. If the logical name is not defined, the
default value is 15, the minimum value is 1, and the maximum
value is 32.
The RDM$BIND_AIJ_INITIALIZE_IO_SIZE logical name controls
the number of 512-byte disk blocks to be written per I/O
operation. If the logical name is not defined, the default
value is 127, the minimum value is 4, and the maximum value is
127.
Reducing the value of either logical name will probably increase
the amount of time needed to initialize the AIJ file after a
backup. However, it may also reduce the load on the I/O subsystem.
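For example, to reduce the I/O load that AIJ initialization places
on the system, you might define smaller values than the defaults in
the system logical name table (the values shown are illustrative;
any values you choose must fall within the ranges given above):
$ DEFINE/SYSTEM RDM$BIND_AIJ_INITIALIZE_IO_COUNT 4
$ DEFINE/SYSTEM RDM$BIND_AIJ_INITIALIZE_IO_SIZE 64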
o You should use the density values added in OpenVMS Version
7.2-1 for OpenVMS tape device drivers that accept them because
previously supported values may not work as expected. If
previously supported values are specified for drivers that
support the OpenVMS Version 7.2-1 density values, the older
values are translated to the Version 7.2-1 density values if
possible. If the value cannot be translated, a warning message
is generated, and the specified value is used.
If you use density values added in OpenVMS Version 7.2-1 for
tape device drivers that do not support them, the values are
translated to acceptable values if possible. If the value
cannot be translated, a warning message is generated and the
density value is translated to the existing default internal
density value (MT$K_DEFAULT).
One of the following density-related errors is generated if
there is a mismatch between the specified density value and
the values that the tape device driver accepts:
%DBO-E-DENSITY, TAPE_DEVICE:[000000]DATABASE.BCK; does not support
specified density
%DBO-E-POSITERR, error positioning TAPE_DEVICE:
%DBO-E-BADDENSITY, The specified tape density is invalid for
this device
o If you want to use an unsupported density value, use the VMS
INITIALIZE and MOUNT commands to set the tape density. Do not
use the Density qualifier.
o When you use the RMU Backup After_Journal command with the
Log qualifier, the DCL global symbol RDM$AIJ_LAST_OUTPUT_FILE
is automatically created. The value of the symbol is the full
output backup AIJ file specification.
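For example, a command procedure might use the symbol to locate and
archive the backup file just created (the file and directory names
shown are illustrative):
$ RMU/BACKUP/AFTER_JOURNAL/LOG MF_PERSONNEL MFPERS_BKUP.AIJ
$ SHOW SYMBOL RDM$AIJ_LAST_OUTPUT_FILE
$ COPY 'RDM$AIJ_LAST_OUTPUT_FILE' DISK9:[ARCHIVE]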
o Because data stream names representing the database are
generated based on the backup file name specified for the
Oracle RMU backup command, you must either use a different
backup file name to store the next backup of the database
to the Librarian utility or first delete the existing data
streams generated from the backup file name before the same
backup file name can be reused.
To delete the existing data streams stored in the Librarian
utility, you can use a Librarian management utility or the
Oracle RMU Librarian/Remove command.
o The system logical RDM$BIND_AIJBCK_CHECKPOINT_TIMEOUT can
be configured to control the checkpoint stall duration
independent of the AIJ shutdown parameter. This logical works
for both the AIJ backup and Automatic Backup Server (ABS)
utilities.
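For example, you might define the logical name in the system
logical name table as follows (the value shown is illustrative;
consult the Oracle Rdb documentation for the valid range and units
before setting it on a production system):
$ DEFINE/SYSTEM RDM$BIND_AIJBCK_CHECKPOINT_TIMEOUT 60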
7.2.6 – Examples
Example 1
Assuming that you have enabled after-image journaling for the MF_
PERSONNEL database, the following command causes extensible .aij
entries to be backed up continuously until the time specified.
$ RMU/BACKUP/AFTER_JOURNAL/CONTINUOUS/THRESHOLD=500 -
_$ /INTERVAL=300/UNTIL="01-JUL-1996 16:15:00.00" -
_$ MF_PERSONNEL.RDB DISK12:[PERS_AIJ]BU_PERSONNEL.AIJ
Every 300 seconds, the backup process tests to determine if the
active .aij file on disk has reached the threshold size of 500
blocks. If not, transaction processing continues normally for one
or more 300-second intervals until the threshold test indicates
that the active .aij file has reached a size of at least 500
blocks. When the .aij file reaches that file size, Oracle RMU
allows existing transactions to continue to write to the active
.aij file but does not allow new transactions to start.
Assuming that the active .aij file contains 550 blocks, Oracle
Rdb moves those 550 blocks to the backup journal file and deletes
them from the active journal file. Then, the backup process
determines if the transactions already in progress have written
more journal records to the active journal file during the backup
operation. If so, Oracle RMU moves those journal records to the
backup file.
After Oracle Rdb completely moves the active journal file,
it truncates the journal file to 0 blocks. Oracle Rdb then
allows new transactions to start and the backup process resumes
threshold testing at 300-second intervals. The backup process
continues until the time and date specified by the Until
qualifier.
Example 2
The following examples show backing up .aij files in sequence.
Note that a number of transactions were committed to the database
between backup operations.
$ RMU/BACKUP/AFTER_JOURNAL/LOG MF_PERSONNEL MFPERS_BKUP_AIJ1.AIJ
%RMU-I-AIJBCKBEG, beginning after-image journal backup operation
%RMU-I-OPERNOTIFY, system operator notification:
Oracle Rdb V7.2 Database DISK1:[DB]MF_PERSONNEL.RDB;1
Event Notification AIJ backup operation started
%RMU-I-AIJBCKSEQ, backing up after-image journal
sequence number 0
%RMU-I-LOGBCKAIJ, backing up after-image journal
AIJ1 at 16:35:53.41
%RMU-I-LOGCREBCK, created backup file
DISK1:[DB]MFPERS_BKUP_AIJ1.AIJ;1
%RMU-I-AIJBCKSEQ, backing up after-image journal
sequence number 1
%RMU-I-LOGBCKAIJ, backing up after-image journal
AIJ2 at 16:35:54.58
%RMU-I-QUIETPT, waiting for database quiet point
%RMU-I-OPERNOTIFY, system operator notification:
Oracle Rdb V7.2 Database DISK1:[DB]MF_PERSONNEL.RDB;1
Event Notification AIJ backup operation completed
%RMU-I-AIJBCKEND, after-image journal backup operation
completed successfully
%RMU-I-LOGAIJJRN, backed up 2 after-image journals
at 16:35:56.40
%RMU-I-LOGAIJBLK, backed up 508 after-image journal blocks
at 16:35:56.41
.
.
.
$ ! More transactions committed to the database
.
.
.
$ RMU/BACKUP/AFTER_JOURNAL/LOG MF_PERSONNEL MFPERS_BKUP_AIJ2.AIJ
%RMU-I-AIJBCKBEG, beginning after-image journal backup operation
%RMU-I-OPERNOTIFY, system operator notification:
Oracle Rdb V7.2 Database
DISK1:[DB]MF_PERSONNEL.RDB;1 Event Notification
AIJ backup operation started
%RMU-I-AIJBCKSEQ, backing up after-image journal sequence number 2
%RMU-I-LOGBCKAIJ, backing up after-image journal AIJ1 at 16:47:44.66
%RMU-I-LOGCREBCK, created backup file
DISK2:[AIJ]MFPERS_BKUP_AIJ2.AIJ;1
%RMU-I-OPERNOTIFY, system operator notification:
Oracle Rdb V7.2 Database
DISK1:[DB]MF_PERSONNEL.RDB;1 Event Notification
AIJ backup operation completed
%RMU-I-AIJBCKEND, after-image journal backup operation completed
successfully
%RMU-I-LOGAIJJRN, backed up 1 after-image journal at 16:47:46.57
%RMU-I-LOGAIJBLK, backed up 254 after-image journal blocks at
16:47:46.57
Example 3
The following example uses the Edit_Filename qualifier to give
the .aij backup file a meaningful file name. The Rename qualifier
specifies that Oracle RMU should create the backup file by
renaming the current .aij file and by creating a new .aij file
with the same name as the original .aij file.
$ RMU/BACKUP/AFTER_JOURNAL MF_PERSONNEL -
_$ /EDIT_FILENAME=(SEQUENCE,"_",HOUR,"_",MINUTE,"_",MONTH,"_", -
_$ DAY_OF_MONTH) AIJ2/RENAME
$ DIR DISK1:[DB.AIJ2]*.AIJ
Directory DISK1:[DB.AIJ_TWO]
AIJ23_15_46_07_09.AIJ;1
Example 4
The following example shows the syntax to use when you want the
.aij backup file name to default to that previously specified
with the RMU Set After_Journal command. Note that the .aij backup
file name used is that which corresponds to the first .aij file
included in the backup operation.
$ RMU/SET AFTER_JOURNAL MF_PERSONNEL /ENABLE/RESERVE=5 -
_$ /ADD=(NAME=AIJ1, FILE=DISK1:[AIJ]AIJ_ONE, -
_$ BACKUP_FILE=DISK4:[AIJBCK]AIJ1BCK) -
_$ /ADD=(NAME=AIJ2, FILE=DISK2:[AIJ]AIJ_TWO, -
_$ BACKUP_FILE=DISK4:[AIJBCK]AIJ2BCK) -
_$ /ADD=(NAME=AIJ3, FILE=DISK3:[AIJ]AIJ_THREE, -
_$ BACKUP_FILE=DISK4:[AIJBCK]AIJ3BCK)
%RMU-W-DOFULLBCK, full database backup should be done to
ensure future recovery
$ !
$ !Assume a backup operation was performed and other database
$ !activity occurs.
$ !Then back up the .aij files:
$ !
$ RMU/BACKUP/AFTER_JOURNAL MF_PERSONNEL.RDB ""
$ !
$ DIR DISK4:[AIJBCK]
Directory DISK4:[AIJBCK]
AIJ1BCK.AIJ;1
Example 5
The following example uses a density value with compression:
$ RMU/BACKUP/AFTER_JOURNAL/DENSITY=(TK89,COMPACTION)/REWIND -
_$ /LABEL=(LABEL1,LABEL2) MF_PERSONNEL TAPE1:MFP.AIJ, TAPE2:
7.3 – Plan
Executes a backup plan file previously created with the RMU
Backup command (or created manually by the user).
7.3.1 – Description
A backup plan file is created when you execute an RMU Backup
command with the Parallel and List_Plan qualifiers. See Backup
Database for details on creating a plan file and the format of a
plan file.
7.3.2 – Format
RMU/Backup/Plan plan-file-spec

Command Qualifiers                      Defaults

/[No]Execute                            Execute
/List_Plan=output-file                  None
7.3.3 – Parameters
7.3.3.1 – plan-file-spec
The file specification for the backup plan file. The default file
extension is .plan.
7.3.4 – Command Qualifiers
7.3.4.1 – Execute
Execute
Noexecute
The Execute qualifier specifies that Oracle RMU is to execute
the plan file. The Noexecute qualifier specifies that Oracle RMU
should not execute the plan file, but instead perform a validity
check on the contents of the plan file.
The validity check determines such things as whether the storage
area names assigned to each worker executor exist.
By default, Oracle RMU executes the backup plan file when the RMU
Backup Plan command is issued.
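For example, you might first validate a plan file without executing
it, and then execute it once it passes the validity check (the plan
file name is illustrative):
$ RMU/BACKUP/PLAN/NOEXECUTE PARTIAL.PLAN
$ RMU/BACKUP/PLAN PARTIAL.PLAN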
7.3.4.2 – List Plan
List_Plan=output-file
Specifies that Oracle RMU should generate a new plan file and
write it to the specified output file. This new plan file is
identical to the plan file you specified on the command line (the
"original" plan file) with the following exceptions:
o Any user-added comments in the original plan file do not
appear in the new plan file.
o The new plan file is formatted to match the standard format
for RMU Backup plan files.
7.3.5 – Usage Notes
o To use the RMU Backup Plan command for a database, you must
have the RMU$BACKUP privilege in the root file access control
list (ACL) for the database or the OpenVMS SYSPRV or BYPASS
privilege.
o To execute the RMU Backup Plan command, Oracle SQL/Services
must be installed on your system.
7.3.6 – Examples
Example 1
The following example first creates a plan file by issuing an
RMU Backup command with the Parallel and List_Plan qualifiers.
Oracle RMU does not execute the plan file because the Noexecute
qualifier is specified. The second command issues the RMU Backup
Plan command to execute the plan file created with the RMU Backup
command.
$ ! Create the Backup plan file:
$ !
$ RMU/BACKUP/PARALLEL=(EXEC=4, NODE=(NODE1, NODE2)) -
_$ /LIST_PLAN=(PARTIAL.PLAN)/NOEXECUTE/INCLUDE=(RDB$SYSTEM, -
_$ EMPIDS_LOW, EMPIDS_MID, EMPIDS_OVER, SALARY_HISTORY, EMP_INFO) -
_$ /LABEL=(001, 002, 003, 004, 005, 006, 007, 008, 009) -
_$ /CHECKSUM_VERIFICATION -
_$ MF_PERSONNEL TAPE1:MF_PARTIAL.RBF, TAPE2:, TAPE3, TAPE4
$ !
$ ! Execute the plan file created with the previous command:
$ !
$ RMU/BACKUP/PLAN partial.plan
8 – Checkpoint
When fast commit is enabled, requests that each active database
process on each node flush updated database pages from its buffer
pool to disk.
8.1 – Description
Usually, each process performs a checkpoint operation after a
certain set of thresholds has been exceeded. The RMU Checkpoint
command allows you to force each process to perform a checkpoint
operation on demand.
Performing a checkpoint operation is useful for several purposes.
A checkpoint operation with the Wait qualifier causes all updated
database pages to be flushed to disk. A checkpoint operation
also improves the redo performance of the database recovery (DBR)
process (although the per-process parameters should have already
been properly initialized with this goal in mind).
When the Checkpoint command with the Wait qualifier completes
(the system prompt is returned), all active processes have
successfully performed a checkpoint operation.
When the system prompt is returned after you issue the Checkpoint
command with the Nowait qualifier, there is no guarantee that
all active processes have successfully performed a checkpoint
operation.
8.2 – Format
RMU/Checkpoint root-file-spec

Command Qualifiers                      Default

/[No]Wait[/Until=date-and-time]         /Wait
8.3 – Parameters
8.3.1 – root-file-spec
The root file specification for the database you want to
checkpoint. You can use either a full or partial file
specification, or a logical name.
If you specify only a file name, Oracle Rdb looks for the
database in the current default directory. If you do not specify
a file extension, Oracle Rdb assumes a file extension of .rdb.
8.4 – Command Qualifiers
8.4.1 – Wait
Wait[/Until]
Nowait
Specifies whether or not the system prompt is to be returned
before the checkpoint operation completes.
When you specify the Wait qualifier without the Until qualifier,
the system prompt is not returned to you until all processes have
flushed updated database pages to disk. The Wait qualifier is the
default.
Used with the Wait qualifier, the Until qualifier specifies the
time at which the RMU Checkpoint/Wait command stops waiting
for the checkpoint and returns an error message. If you do not
specify the Until qualifier, the wait is indefinite.
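For example, the following command waits for the checkpoint
operation to complete, but stops waiting and returns an error
message if the operation has not completed by the specified time
(the date and time shown are illustrative):
$ RMU/CHECKPOINT/WAIT/UNTIL="01-JUL-1996 17:00:00.00" MF_PERSONNEL.RDB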
When you specify the Nowait qualifier, the system prompt
is returned immediately, before all processes have flushed
database pages to disk. In addition, when you specify the Nowait
qualifier, there is no guarantee that all processes will flush
their database pages to disk.
The Nowait qualifier is useful when it is more essential that the
system prompt be returned immediately than it is to be certain
that all processes have checkpointed.
8.5 – Usage Notes
o To use the RMU Checkpoint command for a database, you must
have the RMU$BACKUP or RMU$OPEN privilege in the root file
access control list (ACL) for the database or you must have
the OpenVMS WORLD privilege.
o The RMU Checkpoint command is useful only if the database fast
commit feature has been enabled. If the fast commit feature is
disabled, this command does nothing.
For more information on the fast commit feature, see the FAST
COMMIT IS ENABLED section of the SQL ALTER DATABASE statement
in the Oracle Rdb SQL Reference Manual.
8.6 – Examples
Example 1
The following command causes all the active database processes on
all nodes to immediately perform a checkpoint operation:
$ RMU/CHECKPOINT MF_PERSONNEL.RDB
Example 2
The following command requests that all the active database
processes on all nodes perform a checkpoint operation and that
the system prompt be returned to you immediately. In this case,
there is no guarantee that all processes will actually perform a
checkpoint operation.
$ RMU/CHECKPOINT/NOWAIT MF_PERSONNEL.RDB
9 – Close
Closes an open database.
You should always specify the Wait qualifier, unless you are
attempting to recover from some failure. When you specify the
Wait qualifier, Oracle RMU performs all the auxiliary actions
required to close and recover the database clusterwide, and it
does not return the system prompt until those actions have been
completed.
If you use the RMU Close command with the Nowait qualifier, the
database must be open on the node where you issue the command.
Otherwise, you will receive an error message stating that the
database is not known. The system prompt is returned immediately,
but it is only an indication that the database will be closed
as soon as all other users have finished accessing the database.
Therefore, the Wait qualifier is used almost exclusively.
9.1 – Description
The RMU Close command closes an open database. A database root
file is considered open if it has been specified in a previous
RMU Open command or has active users attached to it.
You can close the database immediately by specifying the Abort
qualifier, or you can allow current users to finish their session
by specifying the Noabort qualifier.
If you have specified manual opening for your database (with
the OPEN IS MANUAL clause of the SQL ALTER DATABASE statement),
you must use the RMU Open command to manually open the database
before any users can invoke it and the RMU Close command to
manually close the database.
If you have specified automatic opening for your database
(with the OPEN IS AUTOMATIC clause of the SQL ALTER DATABASE
statement), the RMU Close command affects current database users
only. Current processes are detached from the database but they
and new processes can immediately reattach to the database.
Use the RMU Show Users command to display information about
databases currently in use on your node. Use the RMU Dump Users
command to display information about databases currently in use
on your cluster.
9.2 – Format
RMU/Close root-file-spec [,...]

Command Qualifiers                      Defaults

/[No]Abort=option                       /Abort=Forcex
/[No]Cluster                            See description
/Path                                   None
/[No]Statistics=Export                  /Nostatistics
/[No]Wait                               /Nowait
9.3 – Parameters
9.3.1 – root-file-spec
root-file-spec[,...]
An open database root file. The default file extension is .rdb.
9.4 – Command Qualifiers
9.4.1 – Abort
Abort=option
Noabort
Specifies whether to close the database immediately or allow
processes to complete.
The Abort qualifier has two options. Both refer to OpenVMS system
services. The options are as follows:
o Forcex
When you use the Forcex (forced exit) option, recovery units
are recovered and no recovery-unit journal (.ruj) files are
left in the directories. Therefore, the RMU Backup command
works. The option cannot force an exit of a database process
with a spawned subprocess or a suspended or swapped out
process. It aborts batch jobs that are using the database.
Forcex is the default.
o Delprc
When you use the Delprc (delete process) option, recovery
units are not recovered. The .ruj files are left in the
directories to be recovered on the next invocation of the
database. The processes and any subprocesses of all database
users are deleted, thereby deleting the processes from the
database. Therefore, if you attempt to issue an RMU Backup
command, you might receive the following error message:
%RMU-F-MUSTRECDB, database must be closed or recovered
The Delprc and Forcex options are based on OpenVMS system
services $DELPRC and $FORCEX. Refer to the OpenVMS documentation
set for more information.
With the Noabort qualifier, users already attached to the database
can continue, and the root file global sections remain mapped
in virtual address space until all users exit the database. No
new users are allowed to attach to the database. When all current
images terminate, Oracle RMU closes the database.
9.4.2 – Cluster
Cluster
Nocluster
Specifying the Cluster qualifier causes Oracle RMU to attempt
to close a database on all nodes in a clustered environment
that currently have the database open. Specifying the Cluster
qualifier is similar to issuing the RMU Close command on every
node in the cluster. Specifying the Nocluster qualifier causes
Oracle RMU to close a database only on the cluster node from
which you issue the RMU Close command.
The default is the Cluster qualifier if you specify the Wait
qualifier. The default is the Nocluster qualifier if you specify
the Nowait qualifier.
The following list describes the behavior of the command when
you use various combinations of the [No]Cluster and [No]Wait
qualifiers together in the same command line:
o Cluster and Wait
When you specify the Cluster and Wait qualifiers, the RMU
Close command closes the database on every node in the
cluster, even if the database is not opened on the node from
which the command is issued.
Because you specified the Cluster and Wait qualifiers, the RMU
Close command closes and recovers the database on every node
in the cluster before the DCL prompt is returned to you.
o Cluster and Nowait
When you specify the Cluster and Nowait qualifiers, the RMU
Close command attempts to close the database on every node in
the cluster. If the database is not opened on the node from
which the Oracle RMU command is issued, the command cannot
close the database on any node, and you receive the following
error message:
%RDMS-F-CANTCLOSEDB, database could not be closed as requested
-RDMS-F-DBNOTACTIVE, database is not being used
%RMU-W-FATALERR, fatal error on DISK1:[USER1]DATABASE.RDB;1
Because you used the Nowait qualifier, the database might
not yet be closed on one or more nodes when the DCL prompt is
returned to you. When you specify the Nowait qualifier, you
can receive SYS-F-ACCONFLICT errors when you attempt to access
a database after you have issued the RMU Close command with
the Cluster and Nowait qualifiers and the DCL prompt has been
returned, but the monitor has not yet closed the database on
all nodes in the cluster.
o Nocluster and Wait
This combination provides the ability to have database
shutdown complete on the local node before Oracle RMU returns
to the DCL prompt.
o Nocluster and Nowait
When you specify the Nocluster and Nowait qualifiers, Oracle
RMU closes the database only on the node from which you issue
the command, regardless of whether or not the database is open
on other nodes.
Because you used the Nowait qualifier, the database might not
yet be closed on the node from which you issued the command
when the DCL prompt is returned to you. With the Nowait
qualifier, you can receive SYS-F-ACCONFLICT errors when you
attempt to access a database after you have issued the RMU
Close command with the Nocluster and Nowait qualifiers and the
DCL prompt has been returned, but the monitor has not yet
closed the database on that node.
9.4.3 – Path
Specifies the full or relative data dictionary path name in which
the definitions reside for the database you want to close.
The Path qualifier is a positional qualifier. Positional
qualifiers operate on specific parameters based on the placement
of the qualifiers in the command line. The path name cannot
include wildcard characters.
9.4.4 – Statistics=Export
Statistics=Export
Nostatistics
Specifies that statistic information is to be saved when the
database is closed. The default is Nostatistics, which indicates
that statistic information is not preserved when the database is
closed.
Clusterwide statistic information is not stored in the statistic
file, which allows you to decide on which nodes the statistic
information should be initially loaded when the database is
opened.
The statistic information is stored in a node-specific database
file located in the same directory as the database root file.
The file has the same name as the root-file-spec, with a default
file extension of .rds. Because the statistic files contain node-
specific information, they cannot be renamed or copied. They can
be deleted if they are no longer needed.
The Statistics=Export qualifier cannot be specified in
conjunction with the Cluster qualifier. To preserve the
statistics information for a database open on a cluster, you
must specifically close the individual nodes.
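For example, the following command preserves the statistic
information while closing the database on the local node only (this
sketch assumes the database is open on the node where the command
is issued):
$ RMU/CLOSE/NOCLUSTER/STATISTICS=EXPORT MF_PERSONNEL.RDB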
The RMU Backup command does not save the statistics files. They
are considered temporary files and not part of the database.
9.4.5 – Wait
Wait
Nowait
Specify the Wait qualifier to cause Oracle RMU to close and
recover the database before the system prompt is returned to
you.
The default is the Nowait qualifier. With the Nowait qualifier,
the database might not be closed when the system prompt is
returned to you. You can receive errors when you attempt to
access a database after you issued the RMU Close command and
the system prompt is returned, but before the monitor has closed
the database.
See the Usage Notes for restrictions on using the Wait qualifier.
9.5 – Usage Notes
o To use the RMU Close command for a database, you must have the
RMU$OPEN privilege in the root file access control list (ACL)
for the database or the OpenVMS WORLD privilege.
o To use the Wait qualifier, Oracle RMU requires that the
database be recoverable for correct operation. It must be
possible to attach to the database on a node where it is
opened. There are database recovery failures that preclude
further attaches to the database. When such a failure occurs,
any attempt to attach to the database (for example, with an
SQL ATTACH statement) causes the process to be deleted from
the system. In other words, you are logged out.
In this situation, the RMU Close command with the Wait
qualifier has the same effect as the RMU Close command with
the Cluster and Nowait qualifiers. The operation does not
wait, and it does not close the database unless it is opened
on the node from which you issue the Oracle RMU command.
If you encounter this situation, enter the following command
from a node on which the database is open to close the
database:
$ RMU/CLOSE/CLUSTER/NOWAIT/ABORT=DELPRC
9.6 – Examples
Example 1
When you issue the following command from a node in a cluster,
the Cluster qualifier shuts down the database for the entire
cluster, even if no users are on the node from which you issued
the command. The Wait qualifier causes Oracle RMU to close the
database before the system prompt is returned.
$ RMU/CLOSE/CLUSTER/WAIT MF_PERSONNEL.RDB
Example 2
The following command closes the mf_personnel database in the
[.WORK] directory, all the databases in the [.TEST] directory,
and the databases specified by the path names CDD$TOP.FINANCE and
SAMPLE_DB:
$ RMU/CLOSE DISK1:[WORK]MF_PERSONNEL, CDD$TOP.FINANCE/PATH, -
_$ DISK1:[TEST]*, SAMPLE_DB/PATH
10 – Collect Optimizer Statistics
Collects cardinality and storage statistics for the Oracle
Rdb optimizer. Also collects workload statistics if a workload
profile has been generated.
10.1 – Description
The purpose of collecting optimizer statistics is to maintain
up-to-date statistics that the Oracle Rdb optimizer uses
to determine solution costs and cardinalities during query
optimization.
You can collect cardinality and storage statistics by issuing the
RMU Collect Optimizer_Statistics command. You can direct Oracle
RMU to collect these statistics for particular tables or indexes
by using the Tables, System_Relations, or Indexes qualifiers.
Before you can collect workload statistics, you must first
generate a workload profile with SQL. The following list
describes the general procedure for generating a workload profile
and collecting workload statistics:
1. Enable workload profiling with the WORKLOAD COLLECTION
IS ENABLED clause of the SQL ALTER DATABASE or SQL CREATE
DATABASE statement.
SQL creates a new system table called RDB$WORKLOAD.
2. Execute the queries for which you want the Oracle Rdb
optimizer to have the best possible statistics.
When you execute the queries, the optimizer determines which
groups of columns are important for optimal processing of the
query. These groups of columns are referred to as workload
column groups. Note that a workload column group may actually
contain only one column.
Each set of workload column groups is entered as a row in the
RDB$WORKLOAD system table.
At this point, the only data in the RDB$WORKLOAD system table
are the workload column groups, the tables with which the
column group is associated, and the date they were entered
into the table. No statistics are currently recorded in the
RDB$WORKLOAD system table.
3. In most cases, now you disable workload profiling with the SQL
ALTER DATABASE WORKLOAD COLLECTION IS DISABLED clause.
Queries executed after you disable workload profiling are
not scanned by the Oracle Rdb optimizer for workload column
groups. You can leave the workload profiling enabled if the
same queries are always executed. In such a case, no new rows
are entered into the RDB$WORKLOAD system table. However, if
you anticipate that queries will be executed for which you do
not want workload profiling to be enabled, you need to disable
workload collection.
4. Execute an RMU Collect Optimizer_Statistics command with the
Statistics=(Workload) qualifier.
Oracle RMU reads the RDB$WORKLOAD system table to determine
for which column groups it should collect statistics, and then
collects the statistics.
5. Execute the queries previously profiled again.
The optimizer uses the statistics gathered by Oracle RMU to
make a best effort at optimizing the profiled queries.
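The steps above might be carried out as follows (the database name
is illustrative, and the interactive SQL session is abbreviated):
$ SQL
SQL> ALTER DATABASE FILENAME MF_PERSONNEL
cont>     WORKLOAD COLLECTION IS ENABLED;
SQL> EXIT
$ ! Execute the queries to be profiled, then disable collection:
$ SQL
SQL> ALTER DATABASE FILENAME MF_PERSONNEL
cont>     WORKLOAD COLLECTION IS DISABLED;
SQL> EXIT
$ RMU/COLLECT OPTIMIZER_STATISTICS/STATISTICS=(WORKLOAD) MF_PERSONNEL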
The following list provides some guidelines on when to issue the
RMU Collect Optimizer_Statistics command and which Statistics
qualifier options you should use:
o You should enable workload profiling and execute the
RMU Collect Optimizer_Statistics command with the
Statistics=(Workload) qualifier when you introduce new,
complex, frequently used queries as part of your regular work.
o You should execute the RMU Collect Optimizer_Statistics
command with the Statistics=(Storage) qualifier after you
add metadata, such as new tables or indexes, to the database.
In this case, you do not need to reenable workload profiling.
o You should execute the RMU Collect Optimizer_Statistics
command with the Statistics=(Storage, Workload) qualifier
when the data in the database has significantly increased,
decreased, or changed. In this case, you do not need to
reenable workload profiling.
The statistics you can gather with the RMU Collect Optimizer_
Statistics command and a description of how the optimizer uses
these statistics are summarized in Statistics Gathered by the RMU
Collect Optimizer_Statistics Command.
Table 6 Statistics Gathered by the RMU Collect Optimizer_Statistics Command

Cardinality Statistics

Statistic Gathered    Definition                  Used by Optimizer to

Table                 Number of rows in a         Determine solution
Cardinality           table.                      cardinality.

Index                 Number of distinct key      Estimate the number of
Cardinality           values in an index.         index keys returned.

Index Prefix          Number of distinct key      Estimate the number of
Cardinality           values in leading parts     index keys returned based
                      of a multisegmented         on a sorted index range.
                      B-tree index.

Workload Statistics

Statistic Gathered    Definition                  Used by Optimizer to

Column Group          Average number of           Determine strategies
Duplicity Factor      duplicates per distinct     for equiselections
                      value in a column group.    (selections with the
                      This is an estimated        IS NULL predicate or
                      value.                      selection predicates with
                                                  the equals (=) operator),
                                                  equijoins, grouped
                                                  aggregation (for example,
                                                  the SQL GROUP BY clause),
                                                  or projection operations
                                                  (for example, the SQL
                                                  DISTINCT clause).

Column Group          Number of table rows        Estimate the effects of
Null Factor           with a NULL value in at     NULL data on equijoins
                      least one column of a       and equiselections
                      column group. This is an    (because they imply the
                      estimated value.            removal of rows with NULL
                                                  values). Also used for
                                                  estimating the cardinality
                                                  of an outer join result.

Storage Statistics

Statistic Gathered    Definition                  Used by Optimizer to

Average Index         Average number of levels    Estimate the cost of
Depth (sorted         to traverse on a B-tree     descending the B-tree.
indexes only)         descent.                    (A cross join with an
                                                  inner table that is
                                                  accessed by a sorted
                                                  index involves repetitive
                                                  B-tree descents.)

Index Key             Average number of I/Os      Improve the cost estimate
Clustering Factor     required to read one        of performing an index-
                      index key and all           only retrieval for hashed
                      associated dbkeys during    and sorted indexes.
                      a hashed key lookup or
                      a B-tree index scan,
                      excluding the B-tree
                      descent.

Index Data            Average number of I/Os      Estimate the cost for
Clustering Factor     required to fetch data      fetching data rows from
                      rows using dbkeys           a sorted index scan or
                      associated with an index    from a hash bucket.
                      key.

Table Row             Average number of I/Os      Estimate the cost of
Clustering Factor     required to read one        performing a sequential
                      row during a sequential     scan of a table.
                      of a table.
10.2 – Format
RMU/Collect Optimizer_Statistics root-file-spec

Command Qualifiers                      Defaults

/Exclude_Tables=(table-list)            None
/[No]Indexes[=(index-list)]             /Indexes
/[No]Log[=file-name]                    Current DCL verify value
/Row_Count=n                            /Row_Count=100
/Statistics[=(options)]                 /Statistics
/[No]System_Relations                   /Nosystem_Relations
/[No]Tables[=(table-list)]              /Tables
/Transaction_Type=option                /Transaction_Type=Automatic
10.3 – Parameters
10.3.1 – root-file-spec
root-file-spec
Specifies the database for which statistics are to be collected.
The default file type is .rdb.
10.4 – Command Qualifiers
10.4.1 – Exclude Tables
Exclude_Tables
Exclude_Tables=(table-list)
Specifies a list of database tables to be excluded from
statistics collection and update for statistics used by the Rdb
query optimizer. You must specify at least one table name. You
can specify an options file in place of a list of tables.
If the Exclude_Tables qualifier is used with the Tables qualifier
in the same RMU Collect Optimizer_Statistics command, the
Exclude_Tables
qualifier takes precedence. If the same table is specified in the
table list for both qualifiers, that table is excluded from the
statistics collection and update.
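For example, in the following hypothetical command the
DEPARTMENTS table appears in both lists, so its statistics are
neither collected nor updated:

```
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel.rdb/LOG -
_$ /TABLES=(EMPLOYEES,DEPARTMENTS)/EXCLUDE_TABLES=(DEPARTMENTS)
```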
10.4.2 – Indexes
Indexes
Indexes[=(index-list)]
Noindexes
Specifies the index or indexes for which statistics are to be
collected. If you do not specify an index-list, statistics for
all indexes defined for the tables specified with the Tables
qualifier are collected. If you specify an index-list, statistics
are collected only for the named indexes. If you specify
the Noindexes qualifier, statistics for the index cardinality,
average index depth, index key clustering factor, and index data
clustering factor are not collected.
Specify the Notables qualifier if you do not want statistics
collected for tables. (Remember, the Tables qualifier without
a table-list is the default.)
The default is the Indexes qualifier without an index-list.
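For example, the following sketch collects statistics only for
two named indexes and suppresses table statistics (the index
names are from the mf_personnel sample database):

```
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel.rdb/NOTABLES -
_$ /INDEXES=(EMP_LAST_NAME,EMP_EMPLOYEE_ID)/LOG
```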
10.4.3 – Log
Log
Log=file-name
Nolog
Specifies how the values calculated for the statistics are to
be logged. Specify the Log qualifier to have the information
displayed to SYS$OUTPUT. Specify the Log=file-spec qualifier
to have the information written to a file. Specify the Nolog
qualifier to prevent display of the information. If you do not
specify any variation of the Log qualifier, the default is
the current setting of the DCL verify switch. (The DCL SET VERIFY
command controls the DCL verify switch.)
10.4.4 – Row Count
Row_Count=n
Specifies the number of rows that are sent in a single I/O
request when Workload Statistics are collected. You can
experiment to find the value for n that provides the best
performance and memory usage for your database and environment.
As you increase the value of n, you see an increase in
performance at the expense of additional memory and overhead.
The minimum value you can specify for n is 1. The default value
for n is 100.
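For example, to trade additional memory for fewer I/O requests
during a workload collection, you might raise the row count
(the value 500 here is an arbitrary illustration, not a
recommendation):

```
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel.rdb -
_$ /STATISTICS=(WORKLOAD)/ROW_COUNT=500/LOG
```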
10.4.5 – Statistics
Statistics
Statistics[=(options)]
Specifies the type of statistics you want to collect for the
items specified with the Tables, System_Relations, and Indexes
qualifiers. If you specify the Statistics qualifier without
an options list, all statistics are collected for the items
specified.
If you specify the Statistics qualifier with an options list,
Oracle RMU collects the types of statistics described in the
following list. If you specify more than one option, separate the
options with commas and enclose the options within parentheses.
The Statistics qualifier options are:
o Cardinality
Collects the table cardinality for the tables specified with
the Tables and System_Relations qualifiers and the index and
index prefix cardinalities for the indexes specified with the
Indexes qualifier. Because cardinalities are automatically
maintained by Oracle Rdb, it is usually not necessary
to collect cardinality statistics using the RMU Collect
Optimizer_Statistics command unless you have previously
explicitly disabled cardinality updates.
o Workload
Collects the Column Group Duplicity Factor and Column Group
Null Factor
workload statistics for the tables specified with the Tables
and System_Relations qualifiers.
o Storage
Collects the following statistics:
- Table Row Clustering Factor for the tables specified with
the Tables qualifier
- Index Key Clustering Factor, the Index Data Clustering
Factor, and the Average Index Depth for the indexes
specified with the Indexes qualifier
See System Tables Used to Store Optimizer Statistics in the
Usage_Notes entry for this command for information on the
columns and tables used in the system relations to store these
statistics.
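For example, the following sketch collects only cardinality and
storage statistics, leaving any previously collected workload
statistics untouched:

```
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel.rdb -
_$ /STATISTICS=(CARDINALITY,STORAGE)/LOG
```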
10.4.6 – System Relations
System_Relations
Nosystem_Relations
Specifies that optimizer statistics are to be collected for
system tables (relations) and their associated indexes.
If you do not specify the System_Relations qualifier, or if you
specify the Nosystem_Relations qualifier, optimizer statistics
are not collected for system tables or their associated indexes.
Specify the Noindexes qualifier if you do not want statistics
collected for indexes defined on the system tables.
The default is the Nosystem_Relations qualifier.
10.4.7 – Tables
Tables
Tables[=(table-list)]
Notables
Specifies the table or tables for which statistics are to be
collected. If you specify a table-list, statistics for those
tables and their associated indexes are collected. If you do
not specify a table-list, statistics for all tables and their
associated indexes in the database are collected. If you do
not specify the Tables qualifier, statistics for all tables are
collected. If you specify the Notables qualifier, statistics
for the table cardinality, table row clustering factor, column
group duplicity factor, and column group null factor are not
collected.
Specify the Noindexes qualifier if you do not want statistics
collected for indexes.
The Tables qualifier without a table-list is the default.
10.4.8 – Transaction Type
Transaction_Type=option
Allows you to specify the transaction mode for the transactions
used to collect statistics. Valid options are:
o Automatic
o Read_Only
o Noread_Only
You must specify an option if you use this qualifier.
If you do not use any form of this qualifier, the Transaction_
Type=Automatic qualifier is the default. The Automatic option
specifies that Oracle RMU is to determine the transaction mode
used to collect statistics. If any storage area in the database
(including those not accessed for collecting statistics) has
snapshots disabled, the transactions used to collect data are set
to read/write mode. Otherwise, the transactions to collect data
are set to read-only mode.
The Transaction_Type=Read_Only qualifier specifies that the
transactions used to collect statistics be set to read-only
mode. When you explicitly set the transaction type to read-
only, snapshots need not be enabled for all storage areas in
the database, but must be enabled for those storage areas from
which statistics are collected. Otherwise, you receive an error
and the collect optimizer statistics operation fails.
You might select this option if not all storage areas have
snapshots enabled and you are collecting statistics on objects
that are stored only in storage areas with snapshots enabled. In
this case, using the Transaction_Type=Read_Only qualifier allows
you to collect statistics and impose minimal locking on other
users of the database.
The Transaction_Type=Noread_Only qualifier specifies that the
transactions used to collect statistics be set to read/write
mode. You might select this option if you want to avoid
the growth of snapshot files that occurs during a read-only
transaction and are willing to incur the cost of increased
locking that occurs during a read/write transaction.
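For example, assuming the storage areas holding the EMPLOYEES
table have snapshots enabled, a collection that imposes minimal
locking might be sketched as:

```
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel.rdb -
_$ /TABLES=(EMPLOYEES)/TRANSACTION_TYPE=READ_ONLY/LOG
```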
10.5 – Usage Notes
o To use the RMU Collect Optimizer_Statistics command for a
database, you must have the RMU$ANALYZE privilege in the root
file access control list (ACL) for the database or the OpenVMS
SYSPRV or BYPASS privilege.
o When you use the SQL ALTER DATABASE statement to set the
RDB$SYSTEM storage area to read-only access for your database,
the Oracle Rdb system tables in the RDB$SYSTEM storage area
are also set to read-only access. When the Oracle Rdb system
tables are set to read-only access:
o Automatic updates to table and index cardinality are
disabled.
o Manual changes made to the cardinalities to influence the
optimizer are not allowed.
o The I/O associated with the cardinality update is
eliminated.
o For indexes, the cardinality value is the number of unique
entries for an index that allows duplicates. If the index is
unique, Oracle Rdb stores zero for the cardinality, and uses
the table cardinality instead. For tables, the cardinality
value is the number of rows in the table. Oracle Rdb uses
the cardinality values of indexes and tables to influence
decisions made by the optimizer. If the actual cardinality
values of tables and indexes are different from the stored
cardinality values, the optimizer's performance can be
adversely affected.
o As Oracle RMU performs the collect operation, it displays
the maximum memory required to perform the operation. If
the maximum amount required is not available, Oracle RMU
makes adjustments to try to make use of the memory that is
available. However, if after making these adjustments, memory
is still insufficient, the collect operation skips the updates
for the table causing the problem and continues with the
operation. The skipped table is noted in the log file with the
message, "Unable to allocate memory for <table-name>; default
statistics values used."
To avoid this problem, use the OpenVMS System Generation
Utility (SYSGEN) to increase the VIRTUALPAGECNT parameter.
o If you prefer not to update optimizer statistics all at
once, you can divide the work into separate commands. Oracle
Corporation recommends that you collect Cardinality and
Storage statistics in one RMU Collect Optimizer_Statistics
command; and collect Workload statistics in a second command.
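Following that recommendation, the work might be divided into two
commands such as these:

```
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel.rdb -
_$ /STATISTICS=(CARDINALITY,STORAGE)/LOG
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel.rdb -
_$ /STATISTICS=(WORKLOAD)/LOG
```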
o You must decide if the improved performance provided by
enabling and maintaining the workload profile is worth the
cost. Generally speaking, it is worth the cost of maintaining
this table for a stable set of queries that are run on a
regular basis; it is not worth the cost of maintaining this
table when the majority of your queries are ad hoc queries,
each of which uses different access strategies.
For example, if the majority of queries that access the
EMPLOYEES table use the EMPLOYEE_ID as the selection criteria
and the queries are using the same access strategy, you might
want to maintain a workload profile for the EMPLOYEES table.
However, if some queries access the EMPLOYEES table through
the EMPLOYEE_ID, some through the LAST_NAME, and others
through the STATE, in an unpredictable manner, the queries
are using different access strategies for which you probably
do not want to maintain a workload profile.
o Index prefix cardinalities are cumulative values. For example,
suppose an index contains three segments and the first segment
has a cardinality of A; the second has a cardinality of B;
and the third has a cardinality of C. Then the index prefix
cardinality for the first segment is A; the index prefix
cardinality for the second segment is A concatenated with
B (A|B); and the index prefix cardinality for the third
segment is A concatenated with B concatenated with C (A|B|C).
Therefore, the prefix cardinality for the last segment in an
index is always equal to the total cardinality for the index.
Likewise, if the index only contains one segment, the index
prefix cardinality is equal to the total cardinality for the
index. In these cases, because the index prefix cardinality
is the same as the total index cardinality, Oracle RMU does
not calculate an index prefix cardinality. Instead, Oracle
RMU stores a value of "0" for the index prefix cardinality
and the optimizer uses the value stored for the total index
cardinality.
o Cardinality statistics are automatically maintained by
Oracle Rdb. Physical storage and Workload statistics are only
collected when you issue an RMU Collect Optimizer_Statistics
command. To get information about the usage of Physical
storage and Workload statistics for a given query, define
the RDMS$DEBUG_FLAGS logical name to be "O". For example:
$ DEFINE RDMS$DEBUG_FLAGS "O"
When you execute a query, if workload and physical statistics
have been used in optimizing the query, you see a line such as
the following in the command output:
~O: Workload and Physical statistics used
o Detected asynchronous prefetch should be enabled to achieve
the best performance of this command. Beginning with Oracle
Rdb V7.0, by default, detected asynchronous prefetch is
enabled for databases created under Oracle Rdb V7.0 or
converted to V7.0. You can determine the setting for your
database by issuing the RMU Dump command with the Header
qualifier.
If detected asynchronous prefetch is disabled, and you do not
want to enable it for the database, you can enable it for your
Oracle RMU operations by defining the following logicals at
the process level:
$ DEFINE RDM$BIND_DAPF_ENABLED 1
$ DEFINE RDM$BIND_DAPF_DEPTH_BUF_CNT P1
P1 is a value between 10 and 20 percent of the user buffer
count.
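For example, assuming a user buffer count of 200, P1 would fall
between 20 and 40; the following sketch picks 30:

```
$ DEFINE RDM$BIND_DAPF_ENABLED 1
$ DEFINE RDM$BIND_DAPF_DEPTH_BUF_CNT 30
```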
o You can delete entries from the workload profile with the RMU
Delete Optimizer_Statistics command. See Delete_Optimizer_
Statistics for details.
o You can display entries from the workload profile with the
RMU Show Optimizer_Statistics command. See Show Optimizer_
Statistics for details.
o System Tables Used to Store Optimizer Statistics provides a
summary of the system tables in which statistics gathered by
the RMU Collect Optimizer_Statistics command are stored.
Table 7 System Tables Used to Store Optimizer Statistics

Statistic                        System Table Name    Column Name

Table Cardinality                RDB$RELATIONS        RDB$CARDINALITY
Table Row Clustering Factor      RDB$RELATIONS        RDB$ROW_CLUSTER_FACTOR
Column Group Duplicity Factor    RDB$WORKLOAD         RDB$DUPLICITY_FACTOR
Column Group Null Factor         RDB$WORKLOAD         RDB$NULL_FACTOR
Index Cardinality                RDB$INDICES          RDB$CARDINALITY
Index Prefix Cardinality         RDB$INDEX_SEGMENTS   RDB$CARDINALITY
Average Index Depth              RDB$INDICES          RDB$INDEX_DEPTH
  (B-trees only)
Index Key Clustering Factor      RDB$INDICES          RDB$KEY_CLUSTER_FACTOR
Index Data Clustering Factor     RDB$INDICES          RDB$DATA_CLUSTER_FACTOR
10.6 – Examples
Example 1
The following example collects cardinality statistics for the
EMPLOYEES and JOB_HISTORY tables and their associated indexes.
See the Usage Notes for an explanation of the value "0" for the
index prefix cardinality.
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel.rdb -
_$ /STATISTICS=(CARDINALITY)/TABLES=(EMPLOYEES, JOB_HISTORY) -
_$ /INDEXES=(EMP_LAST_NAME,EMP_EMPLOYEE_ID, EMPLOYEES_HASH, -
_$ JH_EMPLOYEE_ID, JOB_HISTORY_HASH)/LOG
Start loading tables... at 3-JUL-1996 09:35:25.19
Done loading tables.... at 3-JUL-1996 09:35:25.91
Start loading indexes... at 3-JUL-1996 09:35:25.92
Done loading indexes.... at 3-JUL-1996 09:35:26.49
Start collecting btree index stats... at 3-JUL-1996 09:35:28.17
Done collecting btree index stats.... at 3-JUL-1996 09:35:28.23
Start collecting table & hash index stats... at 3-JUL-1996 09:35:28.23
Done collecting table & hash index stats.... at 3-JUL-1996 09:35:28.52
Start calculating stats... at 3-JUL-1996 09:35:28.76
Done calculating stats.... at 3-JUL-1996 09:35:28.76
Start writing stats... at 3-JUL-1996 09:35:30.16
----------------------------------------------------------------------
Optimizer Statistics collected for table : EMPLOYEES
Cardinality : 100
Index name : EMP_LAST_NAME
Index Cardinality : 83
Segment Column Prefix cardinality
LAST_NAME 0
Index name : EMP_EMPLOYEE_ID
Index Cardinality : 100
Segment Column Prefix cardinality
EMPLOYEE_ID 0
Index name : EMPLOYEES_HASH
Index Cardinality : 100
----------------------------------------------------------------------
Optimizer Statistics collected for table : JOB_HISTORY
Cardinality : 274
Index name : JH_EMPLOYEE_ID
Index Cardinality : 100
Segment Column Prefix cardinality
EMPLOYEE_ID 0
Index name : JOB_HISTORY_HASH
Index Cardinality : 100
Done writing stats.... at 3-JUL-1996 09:35:30.83
Example 2
The following example collects storage statistics for the
EMPLOYEES and JOB_HISTORY tables and their associated indexes:
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel -
_$ /STATISTICS=(STORAGE)/TABLES=(EMPLOYEES, JOB_HISTORY) -
_$ /INDEXES=(EMP_LAST_NAME,EMP_EMPLOYEE_ID, EMPLOYEES_HASH, -
_$ JH_EMPLOYEE_ID, JOB_HISTORY_HASH)/LOG
Start loading tables... at 3-JUL-1996 10:28:49.39
Done loading tables.... at 3-JUL-1996 10:28:50.30
Start loading indexes... at 3-JUL-1996 10:28:50.30
Done loading indexes.... at 3-JUL-1996 10:28:51.03
Start collecting btree index stats... at 3-JUL-1996 10:28:53.27
Done collecting btree index stats.... at 3-JUL-1996 10:28:53.37
Start collecting table & hash index stats... at 3-JUL-1996 10:28:53.38
Done collecting table & hash index stats.... at 3-JUL-1996 10:28:53.80
Start calculating stats... at 3-JUL-1996 10:28:54.07
Done calculating stats.... at 3-JUL-1996 10:28:54.07
Start writing stats... at 3-JUL-1996 10:28:55.61
----------------------------------------------------------------------
Optimizer Statistics collected for table : EMPLOYEES
Row clustering factor : 0.2550000
Index name : EMP_LAST_NAME
Average Depth : 2.0000000
Key clustering factor : 0.0481928
Data clustering factor : 1.1686747
Index name : EMP_EMPLOYEE_ID
Average Depth : 2.0000000
Key clustering factor : 0.0100000
Data clustering factor : 0.9500000
Index name : EMPLOYEES_HASH
Key clustering factor : 1.0000000
Data clustering factor : 1.0000000
--------------------------------------------------------------------
Optimizer Statistics collected for table : JOB_HISTORY
Row clustering factor : 0.0930657
Index name : JH_EMPLOYEE_ID
Average Depth : 2.0000000
Key clustering factor : 0.0500000
Data clustering factor : 0.9500000
Index name : JOB_HISTORY_HASH
Key clustering factor : 1.0000000
Data clustering factor : 1.0000000
Done writing stats.... at 3-JUL-1996 10:28:56.41
Example 3
The following example enables workload collection with an SQL
ALTER DATABASE statement, executes frequently run queries to
generate a workload profile, collects workload statistics for
the EMPLOYEES and JOB_HISTORY tables (along with their associated
indexes), and then displays the statistics gathered.
The SQL natural left outer join causes the first and third
workload column groups to be created. The SQL DISTINCT clause
causes the second and fourth workload column groups to be
created.
$ ! Enable workload collection:
$ SQL
SQL> ALTER DATABASE FILENAME mf_personnel.rdb
cont> WORKLOAD COLLECTION IS ENABLED;
SQL> --
SQL> -- Execute frequently run SQL queries.
SQL> --
SQL> ATTACH 'FILENAME mf_personnel.rdb';
SQL> SELECT DISTINCT *
cont> FROM JOB_HISTORY NATURAL LEFT OUTER JOIN EMPLOYEES;
.
.
.
SQL> DISCONNECT DEFAULT;
SQL> -- Disable workload collection:
SQL> ALTER DATABASE FILENAME mf_personnel.rdb
cont> WORKLOAD COLLECTION IS DISABLED;
SQL> EXIT;
$
$ ! Direct Oracle RMU to collect statistics for the EMPLOYEES and
$ ! JOB_HISTORY tables.
$ !
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel.rdb -
_$ /TABLE=(EMPLOYEES, JOB_HISTORY)/STATISTICS=(WORKLOAD)/LOG
Start loading tables... at 3-JUL-1996 10:40:00.22
Done loading tables.... at 3-JUL-1996 10:40:00.90
Start collecting workload stats... at 3-JUL-1996 10:40:03.43
Maximum memory required (bytes) = 6810
Done collecting workload stats.... at 3-JUL-1996 10:40:05.03
Start calculating stats... at 3-JUL-1996 10:40:05.32
Done calculating stats.... at 3-JUL-1996 10:40:05.32
Start writing stats... at 3-JUL-1996 10:40:06.91
----------------------------------------------------------------------
Optimizer Statistics collected for table : EMPLOYEES
Workload Column group : EMPLOYEE_ID
Duplicity factor : 1.0000000
Null factor : 0.0000000
Workload Column group : LAST_NAME, FIRST_NAME, MIDDLE_INITIAL,
ADDRESS_DATA_1, ADDRESS_DATA_2, CITY, STATE, POSTAL_CODE, SEX,
BIRTHDAY, STATUS_CODE
Duplicity factor : 1.5625000
Null factor : 0.3600000
----------------------------------------------------------------------
Optimizer Statistics collected for table : JOB_HISTORY
Workload Column group : EMPLOYEE_ID
Duplicity factor : 2.7040000
Null factor : 0.0000000
Workload Column group : EMPLOYEE_ID, JOB_CODE, JOB_START,
JOB_END, DEPARTMENT_CODE, SUPERVISOR_ID
Duplicity factor : 1.5420582
Null factor : 0.3649635
Done writing stats.... at 3-JUL-1996 10:40:07.46
Example 4
The following example collects all statistics (cardinality,
workload, and storage) for all tables and indexes in the database
except system relations. Output is written to the file stats_
nosys.log.
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel.rdb -
_$ /LOG=stats_nosys.log
Example 5
The following example collects all statistics (cardinality,
workload, and storage) for all tables, indexes, and system
relations. Output is written to the file stats_all.log.
$ RMU/COLLECT OPTIMIZER_STATISTICS mf_personnel.rdb/SYSTEM_RELATIONS -
_$ /LOG=stats_all.log
Example 6
In the following example the Employees and Departments tables are
excluded from statistics collection.
$ RMU/COLLECT OPTIMIZER_STATISTICS MF_PERSONNEL /LOG -
_$ /EXCLUDE_TABLES=(EMPLOYEES,DEPARTMENTS)
11 – Convert
Converts any of the following versions (or any of the mandatory
updates to these versions) of Oracle Rdb databases to an Oracle
Rdb release 7.2 database:
o Version 7.0
o Version 7.1
See the Oracle Rdb Installation and Configuration Guide for the
proper backup procedure prior to installing a new release of
Oracle Rdb and converting databases.
NOTE
The following are important issues to consider when you
convert a database:
o A database must be backed up immediately following an
Oracle RMU convert operation.
A database converted using the RMU Convert command may
not be recoverable if a full database backup is not made
immediately after the convert operation completes. If you
attempt to restore a database using a backup file created
prior to the conversion, the database may be left in an
unrecoverable state.
o If after-image journaling is enabled when you issue
the Convert command, Oracle RMU disables after-image
journaling during the convert operation and then does
one of the following, depending on the type of .aij file
or files being employed when the Convert command was
issued:
- If an extensible .aij file was being used, Oracle RMU
creates a new journal for the converted database and
enables after-image journaling.
- If fixed-size .aij files were being used, Oracle RMU
activates the next available fixed-size journal and
enables after-image journaling. If another fixed-
size journal is not available, journaling remains
disabled.
Use only the .aij file (or files) created or activated
during or after the convert operation together with the
backup file you created immediately after the convert
operation to restore and recover your database. Any .aij
files created prior to the Convert operation cannot be
used to recover the converted database.
If you issue an RMU Convert command with the Rollback
qualifier, Oracle RMU disables after-image journaling
and returns the message: RMU-I-CANTENAAIJ. Oracle
Corporation recommends that you back up the database and
enable after-image journaling when the convert operation
completes.
o Growth of the RDB$SYSTEM storage area is normal during
a convert operation. You must be sure that there is
sufficient disk space for the new metadata and the
converted metadata.
During a convert operation Oracle RMU makes an upgraded
copy of the metadata. If the convert operation fails,
the old metadata is available for rolling back. If
you specify the Nocommit qualifier, both copies of
the metadata exist at the same time (to allow a manual
rollback operation). If you specify the Commit qualifier,
the old metadata is deleted once the convert operation
completes successfully.
Read the Description help entry under this command carefully for
important information on converting single-file and multifile
databases.
11.1 – Description
The RMU Convert command operates by creating a converted copy of
the system tables and indexes. This implies that the RDB$SYSTEM
storage area might grow during the conversion, but it is unlikely
that the system tables will be fragmented by the conversion
process.
Because a copy of the system tables is made, the time taken by
the conversion is proportional to the amount of storage allocated
to the system tables, or the number of rows in system tables, or
both. This is typically a few minutes per database. However, if
the database has very large system tables, the conversion can be
costly. If the database has a large number of versions of some
tables, it might be more efficient for you to use the SQL EXPORT
and IMPORT statements to convert the database.
After the conversion, both copies of the system tables are stored
in the database. The Commit qualifier selects the converted copy
and deletes the original copy. The Rollback qualifier selects the
original copy and deletes the converted copy. You can specify
either the Commit or the Rollback qualifier at a later time
if you selected the Nocommit qualifier when the database was
converted. Be aware that as long as Commit or Rollback are not
selected after a Nocommit conversion, extra space will be taken
up in the database to store both versions of the metadata. It
is important to issue the Convert/Commit command after you have
verified that the conversion was successful. (RMU will not let
you convert to a newer version if the previous Convert was never
committed, even if it was years ago.)
While both copies of the system tables exist, the database is
usable under Oracle Rdb release 7.2, but not under the earlier
version. Also, DDL (data definition language) operations to the
database are prohibited to ensure that both copies of the system
tables remain consistent. After you specify either the Commit or
the Rollback qualifier, you can again perform DDL operations on
the database.
If you convert a multifile database created prior to Oracle Rdb
Version 6.1 by using the RMU Convert command with the Nocommit
qualifier and then use the RMU Convert command with the Rollback
qualifier to revert to the prior database structure level,
subsequent verify operations might return an RMU-W-PAGTADINV
warning message. See the Usage_Notes help entry under this
command for details.
11.2 – Format
RMU/Convert database-list

Command Qualifiers                      Defaults

/[No]Commit                             /Commit
/[No]Confirm                            See description
/Path                                   None
/Prefix_Collection=option               See description
/Reserve=(Area=n, Aij=n)                See description
/[No]Rollback                           /Norollback
11.3 – Parameters
11.3.1 – database-list
The database-list parameter is a list of databases to be
converted. A list item can be either the file specification of
a database root file or a data dictionary path name.
You can use wildcards in the file specification of a database
root file.
You cannot use wildcards in a data dictionary path name.
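For example, the following hypothetical command converts every
database root file in one directory (the disk and directory
names are illustrative):

```
$ RMU/CONVERT DISK1:[DATABASES]*.RDB
```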
11.4 – Command Qualifiers
11.4.1 – Commit
Commit
Nocommit
Makes the database conversion permanent. When you specify the
Commit qualifier, the database is converted to an Oracle Rdb
release 7.2 database and cannot be returned to the previous
version. The default is Commit.
When you specify the Nocommit qualifier, you can convert the
database to Oracle Rdb release 7.2 and roll it back to the
previous version at a later time.
Using the Nocommit qualifier is helpful when you want to test
your applications against a new version of Oracle Rdb. In the
event that you find problems, you can roll back to the previous
version. Once you feel confident that your applications work well
with the new version, you should commit the converted database,
otherwise unnecessary space is taken up in the database to store
the obsolete alternate version of the metadata.
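For example, you might convert with the Nocommit qualifier, test
your applications, and then make the conversion permanent later
(a sketch only):

```
$ RMU/CONVERT mf_personnel.rdb/NOCOMMIT
$ ! ... test applications against the converted database ...
$ RMU/CONVERT mf_personnel.rdb/COMMIT
```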
11.4.2 – Confirm
Confirm
Noconfirm
Requests user input during the conversion procedure. When you
specify the Confirm qualifier, Oracle RMU asks if you are
satisfied with your database and aij backup files. If the
database being converted has after-image journaling enabled,
Oracle RMU asks if you want to continue and states that after-
image journaling will be temporarily disabled.
11.4.3 – Path
Path
Identifies that the database is being specified by its data
dictionary path name instead of its file specification. The Path
qualifier is a positional qualifier.
11.4.4 – Prefix Collection
Prefix_Collection=option
When you convert a database to release 7.2 from a release
of Oracle Rdb prior to release 7.0, you can use the Prefix_
Collection qualifier to specify that sorted index prefix
cardinality collection be Enabled, Enabled Full, or Disabled
for all system and user sorted indexes.
The following options are available for use with the Prefix_
Collection qualifier:
o Disabled
Specifies that index prefix cardinality collection is to be
disabled.
o Enabled
Specifies that default collection is performed. The Oracle
Rdb optimizer collects approximate cardinality values for the
index columns to help in future query optimization.
o Enabled Estimate
  Specifies that prefix cardinality values for all indexes are
  to be estimated.
o Enabled Collect
  Specifies that prefix cardinality values for all indexes are
  to be collected by calling the RMU Collect command.
o Full
  Requests that extra I/O be performed, if required, to ensure
  that the cardinality values reflect the key value changes of
  adjacent index nodes.
o Full=Estimate
  Specifies that prefix cardinality values for all indexes are
  to be estimated.
o Full=Collect
  Specifies that prefix cardinality values for all indexes are
  to be collected by calling the RMU Collect command.
11.4.5 – Reserve
Reserve=(Area=n,Aij=n)
Reserves space in the database root file for storage areas or
.aij files, or both. Replace the character n with the number of
storage areas or .aij files for which you want to reserve space.
Note that you cannot reserve areas for a single-file database.
You can reserve .aij files for a single-file database, but once
the database is converted, you cannot alter that reservation
unless you back up and restore the database.
This qualifier is useful if, when you are converting your
database, you anticipate the need for additional storage areas
or .aij files. Because the addition of new storage areas or .aij
files requires that users not be attached to the database, adding
them while the database is being converted minimizes the time
that the database is inaccessible to users.
By default, one .aij file and no storage area files are reserved.
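For example, a command of the following form (the database name
and counts are illustrative) converts a database and reserves
root-file slots for two additional storage areas and four .aij
files:

$ RMU/CONVERT/RESERVE=(AREA=2,AIJ=4) MF_PERSONNEL.RDB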
11.4.6 – Rollback
Rollback
Norollback
Returns a database that has been converted to an Oracle Rdb
release 7.2 database (but not committed) to the previous version.
You might decide to return to the previous version of the
database for technical, performance, or business reasons.
The Norollback qualifier prevents you from returning your
converted database to the previous version. The default is the
Norollback qualifier.
If you specify both the Nocommit qualifier and the Rollback
qualifier in the same RMU Convert command, your database is
converted to Oracle Rdb release 7.2 and immediately rolled back
to the previous version when the RMU Convert command is executed.
This qualifier is valid only if you are converting from one of
the following releases: 7.0 or 7.1.
11.5 – Usage Notes
o To use the RMU Convert command for a database, you must have
the RMU$CONVERT or RMU$RESTORE privilege in the root file
access control list (ACL) for the database or the OpenVMS
SYSPRV or BYPASS privilege.
o The RMU Convert command requires read/write access to the
database root file, the RDB$SYSTEM area, and the directory in
which the .ruj file will be entered.
o Oracle Corporation recommends that you update multisegment
index cardinalities as part of, or soon after, the convert
operation completes.
Stored cardinality values can differ from the actual
cardinality values if the RDB$SYSTEM storage area has been
set to read-only access.
If you use the Confirm and Commit qualifiers when you issue
the RMU Convert command, Oracle RMU asks if you want to update
multisegment index cardinalities with actual index values
and provides an estimate on the time it will take to perform
the update. If you choose not to update these cardinalities
with actual values as part of the convert operation, or if
you do not use the Confirm qualifier, Oracle RMU updates
the multisegment index cardinalities with estimated values.
In such a case, you should update the cardinalities with
actual values as soon as possible by issuing an RMU Collect
Optimizer_Statistics command. See Collect_Optimizer_Statistics
for details.
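For example, a command of the following form (the database name
is illustrative) updates the stored cardinalities with actual
values after the conversion:

$ RMU/COLLECT OPTIMIZER_STATISTICS MF_PERSONNEL.RDB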
o If the database conversion does not complete (for example,
because of a system failure or an Oracle Rdb monitor
shutdown), you can execute the RMU Convert command again
later. The ability to complete the conversion process later
keeps you from having a half-converted database that is
corrupted.
o If the RDB$SYSTEM storage area attribute is set to read-only
access, the RMU Convert command sets the attribute to
read/write, converts the database, and then resets the
attribute to read-only when the conversion is complete. If
the RDB$SYSTEM storage area is located on a device that cannot
be written to, the database conversion fails and returns an
error message.
o You are prompted to specify the Prefix_Collection parameters
if the following conditions are true:
o The Prefix_Collection qualifier is not specified.
o The RMU Convert process is not running as a batch job.
o The Noconfirm qualifier is not specified.
As a response to the prompt, you can enter "E(NABLE)" for
the equivalent of Prefix_Collection=Enabled, "F(ULL)" for
the equivalent of Prefix_Collection=Full, "D(ISABLE)" for the
equivalent of Prefix_Collection=Disabled, or the default of
"I(GNORE)" if you do not want to change any prefix cardinality
settings.
11.6 – Examples
Example 1
The first command in the following example converts an Oracle Rdb
release 7.0 database with an extensible .aij file to an Oracle
Rdb release 7.2 database. Because the Nocommit qualifier is
specified in the first command, you can roll back the converted
database (the Oracle Rdb release 7.2 database) to the original
Oracle Rdb release 7.0 database.
After-image journaling is disabled while the database is being
converted. After the database is converted, a new extensible .aij
file is created and after-image journaling is enabled again. Note
that .aij files are version-specific. You should perform a full
backup operation after a conversion because the old version and
the new version of the .aij file are incompatible.
In the second command, the converted database is rolled back to
the original database.
$RMU/CONVERT/CONFIRM/NOCOMMIT MF_PERSONNEL.RDB
%RMU-I-RMUTXT_000, Executing RMU for Oracle Rdb V7.2-00
Are you satisfied with your backup of
DISK1:[TESTS]MF_PERSONNEL.RDB;1
and your backup of any associated .aij files [N]? Y
%RMU-I-AIJ_DISABLED, after-image journaling is being disabled
temporarily for the Convert operation
%RMU-I-LOGCONVRT, database root converted to current structure level
%RMU-S-CVTDBSUC, database DISK1:[TESTS]MF_PERSONNEL.RDB;1 successfully
converted from version V7.0 to V7.2
%RMU-I-LOGCREAIJ, created after-image journal file
DISK1:[TESTS]BACKUP_AFTER1.AIJ;2
%RMU-I-LOGMODSTR, activated after-image journal "AFTER1"
%RMU-W-DOFULLBCK, full database backup should be done to ensure future recovery
$RMU/CONVERT/ROLLBACK MF_PERSONNEL.RDB
%RMU-I-RMUTXT_000, Executing RMU for Oracle Rdb V7.2-00
Are you satisfied with your backup of
DISK1:[TESTS]MF_PERSONNEL.RDB;1 and your backup of
any associated .aij files [N]? Y
%RMU-I-AIJ_DISABLED, after-image journaling is being disabled
temporarily for the Convert operation
%RMU-I-LOGCONVRT, database root converted to current structure level
%RMU-I-CVTROLSUC, CONVERT rolled-back for DISK1:[TESTS]MF_PERSONNEL.RDB;1
to version V7.0
%RMU-I-CANTENAAIJ, The JOURNAL is now DISABLED. RMU CONVERT can not enable
the JOURNAL for previous versions. You must do this manually.
%RMU-W-DOFULLBCK, full database backup should be done to ensure future recovery
Example 2
This example is the same as Example 1, except fixed-size .aij
journals are being employed at the time of the conversion. The
first command in this example converts an Oracle Rdb release
7.1 database with fixed-size .aij files to an Oracle Rdb release
7.2 database. Because the Nocommit qualifier is specified in
the first command, you can roll back the converted database (the
Oracle Rdb release 7.2 database) to the original Oracle Rdb V7.1
database.
After-image journaling is disabled while the database is being
converted. After the database is converted, Oracle RMU activates
the next fixed-size .aij file and enables after-image journaling.
Note that .aij files are version specific. You should perform
a full backup operation after a conversion because the old .aij
files are incompatible with the newly converted database.
In the second command, the converted database is rolled back to
the original database.
$RMU/CONVERT/CONFIRM/NOCOMMIT MF_PERSONNEL.RDB
%RMU-I-RMUTXT_000, Executing RMU for Oracle Rdb V7.2-00
Are you satisfied with your backup of DISK1:[TESTS]MF_PERSONNEL.RDB;1
and your backup of any associated .aij files [N]? Y
%RMU-I-AIJ_DISABLED, after-image journaling is being disabled
temporarily for the Convert operation
%RMU-I-LOGCONVRT, database root converted to current structure level
%RMU-S-CVTDBSUC, database DISK1:[TESTS]MF_PERSONNEL.RDB;1 successfully
converted from version V7.1 to V7.2
%RMU-I-LOGMODSTR, activated after-image journal "AFTER2"
%RMU-W-DOFULLBCK, full database backup should be done to ensure future recovery
$RMU/CONVERT/ROLLBACK MF_PERSONNEL.RDB
%RMU-I-RMUTXT_000, Executing RMU for Oracle Rdb V7.2-00
Are you satisfied with your backup of
DISK1:[TESTS]MF_PERSONNEL.RDB;1 and your backup of
any associated .aij files [N]? Y
%RMU-I-AIJ_DISABLED, after-image journaling is being disabled
temporarily for the Convert operation
%RMU-I-LOGCONVRT, database root converted to current structure level
%RMU-I-CVTROLSUC, CONVERT rolled-back for
DISK1:[TESTS]MF_PERSONNEL.RDB;1 to version V7.1
%RMU-I-CANTENAAIJ, The JOURNAL is now DISABLED. RMU CONVERT can not
enable the JOURNAL for previous versions. You must do this manually.
%RMU-W-DOFULLBCK, full database backup should be done to ensure future recovery
Example 3
The following command converts all the databases in DISK1:[RICK]
and its subdirectories and also the SPECIAL_DB database that
is identified by its data dictionary path name. The Noconfirm
qualifier is specified, so Oracle RMU does not request user
input. The Nocommit qualifier is not specified, so the default
qualifier, Commit, is used and the converted databases cannot be
rolled back.
$ RMU/CONVERT/NOCONFIRM DISK1:[RICK...]*.RDB,CDD$TOP.RICK.SPECIAL_DB -
_$ /PATH
Example 4
The following command converts an Oracle Rdb release 7.0 database
to release 7.2. In addition, it reserves space in the database
root file of the converted database for four .aij files. After-
image journaling is not enabled at the time the Convert command
is issued.
$RMU/CONVERT/CONFIRM/RESERVE=(AIJ=4)/COMMIT MF_PERSONNEL
%RMU-I-RMUTXT_000, Executing RMU for Oracle Rdb V7.2-00
Are you satisfied with your backup of DISK1:[TESTS]MF_PERSONNEL.RDB;1
and your backup of any associated .aij files [N]? Y
%RMU-I-LOGCONVRT, database root converted to current structure level
%RMU-W-DOFULLBCK, full database backup should be done to ensure future recovery
%RMU-S-CVTDBSUC, database DISK1:[TESTS]MF_PERSONNEL.RDB;1 successfully
converted from version V7.0 to V7.2
Example 5
The following example shows how the contents of a batch file
might look if you were to issue the RMU Convert command with the
Confirm qualifier from a batch job.
$ RMU/CONVERT/COMMIT/CONFIRM USER1:[COLLECT.V71DB]MF_PERSONNEL
Y
Y
12 – Copy Database
Permits you to copy a database.
12.1 – Description
The RMU Copy_Database command allows you to modify certain area
parameters when the copy operation is performed. All the files
are processed simultaneously during the copy operation. The copy
operation's performance is similar to that of the RMU Backup
command. The RMU Copy_Database command eliminates the need for
intermediate storage media.
NOTE
You must perform a full and complete Oracle RMU backup
operation immediately after the Copy_Database operation
completes to ensure that the database can be properly
restored after a database failure or corruption.
Also note that if you do not specify either the After_
Journal qualifier or the Aij_Options qualifier when you
issue the RMU Copy_Database command, after-image journaling
is disabled for the database copy and no .aij files are
associated with the database copy.
12.2 – Format
RMU/Copy_Database root-file-spec [storage-area-list]

Command Qualifiers                            Defaults

/[No]After_Journal[=file-spec]                See description
/[No]Aij_Options=journal-opts-file            See description
/[No]Cdd_Integrate                            Nocdd_Integrate
/[No]Checksum_Verification                    /Checksum_Verification
/Close_Wait=n                                 See description
/Directory=directory-spec                     None
/[No]Duplicate                                /Noduplicate
/Global_Buffers=global-buffer-options         Current value
/Local_Buffers=local-buffer-options           Current value
/Lock_Timeout=n                               See description
/[No]Log                                      Current DCL verify value
/Nodes_Max=n                                  Current value
/[No]Online                                   /Noonline
/Open_Mode={Automatic|Manual}                 Current value
/Option=file-spec                             None
/Page_Buffers=n                               n=3
/Path=cdd-path                                Existing value
/[No]Quiet_Point                              /Quiet_Point
/Root=file-spec                               None
/Transaction_Mode=(mode-list)                 /Transaction_Mode=Current
/Threads=n                                    /Threads=10
/Users_Max=n                                  Current value

File or Area Qualifiers                       Defaults

/Blocks_Per_Page=n                            None
/Extension={Disable|Enable}                   Current value
/File=file-spec                               None
/Read_Only                                    Current value
/Read_Write                                   Current value
/Snapshots=(Allocation=n,File=file-spec)      None
/[No]Spams                                    Current value
/Thresholds=(n,n,n)                           None
12.3 – Parameters
12.3.1 – root-file-spec
The name of the database root file for the database you want to
copy.
12.3.2 – storage-area-list
The name of one or more storage areas whose parameters you are
changing. The storage-area-list parameter is optional. Unless you
are using the RMU Copy_Database command to modify the parameters
of one or more storage areas, you should not specify any storage
area names.
12.4 – Command Qualifiers
12.4.1 – After Journal
After_Journal[=file-spec]
Noafter_Journal
NOTE
This qualifier is maintained for compatibility with versions
of Oracle Rdb prior to Version 6.0. You might find it more
useful to specify the Aij_Options qualifier, unless you are
interested in creating an extensible .aij file only.
Specifies how Oracle RMU is to handle after-image journaling and
.aij file creation, using the following rules:
o If you specify the After_Journal qualifier and provide a file
specification, Oracle RMU enables journaling and creates a new
extensible after-image journal (.aij) file for the database
copy.
o If you specify the After_Journal qualifier but you do not
provide a file specification, Oracle RMU enables after-image
journaling and creates a new extensible .aij file for the
database copy with the same name as, but a different version
number from, the .aij file for the database being copied.
o If you specify the Noafter_Journal qualifier, Oracle RMU
disables after-image journaling and does not create a new
.aij file.
o If you do not specify an After_Journal, Noafter_Journal,
Aij_Options, or Noaij_Options qualifier, Oracle RMU disables
after-image journaling and does not create a new .aij file.
You can specify only one, or none, of the following after-image
journal qualifiers in a single RMU Copy_Database command: After_
Journal, Noafter_Journal, Aij_Options, or Noaij_Options.
You cannot use the After_Journal qualifier to create fixed-size
.aij files; use the Aij_Options qualifier.
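For example, a command of the following form (the file names are
illustrative) copies the database and creates a new extensible
.aij file for the copy:

$ RMU/COPY_DATABASE/AFTER_JOURNAL=DISK2:[JNL]PERS_COPY.AIJ MF_PERSONNEL.RDB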
12.4.2 – Aij Options
Aij_Options=journal-opts-file
Noaij_Options
Specifies how Oracle RMU is to handle after-image journaling and
.aij file creation, using the following rules:
o If you specify the Aij_Options qualifier and provide a
journal-opts-file, Oracle RMU enables journaling and creates
the .aij file or files you specify for the database copy.
If only one .aij file is created for the database copy, it
will be an extensible .aij file. If two or more .aij files
are created for the database copy, they will be fixed-size
.aij files (as long as at least two .aij files are always
available).
o If you specify the Aij_Options qualifier, but do not provide a
journal-opts-file, Oracle RMU disables journaling and does not
create any new .aij files.
o If you specify the Noaij_Options qualifier, Oracle RMU
disables journaling and does not create any new .aij files.
o If you do not specify an After_Journal, Noafter_Journal,
Aij_Options, or Noaij_Options qualifier, Oracle RMU disables
after-image journaling and does not create a new .aij file.
You can specify only one, or none, of the following after-image
journal qualifiers in a single Oracle RMU command: After_Journal,
Noafter_Journal, Aij_Options, or Noaij_Options.
See Show After_Journal for information on the format of a
journal-opts-file.
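For example, a command of the following form (the file names are
illustrative) copies the database and creates the journals
described in the journal options file AIJ_OPTS.OPT:

$ RMU/COPY_DATABASE/AIJ_OPTIONS=AIJ_OPTS.OPT MF_PERSONNEL.RDB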
12.4.3 – Cdd Integrate
Cdd_Integrate
Nocdd_Integrate
Integrates the metadata from the root (.rdb) file of the database
copy into the data dictionary (assuming the data dictionary is
installed on your system).
If you specify the Nocdd_Integrate qualifier, no integration
occurs during the copy operation.
You might want to delay integration of the database metadata
with the data dictionary until after the copy operation finishes
successfully.
You can use the Nocdd_Integrate qualifier even if the DICTIONARY
IS REQUIRED clause was used when the database being copied was
defined.
The Cdd_Integrate qualifier integrates definitions in one
direction only: from the database file to the dictionary. The
Cdd_Integrate qualifier does not integrate definitions from the
dictionary to the database file.
The Nocdd_Integrate qualifier is the default.
12.4.4 – Checksum Verification
Checksum_Verification
Nochecksum_Verification
Requests that the page checksum be verified for each page copied.
The default is to perform this verification.
The Checksum_Verification qualifier uses significant CPU
resources but can provide an extra measure of confidence in the
quality of the data being copied. For offline copy operations,
the additional CPU cost of using the Checksum_Verification
qualifier might not be justified unless you are experiencing
or have experienced disk, HSC, or CI port hardware problems. One
symptom of these problems is pages being logged to the corrupt
page table (CPT).
For online copy operations, use of the Checksum_Verification
qualifier offers an additional level of data security when the
database employs disk striping or RAID (redundant arrays of
inexpensive disks) technology. These technologies fragment data
over several disk drives, and use of the Checksum_Verification
qualifier permits Oracle RMU to detect the possibility that
the data it is reading from these disks has been only partially
updated. If you use either of these technologies, you should use
the Checksum_Verification qualifier.
Note, however, that if you specify the Nochecksum_Verification
qualifier and undetected corruptions exist in your database, the
corruptions are included in the copied files. Such a corruption
might be difficult to recover from, especially if it is not
detected until weeks or months after the copy operation is
performed.
Overall, Oracle Corporation recommends that you use the Checksum_
Verification qualifier with all copy operations where integrity
of the data is essential.
12.4.5 – Close Wait=n
Specifies a wait time of n minutes before Oracle RMU
automatically closes the database. You must supply a value for
n.
In order to use this qualifier, the Open_Mode qualifier on the
RMU Copy_Database command line must be set to Automatic.
12.4.6 – Directory
Directory=directory-spec
Specifies the default destination for the copied database files.
Note that if you specify a file name or file extension, all
copied files are given that file name or file extension. There
is no default directory specification for this qualifier.
See the Usage Notes for information on how this qualifier
interacts with the Root, File, and Snapshot qualifiers and for
warnings regarding copying database files into a directory owned
by a resource identifier.
If you do not specify this qualifier, Oracle RMU attempts to copy
all the database files (unless they are qualified with the Root,
File, or Snapshot qualifier) to their current location.
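For example, a command of the following form (the device and
directory names are illustrative) copies all database files to
the directory DISK2:[DBCOPY]:

$ RMU/COPY_DATABASE/DIRECTORY=DISK2:[DBCOPY] MF_PERSONNEL.RDB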
12.4.7 – Duplicate
Duplicate
Noduplicate
Causes the RMU Copy_Database command to generate a new database
with the same content, but with a different identity from that
of the original database. For this reason, .aij files cannot be
interchanged between the original and the duplicate database.
This qualifier creates copies of your databases that are expected
to evolve independently over time. In this case, being able to
exchange .aij files could be a security risk and a likely source
of corruption.
A duplicate database has the same contents as the original
database, but not the same identity. A database copied with
the Noduplicate qualifier is an exact replica of the original
database in every way and, therefore, .aij files can be
interchanged between the original and duplicate database.
The default is the Noduplicate qualifier.
12.4.8 – Global Buffers
Global_Buffers=global-buffer-options
Allows you to change the default global buffer parameters when
you copy a database. The following options are available:
o Disabled
Use this option to disable global buffering for the copy of
the original database.
o Enabled
Use this option to enable global buffering for the copy of the
original database. You cannot specify both the Disabled and
Enabled option in the same RMU Copy_Database command with the
Global_Buffers qualifier.
o Total=total-buffers
Use this option to specify the number of buffers available for
all users.
o User_Limit=buffers-per-user
Use this option to specify the maximum number of buffers
available to each user.
If you do not specify a global buffers option, the database is
copied with the values that are in effect for the database you
are copying.
When you specify two or more options with the Global_Buffers
qualifier, use a comma to separate each option and enclose the
list of options in parentheses.
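For example, a command of the following form (the values are
illustrative) enables global buffering for the copy and sets
the total and per-user buffer counts:

$ RMU/COPY_DATABASE/GLOBAL_BUFFERS=(ENABLED,TOTAL=500,USER_LIMIT=20) MF_PERSONNEL.RDB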
12.4.9 – Local Buffers
Local_Buffers=local-buffer-options
Allows you to change the default local buffer parameters when you
copy a database. The following options are available:
o Number=number-buffers
Use this option to specify the number of local buffers that
will be available for all users. You must specify a number
between 2 and 32,767 for the number-buffers parameter.
o Size=buffer-blocks
Use this option to specify the size (specified in blocks) for
each buffer. You must specify a number between 2 and 64 for
the buffer-blocks parameter.
If you specify a value smaller than the size of the largest
page defined, Oracle RMU automatically adjusts the size of
the buffer to hold the largest page defined. For example, if
you specify the Local_Buffers=Size=8 qualifier and the largest
page size for the storage areas in your database is 64 blocks,
Oracle RMU automatically interprets the Local_Buffers=Size=8
qualifier as though it were a Local_Buffers=Size=64 qualifier.
Take great care when selecting a buffer size; a poor choice
causes performance to suffer greatly.
The value specified for the buffer-blocks parameter determines
the number of blocks for each buffer, regardless of whether
local buffering or global buffering is enabled for the
database.
If you do not specify a Local_Buffers option, the database is
copied with the values that are in effect for the database you
are copying.
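For example, a command of the following form (the values are
illustrative) gives the copy 40 local buffers of 32 blocks each:

$ RMU/COPY_DATABASE/LOCAL_BUFFERS=(NUMBER=40,SIZE=32) MF_PERSONNEL.RDB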
12.4.10 – Lock Timeout
Lock_Timeout=n
Specifies a timeout interval or maximum time in seconds to
wait for the quiet-point lock and any other locks needed when
the operation is performed online. When you specify the Lock_
Timeout=seconds qualifier, you must specify the number of seconds
to wait for the quiet-point lock. If the time limit expires, an
error is signaled and the copy operation fails.
When the Lock_Timeout=seconds qualifier is not specified, the
copy operation waits indefinitely for the quiet-point lock and
any other locks needed during an online copy operation.
The Lock_Timeout=seconds qualifier is ignored for offline copy
operations.
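For example, a command of the following form (the value is
illustrative) performs an online copy but fails if the
quiet-point lock cannot be acquired within 120 seconds:

$ RMU/COPY_DATABASE/ONLINE/LOCK_TIMEOUT=120 MF_PERSONNEL.RDB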
12.4.11 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request log output and
the Nolog qualifier to prevent it. If you specify neither, the
default is the current setting of the DCL verify switch. (The DCL
SET VERIFY command controls the DCL verify switch.)
12.4.12 – Nodes Max
Nodes_Max=n
Specifies a new value for the database maximum node count
parameter for the database copy. The default is to leave the
value unchanged.
12.4.13 – Online
Online
Noonline
Specifies that the copy database operation be performed while
other users are attached to the database. The areas to be copied
are locked for read-only access, so the operation is compatible
with all but exclusive access.
The default is the Noonline qualifier.
12.4.14 – Open Mode
Open_Mode=Automatic
Open_Mode=Manual
Allows you to change the mode for opening a database when
you copy a database. When you specify the Open_Mode=Automatic
qualifier, users can invoke the database copy immediately after
it is copied. If you specify the Open_Mode=Manual qualifier, an
RMU Open command must be used to open the database before users
can invoke the database copy.
The Open_Mode qualifier also specifies the mode for closing a
database. If you specify Open_Mode=Automatic, you can also use
the Close_Wait qualifier to specify a time in minutes before the
database is automatically closed.
If you do not specify the Open_Mode qualifier, the database is
copied with the open mode that is in effect for the database
being copied.
12.4.15 – Option
Option=file-spec
Specifies an options file containing storage area names, followed
by the storage area qualifiers that you want applied to that
storage area. Do not separate the storage area names with
commas. Instead, put each storage area name on a separate line
in the file. The storage area qualifiers that you can include
in the options file are: Blocks_Per_Page, File, Snapshots, and
Thresholds.
You can use the DCL line continuation character, a hyphen (-),
or the comment character (!) in the options file. There is no
default for this qualifier. Example 6 in the Examples entry under
this command shows an options file and how to specify it on the
Oracle RMU command line.
If the Option qualifier is specified, the storage-area-list
parameter is ignored.
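For example, an options file of the following form (the area and
file names are illustrative) applies different qualifiers to two
storage areas, one name per line:

EMPIDS_LOW /FILE=DISK2:[AREAS]EMPIDS_LOW.RDA /THRESHOLDS=(70,80,90)
EMPIDS_OVER /BLOCKS_PER_PAGE=4 ! larger pages for this area

$ RMU/COPY_DATABASE/OPTION=AREAS.OPT MF_PERSONNEL.RDB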
12.4.16 – Page Buffers
Page_Buffers=n
Specifies the number of buffers to be allocated for each database
file to be copied. The number of buffers used is twice the
number specified; half are used for reading the file and half
for writing the copy. Values specified with the Page_Buffers
qualifier can range from 1 to 5. The default value is 3. Larger
values might improve performance, but they increase memory use.
12.4.17 – Path
Path=cdd-path
Specifies a data dictionary path into which the definitions of
the database copy will be integrated. If you do not specify the
Path qualifier, Oracle RMU uses the CDD$DEFAULT logical name
value of the user who enters the RMU Copy_Database command.
If you specify a relative path name, Oracle Rdb appends the
relative path name you enter to the CDD$DEFAULT value. If the
cdd-path parameter contains nonalphanumeric characters, you must
enclose it within quotation marks ("").
Oracle Rdb ignores the Path qualifier if you use the Nocdd_
Integrate qualifier or if the data dictionary is not installed
on your system.
12.4.18 – Quiet Point
Quiet_Point
Noquiet_Point
Allows you to specify that a database copy operation is to occur
either immediately or when a quiet point for database activity
occurs. A quiet point is defined as a point where no active
update transactions are in progress in the database.
When you specify the Noquiet_Point qualifier, Oracle RMU proceeds
with the copy operation as soon as the RMU Copy_Database command
is issued, regardless of any update transaction activity in
progress in the database. Because Oracle RMU must acquire
concurrent-read locks on all physical and logical areas, the
copy operation fails if there are any active transactions with
exclusive locks on a storage area. However, once Oracle RMU has
successfully acquired all concurrent-read storage area locks, it
should not encounter any further lock conflicts. If a transaction
that causes Oracle Rdb to request exclusive locks is started
while the copy operation is proceeding, that transaction either
waits or gets a lock conflict error, but the copy operation
continues unaffected.
If you intend to use the Noquiet_Point qualifier with a copy
procedure that previously specified the Quiet_Point qualifier
(or did not specify either the Quiet_Point or Noquiet_Point
qualifier), you should examine any applications that execute
concurrently with the copy operation. You might need to modify
your applications or your copy procedure to handle the lock
conflicts that can occur when you specify the Noquiet_Point
qualifier.
When you specify the Quiet_Point qualifier, the copy operation
begins when a quiet point is reached. Other update transactions
issued after the database copy operation begins are prevented
from executing until after the root file for the database has
been copied (copying of the database storage areas begins after
the root file is copied).
The default is the Quiet_Point qualifier.
12.4.19 – Root
Root=file-spec
Requests that the database root file be copied to the specified
location.
See the Usage Notes for information on how this qualifier
interacts with the Directory, File, and Snapshot qualifiers.
12.4.20 – Transaction Mode=(mode-list)
Transaction_Mode=(mode-list)
Sets the allowable transaction modes for the database root file
created by the copy operation. The mode-list can include one or
more of the following transaction modes:
o All - Enables all transaction modes
o Current - Enables all transaction modes that are set for the
source database. This is the default transaction mode.
o None - Disables all transaction modes
o [No]Batch_Update
o [No]Exclusive
o [No]Exclusive_Read
o [No]Exclusive_Write
o [No]Protected
o [No]Protected_Read
o [No]Protected_Write
o [No]Read_Only
o [No]Read_Write
o [No]Shared
o [No]Shared_Read
o [No]Shared_Write
Your copy operation must include the database root file.
Otherwise, RMU returns the CONFLSWIT error when you issue an
RMU Copy_Database command with the Transaction_Mode qualifier.
If you specify more than one transaction mode in the mode-list,
enclose the list in parentheses and separate the transaction
modes from one another with a comma. Note the following:
o When you specify a negated transaction mode such as
Noexclusive_Write, it indicates that exclusive write is not
an allowable access mode for the copied database.
o If you specify the Shared, Exclusive, or Protected transaction
mode, Oracle RMU assumes you are referring to both reading and
writing in that transaction mode.
o No mode is enabled unless you add that mode to the list, or
you use the All option to enable all transaction modes.
o You can list one transaction mode that enables or disables a
particular mode followed by another that does the opposite.
For example, Transaction_Mode=(Noshared_Write, Shared) is
ambiguous because the first value disables Shared_Write access
and the second value enables Shared_Write access. Oracle
RMU resolves the ambiguity by first enabling the modes as
specified in the modes-list and then disabling the modes as
specified in the modes-list. The order of items in the list is
irrelevant. In the example presented previously, Shared_Read
is enabled and Shared_Write is disabled.
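For example, a command of the following form (the database name
is illustrative) produces a copy that permits only read-only and
shared read transactions:

$ RMU/COPY_DATABASE/TRANSACTION_MODE=(READ_ONLY,SHARED_READ) MF_PERSONNEL.RDB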
12.4.21 – Threads=number
Threads=number
Specifies the number of reader threads to be used by the copy
process.
RMU creates so-called internal "threads" of execution, each of
which reads data from one specific storage area. Threads run
quasi-parallel within the process executing the RMU image. Each
thread generates its own I/O load and consumes resources such as
virtual address space and process quotas (for example, FILLM and
BYTLM). The more threads there are, the more I/Os can be
generated at one point in time and the more resources are needed
to accomplish the same task.
Performance increases with more threads because the parallel
activity keeps disk drives busier. However, beyond a certain
number of threads, performance suffers because the disk I/O
subsystem is saturated and I/O queues build up for the disk
drives. The extra CPU time for additional thread scheduling
overhead also reduces overall performance. Typically, 2 to 5
threads per input disk drive are sufficient to drive the disk
I/O subsystem at its optimum. However, some controllers may be
able to handle the I/O load of more threads, for example, disk
controllers with RAID sets and extra cache memory.
In a copy operation, one thread moves the data of one storage
area at a time. If there are more storage areas to be copied than
there are threads, then the next idle thread takes on the next
storage area. Storage areas are copied in order of the area size
- largest areas first. This optimizes the overall elapsed time
by allowing other threads to copy smaller areas while an earlier
thread is still working on a large area. If the Threads qualifier
is not specified, 10 threads are created by default. The minimum
is 1 thread and the maximum is the number of storage areas to be
copied. If you specify a value larger than the number of storage
areas, RMU silently limits the number of threads to the number of
storage areas.
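The largest-first scheduling described above can be simulated with a short Python sketch. The area names and page counts are invented for illustration; this models the documented behavior, not RMU internals.

```python
# Hypothetical simulation of largest-first copy scheduling: areas
# are sorted by size, and each idle thread takes the next remaining
# area. Area names and sizes are invented for this sketch.
import heapq

def simulate_copy(area_sizes, threads=10):
    # RMU silently limits the thread count to the number of areas.
    threads = max(1, min(threads, len(area_sizes)))
    # Largest areas first, so smaller areas overlap the long copies.
    ordered = sorted(area_sizes.items(), key=lambda kv: kv[1],
                     reverse=True)
    idle = [(0, tid) for tid in range(threads)]  # (time idle, thread)
    heapq.heapify(idle)
    for name, size in ordered:
        busy_until, tid = heapq.heappop(idle)    # next idle thread
        heapq.heappush(idle, (busy_until + size, tid))
    return max(t for t, _ in idle)               # total elapsed "time"

areas = {"EMPIDS_OVER": 900, "EMPIDS_MID": 500,
         "EMPIDS_LOW": 400, "DEPARTMENTS": 50, "JOBS": 30}
print(simulate_copy(areas, threads=1))   # one thread: sum of sizes
print(simulate_copy(areas, threads=2))   # two threads overlap work
```

With two threads, the small areas are copied while an earlier thread is still working on the largest area, which is why elapsed time drops well below the single-thread total.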
For a copy operation, you can specify a threads number as low as
1. A threads value of 1 generates the smallest system load in
terms of working set usage and disk I/O load. However, most disk
I/O subsystems can handle higher I/O loads, so a value slightly
larger than 1 typically results in faster execution.
12.4.22 – Users Max
Users_Max=n
Specifies a new value for the database maximum user count
parameter.
The default is to use the same value as is in effect for the
database being copied.
12.4.23 – Blocks Per Page
Blocks_Per_Page=n
Specifies a new page size for the storage area to which it is
applied. You cannot decrease the page size of a storage area, and
you cannot change the size of a storage area with a uniform page
format.
You might want to increase the page size in storage areas
containing hash indexes that are close to full. By increasing
the page size in such a situation, you prevent the storage area
from extending.
The Blocks_Per_Page qualifier is a positional qualifier.
12.4.24 – Extension
Extension=Disable
Extension=Enable
Allows you to change the automatic file extension attribute for a
storage area when you copy a database.
Use the Extension=Disable qualifier to disable automatic file
extensions for a storage area.
Use the Extension=Enable qualifier to enable automatic file
extensions for a storage area.
If you do not specify the Extension=Disable or the
Extension=Enable qualifier, the storage areas are copied with
the automatic file extension attributes that are in effect for
the database being copied.
The Extension qualifier is a positional qualifier.
12.4.25 – File
File=file-spec
Requests that the storage area to which this qualifier is applied
be copied to the specified location.
See the Usage Notes for information on how this qualifier
interacts with the Root, Snapshot, and Directory qualifiers and
for warnings regarding copying database files into a directory
owned by a resource identifier.
The File qualifier is a positional qualifier. This qualifier is
not valid for single-file databases.
12.4.26 – Read Only
Use the Read_Only qualifier to change a read/write storage area
or a write-once storage area to a read-only storage area.
If you do not specify the Read_Only or Read_Write qualifier, the
storage areas are copied with the read/write attributes that are
currently in effect for the database being copied.
This is a positional qualifier.
12.4.27 – Read Write
Use the Read_Write qualifier to change a read-only storage area
or a write-once storage area to a read/write storage area.
If you do not specify the Read_Only or Read_Write qualifier, the
storage areas are copied with the read/write attributes that are
currently in effect for the database being copied.
This is a positional qualifier.
12.4.28 – Snapshots
Snapshots=(Allocation=n,File=file-spec)
The Allocation parameter specifies the snapshot file allocation
size in n pages for a copied area. The File parameter specifies
a new snapshot file location for the copied storage area to which
it is applied.
You can specify the Allocation parameter only, the File parameter
only, or both parameters; however, if you specify the Snapshots
qualifier, you must specify at least one parameter.
The Snapshots qualifier is a positional qualifier.
See the Usage Notes for information on how this qualifier
interacts with the Root, File, and Directory qualifiers.
12.4.29 – Spams
Spams
Nospams
Specifies whether the creation of space area management (SPAM)
pages is enabled (Spams) or disabled (Nospams) for the specified
storage areas. This qualifier is not permitted with a
storage area that has a uniform page format.
When SPAM pages are disabled in a read/write storage area, the
SPAM pages are initialized but they are not updated.
The Spams qualifier is a positional qualifier.
12.4.30 – Thresholds
Thresholds=(n,n,n)
Specifies new SPAM thresholds for the storage area to which it is
applied (for a mixed page format storage area). The thresholds of
a storage area with a uniform page format cannot be changed.
See the Oracle Rdb7 Guide to Database Performance and Tuning for
information on setting SPAM thresholds.
The Thresholds qualifier is a positional qualifier.
12.5 – Usage Notes
o To use the RMU Copy_Database command for a database, you must
have the RMU$COPY privilege in the root file access control
list (ACL) for the database to be copied or the OpenVMS SYSPRV
or BYPASS privilege.
o When you copy a database into a directory owned by a resource
identifier, the ACE for the directory is applied to the
database root file ACL first, and then the Oracle RMU ACE is
added. This method is employed to prevent database users from
overriding OpenVMS file security. However, this can result in
a database that you consider yours but that you cannot access
because you lack Oracle RMU privileges. See the Oracle Rdb Guide to
Database Maintenance for details.
o The RMU Copy_Database command provides four qualifiers,
Directory, Root, File, and Snapshots, that allow you to
specify the target for the copied files. The target can be
just a directory, just a file name, or a directory and file
name.
If you use all or some of these four qualifiers, apply them as
follows:
- Use the Root qualifier to indicate the target for the copy
of database root file.
- Use local application of the File qualifier to specify the
target for the copy of one or more storage areas.
- Use local application of the Snapshots qualifier to specify
the target for the copy of one or more snapshot files.
- Use the Directory qualifier to specify a default target
directory. The default target directory is the directory
to which all files not qualified with the Root, File,
or Snapshot qualifier are copied. It is also the default
directory for files qualified with the Root, File, or
Snapshot qualifier if the target for these qualifiers does
not include a directory specification.
Note the following when using these qualifiers:
- Global application of the File qualifier when the target
specification includes a file name causes Oracle RMU to
copy all of the storage areas to different versions of the
same file name. This creates a database that is difficult
to manage.
- Global application of the Snapshot qualifier when the
target specification includes a file name causes Oracle
RMU to copy all of the snapshot files to different versions
of the same file name. This creates a database that is
difficult to manage.
- Specifying a file name or extension with the Directory
qualifier is permitted, but causes Oracle RMU to copy
all of the files (except those specified with the File
or Root qualifier) to different versions of the same file
name. Again, this creates a database that is difficult to
manage.
See Example 8.
o You cannot use the RMU Copy_Database command to copy a
database to a remote system or to an NFS (Network File System)
mounted file system. The RMU Copy_Database command allows
you to create a copy of a database on the same node as the
original database.
o You cannot disable extents of snapshot (.snp) files.
o The file and area qualifiers for the RMU Copy_Database command
are positional qualifiers; if they are placed incorrectly, they
can be ignored or produce unexpected results. See the Command_
Qualifiers help entry for more information on positional
qualifiers.
o There are no restrictions on the use of the Nospams qualifier
with mixed page format storage areas, but the use of the
Nospams qualifier typically causes severe performance
degradation. The Nospams qualifier is only useful where
updates are rare and batched, and access is primarily by
database key (dbkey).
12.6 – Examples
Example 1
The following command makes a duplicate copy of the mf_personnel
database in the DISK1:[USER1] directory:
$ RMU/COPY_DATABASE MF_PERSONNEL /DIRECTORY=DISK1:[USER1]
Example 2
The following example shows a simple duplication of a database
within a user's directory. In this instance, the duplicated
database has the same content and identity as the original
database. After-image journal files can be interchanged between
the original and the duplicated database. Execute the RMU Dump
command with the header qualifier to verify that the copied
database is the same as the original database. Note that the
creation date listed in the header for each database is the same.
$ RMU/COPY_DATABASE MF_PERSONNEL
Example 3
The following example shows a duplication of a database within a
user's directory through the use of the Duplicate qualifier. In
this instance, the duplicated database differs from the original
database. It has the same content as the original database,
but its identity is different. As a result, .aij files cannot
be exchanged between the original database and the duplicate
database. If you use the RMU Dump command with the header
qualifier for each database, you see that the creation date for
the copy and the original database is different.
$ RMU/COPY_DATABASE/DUPLICATE MF_PERSONNEL
Example 4
The following command copies the mf_personnel database from
the DISK2:[USER2] directory to the DISK1:[USER1] directory. The
Extension=Disable qualifier causes extents to be disabled for all
the storage area (.rda) files in the DISK1:[USER1]mf_personnel
database:
$ RMU/COPY_DATABASE/EXTENSION=DISABLE/DIRECTORY=DISK1:[USER1] -
_$ DISK2:[USER2]MF_PERSONNEL
Example 5
The following command copies the mf_personnel database from the
DISK2:[USER2] directory to the DISK2:[USER1] directory. Because
the Extension=Disable qualifier is specified for only the EMPIDS_
LOW and EMPIDS_MID storage areas, extents are disabled only
for those two storage area (.rda) files in the DISK2:[USER1]mf_
personnel database:
$ RMU/COPY_DATABASE/DIRECTORY=DISK2:[USER1] DISK2:[USER2]MF_PERSONNEL -
_$ EMPIDS_LOW/EXTENSION=DISABLE,EMPIDS_MID/EXTENSION=DISABLE
Example 6
The following command uses an options file to specify that
the storage area files and snapshot (.snp) files be copied to
different disks. Note that each storage area .snp file is located
on a different disk from its associated storage area (.rda)
file; this is recommended for optimal performance. (This example
assumes that the disks specified for
each storage area file in options_file.opt are different from
those where the storage area files currently reside.)
$ RMU/COPY_DATABASE/OPTIONS=OPTIONS_FILE.OPT MF_PERSONNEL
The options file appears as:
$ TYPE OPTIONS_FILE.OPT
EMPIDS_LOW /FILE=DISK1:[CORPORATE.PERSONNEL]EMPIDS_LOW.RDA -
/SNAPSHOT=(FILE=DISK2:[CORPORATE.PERSONNEL]EMPIDS_LOW.SNP)
EMPIDS_MID /FILE=DISK3:[CORPORATE.PERSONNEL]EMPIDS_MID.RDA -
/SNAPSHOT=(FILE=DISK4:[CORPORATE.PERSONNEL]EMPIDS_MID.SNP)
EMPIDS_OVER /FILE=DISK5:[CORPORATE.PERSONNEL]EMPIDS_OVER.RDA -
/SNAPSHOT=(FILE=DISK6:[CORPORATE.PERSONNEL]EMPIDS_OVER.SNP)
DEPARTMENTS /FILE=DISK7:[CORPORATE.PERSONNEL]DEPARTMENTS.RDA -
/SNAPSHOT=(FILE=DISK8:[CORPORATE.PERSONNEL]DEPARTMENTS.SNP)
SALARY_HISTORY /FILE=DISK9:[CORPORATE.PERSONNEL]SALARY_HISTORY.RDA -
/SNAPSHOT=(FILE=DISK10:[CORPORATE.PERSONNEL]SALARY_HISTORY.SNP)
JOBS /FILE=DISK7:[CORPORATE.PERSONNEL]JOBS.RDA -
/SNAPSHOT=(FILE=DISK8:[CORPORATE.PERSONNEL]JOBS.SNP)
EMP_INFO /FILE=DISK9:[CORPORATE.PERSONNEL]EMP_INFO.RDA -
/SNAPSHOT=(FILE=DISK10:[CORPORATE.PERSONNEL]EMP_INFO.SNP)
RESUME_LISTS /FILE=DISK11:[CORPORATE.PERSONNEL]RESUME_LISTS.RDA -
/SNAPSHOT=(FILE=DISK12:[CORPORATE.PERSONNEL]RESUME_LISTS.SNP)
RESUMES /FILE=DISK9:[CORPORATE.PERSONNEL]RESUMES.RDA -
/SNAPSHOT=(FILE=DISK10:[CORPORATE.PERSONNEL]RESUMES.SNP)
Example 7
The following example copies the mf_personnel database from one
directory to another. In addition, by specifying the Aij_Options
qualifier to add after-image journal files, it enables fixed-size
journaling in the database copy and sets several of the journal
options as shown in the aij_journal_options.opt file.
$ RMU/COPY_DATABASE MF_PERSONNEL/DIRECTORY=DB1:[ROOT] -
/AIJ_OPTIONS=AIJ_JOURNAL_OPTIONS.OPT
$ TYPE AIJ_JOURNAL_OPTIONS.OPT
JOURNAL IS ENABLED -
RESERVE 2 -
ALLOCATION IS 1024 -
BACKUPS ARE MANUAL -
OVERWRITE IS DISABLED -
SHUTDOWN_TIMEOUT IS 120 -
CACHE IS DISABLED
ADD MF_PERS1 FILE DB2:[AIJONE]MF_PERS1.AIJ
ADD MF_PERS2 FILE DB3:[AIJTWO]MF_PERS2.AIJ
Example 8
The following example demonstrates the use of the Directory,
File, and Root qualifiers. In this example:
o The default directory is specified as DISK2:[DIR].
o The target directory and file name for the database root file
are specified with the Root qualifier. The target directory
specified with the Root qualifier overrides the default
directory specified with the Directory qualifier. Thus, Oracle
RMU copies the database root to DISK3:[ROOT] and names it
COPYRDB.RDB.
o The target directory for the EMPIDS_MID storage area is
DISK4:[FILE]. Oracle RMU copies EMPIDS_MID to DISK4:[FILE].
o The target file name for the EMPIDS_LOW storage area is
EMPIDS. Thus, Oracle RMU copies the EMPIDS_LOW storage area
to the DISK2 default directory (specified with the Directory
qualifier), and names the file EMPIDS.RDA.
o The target for the EMPIDS_LOW snapshot file is
DISK5:[SNAP]EMPIDS.SNP. Thus, Oracle RMU copies the EMPIDS_
LOW snapshot file to DISK5:[SNAP]EMPIDS.SNP.
o All the other storage area files and snapshot files in the mf_
personnel database are copied to DISK2:[DIR]; the file names
for these storage areas remain unchanged.
$ RMU/COPY_DATABASE DISK1:[DB]MF_PERSONNEL.RDB -
_$ /DIRECTORY=DISK2:[DIR] -
_$ /ROOT=DISK3:[ROOT]COPYRDB.RDB -
_$ EMPIDS_MID/FILE=DISK4:[FILE], -
_$ EMPIDS_LOW/FILE=EMPIDS -
_$ /SNAPSHOT=(FILE=DISK5:[SNAP]EMPIDS.SNP)
Example 9
The following example demonstrates how to disallow exclusive mode
for a copied database. It then shows the error messages returned
when a user attempts to access the copied database using the
disallowed mode:
$ RMU/COPY_DATABASE/TRANSACTION_MODE=NOEXCLUSIVE/DIRECTORY=[.COPY] -
_$ MF_PERSONNEL.RDB
%RMU-W-DOFULLBCK, full database backup should be done to ensure future
recovery
$ SQL
SQL> ATTACH 'FILENAME mf_personnel.rdb';
SQL> SET TRANSACTION READ WRITE RESERVING EMPLOYEES FOR EXCLUSIVE WRITE;
%RDB-E-BAD_TPB_CONTENT, invalid transaction parameters in the
transaction parameter block (TPB)
-RDMS-E-INVTRANOPT, the transaction option "EXCLUSIVE WRITE" is not
allowed
SQL>
13 – Delete Optimizer Statistics
Deletes records from the RDB$WORKLOAD system table.
13.1 – Description
When you enable and collect workload statistics, the system
table, RDB$WORKLOAD, is created and populated. (See Collect_
Optimizer_Statistics for details.) If you are knowledgeable
about the data in your database, or if workload statistics were
gathered for queries that are no longer in use, you might decide
that you no longer want Oracle RMU to collect statistics for
particular column groups. The RMU Delete Optimizer_Statistics
command lets you selectively delete records for column groups in
the RDB$WORKLOAD system table.
When you use the RMU Delete Optimizer_Statistics command, both
the optimizer statistics themselves and the reference to the
column duplicity factor and the null factor are deleted from the
RDB$WORKLOAD system table.
If you issue an RMU Collect Optimizer_Statistics command after
having issued an RMU Delete Optimizer_Statistics command,
statistics for the specified column group are not collected.
13.2 – Format
RMU/Delete Optimizer_Statistics root-file-spec

Command Qualifiers                        Defaults

/Column_Group=(column-list)               See description
/[No]Log[=file-name]                      See description
/Tables=(table-list)                      None - Required Qualifier
13.3 – Parameters
13.3.1 – root-file-spec
root-file-spec
Specifies the database from which optimizer statistics are to be
deleted. The default file type is .rdb.
13.4 – Command Qualifiers
13.4.1 – Column Group
Column_Group=(column-list)
Specifies a list of columns that comprise a single column group.
The columns specified must be a valid column group for a table
specified with the Tables=(table-list) qualifier. (Use the RMU
Show Optimizer_Statistics command to display valid column
groups.) When you specify the Column_Group qualifier, the entire
record in the RDB$WORKLOAD system table that holds data for the
specified column group is deleted. Therefore, the next time you
issue the RMU Collect Optimizer_Statistics command, statistics
for the specified column-group are not collected.
13.4.2 – Log
Log
Nolog
Log=file-name
Specifies whether the statistics deleted from the RDB$WORKLOAD
system table are to be logged. Specify the Log qualifier to have
the information displayed to SYS$OUTPUT. Specify the Log=file-
spec qualifier to have the information written to a file. Specify
the Nolog qualifier to prevent display of the information. If you
do not specify any variation of the Log qualifier, the default is
the current setting of the DCL verify switch. (The DCL SET VERIFY
command controls the DCL verify switch.)
13.4.3 – Tables
Tables=(table-list)
Specifies the table or tables for which column group entries are
to be deleted, as follows:
o If you specify the Tables=(table-list) qualifier, but do
not specify the Column_Group qualifier, then all column
group entries for the listed tables are deleted from the
RDB$WORKLOAD system table.
o If you specify the Tables=(table-list) qualifier, and you
specify the Column_Group=(column-list) qualifier, then the
workload statistics entries for the specified tables that
have exactly the specified column group are deleted from the
RDB$WORKLOAD system table.
o If you use an asterisk (*) with the Tables qualifier
(Tables=*), the entries for all tables registered in the
RDB$WORKLOAD table are deleted. This allows the RDB$WORKLOAD
table to be purged.
If you issue an RMU Collect Optimizer_Statistics command after
you have deleted a workload column group from the RDB$WORKLOAD
system table, those statistics are no longer collected.
The Tables=(table-list) qualifier is a required qualifier; you
cannot issue an RMU Delete Optimizer_Statistics command without
the Tables=(table-list) qualifier.
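The three Tables/Column_Group combinations above can be summarized with a small Python filter. The record layout (a table name plus a set of column names) is an assumption made for this sketch; it is not the real RDB$WORKLOAD schema.

```python
# Illustrative filter showing which workload entries each
# Tables/Column_Group combination selects for deletion. The record
# layout is an assumption, not the real system table schema.
def entries_to_delete(workload, tables, column_group=None):
    if tables == "*":
        return list(workload)            # Tables=*: purge everything
    wanted = {t.upper() for t in tables}
    if column_group is None:
        # Tables only: all column groups for the listed tables.
        return [r for r in workload if r[0].upper() in wanted]
    group = frozenset(c.upper() for c in column_group)
    # Tables plus Column_Group: only entries with exactly that group.
    return [r for r in workload
            if r[0].upper() in wanted and r[1] == group]

workload = [
    ("EMPLOYEES",   frozenset({"EMPLOYEE_ID"})),
    ("JOB_HISTORY", frozenset({"EMPLOYEE_ID"})),
    ("JOB_HISTORY", frozenset({"EMPLOYEE_ID", "JOB_CODE"})),
]
print(len(entries_to_delete(workload, ["JOB_HISTORY"])))
print(len(entries_to_delete(workload, ["JOB_HISTORY"],
                            ["EMPLOYEE_ID", "JOB_CODE"])))
```

Note that adding Column_Group narrows the deletion from every entry for the listed tables down to the single entry whose column group matches exactly.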
13.5 – Usage Notes
o To use the RMU Delete Optimizer_Statistics command for a
database, you must have the RMU$ANALYZE privilege in the root
file access control list (ACL) for the database or the OpenVMS
SYSPRV or BYPASS privilege.
o Cardinality statistics are automatically maintained by
Oracle Rdb. Physical storage and workload statistics are only
collected when you issue an RMU Collect Optimizer_Statistics
command. To get information about the usage of physical
storage and workload statistics for a given query, define
the RDMS$DEBUG_FLAGS logical name to be "O". For example:
$ DEFINE RDMS$DEBUG_FLAGS "O"
When you execute a query, if workload and physical statistics
have been used in optimizing the query, you will see a line
such as the following in the command output:
~O: Workload and Physical statistics used
o Oracle Corporation recommends that you execute an RMU Show
Optimizer_Statistics command with the Output qualifier prior
to executing an RMU Delete Optimizer_Statistics command. If
you accidentally delete statistics, you can replace them
by issuing an RMU Insert Optimizer_Statistics command and
specifying the statistical values contained in the output
file.
13.6 – Examples
Example 1
The following example issues commands to do the following:
1. Display optimizer statistics for the EMPLOYEES and JOB_HISTORY
tables and their indexes
2. Delete the entries for the column group (EMPLOYEE_ID, JOB_
CODE, JOB_START, JOB_END, DEPARTMENT_CODE, SUPERVISOR_ID) in
JOB_HISTORY
$ RMU/SHOW OPTIMIZER_STATISTICS MF_PERSONNEL.RDB -
_$ /TABLES=(EMPLOYEES, JOB_HISTORY)/STATISTICS=(WORKLOAD)
-----------------------------------------------------------------------
Optimizer Statistics for table : EMPLOYEES
Workload Column group : EMPLOYEE_ID
Duplicity factor : 1.0000000
Null factor : 0.0000000
First created time : 3-JUL-1996 10:37:36.43
Last collected time : 3-JUL-1996 10:46:10.73
Workload Column group : LAST_NAME, FIRST_NAME, MIDDLE_INITIAL,
ADDRESS_DATA_1, ADDRESS_DATA_2, CITY, STATE, POSTAL_CODE, SEX,
BIRTHDAY, STATUS_CODE
Duplicity factor : 1.5625000
Null factor : 0.3600000
First created time : 3-JUL-1996 10:37:36.43
Last collected time : 3-JUL-1996 10:46:10.74
----------------------------------------------------------------------
Optimizer Statistics for table : JOB_HISTORY
Workload Column group : EMPLOYEE_ID
Duplicity factor : 2.7400000
Null factor : 0.0000000
First created time : 3-JUL-1996 10:37:36.43
Last collected time : 3-JUL-1996 10:54:09.62
Workload Column group : EMPLOYEE_ID, JOB_CODE, JOB_START,
JOB_END, DEPARTMENT_CODE, SUPERVISOR_ID
Duplicity factor : 1.5930233
Null factor : 0.3649635
First created time : 3-JUL-1996 10:57:47.65
Last collected time : 3-JUL-1996 10:57:47.65
$ !
$ ! Delete one of the entries for JOB_HISTORY
$ !
$ RMU/DELETE OPTIMIZER_STATISTICS MF_PERSONNEL.RDB/TABLE=(JOB_HISTORY) -
_$ /COLUMN_GROUP=(EMPLOYEE_ID,JOB_CODE,JOB_START,JOB_END, -
_$ DEPARTMENT_CODE,SUPERVISOR_ID)/LOG
Changing RDB$SYSTEM area to READ_WRITE.
Workload column group deleted for JOB_HISTORY : EMPLOYEE_ID,
JOB_CODE, JOB_START, JOB_END, DEPARTMENT_CODE,
SUPERVISOR_ID
14 – Dump
Dumps the contents of database files, including: storage area
files, snapshot files, recovery-unit journal files, after-image
journal files, optimized after-image journal files, and root
files. You can dump database file contents to your terminal
screen or to a text file.
14.1 – Database
Displays or writes to a specified output file the contents
of database, storage area (.rda), and snapshot (.snp) files,
including root information.
NOTE
The Start and End qualifiers apply only when the Areas,
Lareas, Snapshots, Abms_Only or Spams_Only qualifier is
specified.
14.1.1 – Description
Use this command to examine the contents of your database root
(.rdb), storage area (.rda), and snapshot (.snp) files, to
display current settings for database definition options, and
to display a list of active database users. The list of database
users is maintained clusterwide in a VMScluster environment.
You can display the contents of all pages in any data storage
area of the database or display the contents of just those pages
in which rows and indexes for a specific table are stored.
See the chapter that explains the internal database page format
in the Oracle Rdb Guide to Database Maintenance for tutorial
information.
Depending on your selection of qualifiers, the RMU Dump command
can list:
o A formatted display of any number of pages in the storage area
of the database.
o A formatted display of any number of pages in a uniform
logical area of the database.
o A formatted display of any number of pages in the snapshot
area of the database.
o Header information. (This is listed by default if no
qualifiers are specified.)
o Current users of the database.
14.1.2 – Format
RMU/Dump root-file-spec

File Qualifiers                           Defaults

/ABMS_Only                                See description
/[No]Areas[=storage-area-list]            /Noareas
/End=integer                              See description
/[No]Header[=detail-opt, type-opts]       See description
/[No]Lareas[=logical-area-list]           /Nolareas
/Option={Normal | Full | Debug}           /Option=Normal
/Output=file-name                         /Output=SYS$OUTPUT
/Restore_Options=file-name                None
/[No]Snapshots[=storage-area-list]        /Nosnapshots
/Spams_Only                               See description
/Start=integer                            See description
/State=Blocked                            See description
/[No]Users                                /Nousers
14.1.3 – Parameters
14.1.3.1 – root-file-spec
A file specification for the database root file whose root file
header information, user information, storage area file pages, or
snapshot area file pages you want to display.
14.1.4 – Command Qualifiers
14.1.4.1 – ABMS Only
Specifies that the RMU/DUMP command dumps only ABM pages
in uniform storage areas or in logical areas contained within
uniform storage areas.
The ABM pages can be dumped within a limited page range specified
by the Start and End qualifiers.
If there are no ABM pages within the specified page range, if the
storage area is a mixed format area, or if the logical area is
contained within a mixed storage area, no ABM pages are dumped.
This qualifier cannot be specified in the same Dump command as
the SPAMS_Only qualifier. This qualifier cannot be specified in
the same Dump command with the Snapshots qualifier.
14.1.4.2 – Areas
Areas [=storage-area-list]
Noareas
Specifies a display that consists of storage area pages. You can
specify storage areas by name or by the area's ID number.
If you specify more than one storage area, separate the storage
area names or ID numbers in the storage area list with commas,
and enclose the list within parentheses.
You can also specify the Areas=* qualifier to display all storage
areas. If you do not specify the Areas qualifier, none of the
storage areas are displayed.
You can use the Start and End qualifiers to display a range of
storage area pages.
The Areas qualifier can be used with indirect file references.
See the Indirect-Command-Files help entry for more information.
14.1.4.3 – End
End=integer
Specifies the highest-numbered area or snapshot page to include
in the display. The default is the last page.
If you also use the Lareas qualifier, note that the Start and End
qualifiers specify a page range relative to the logical area, not
a specific storage area page number.
14.1.4.4 – Header
Header
Noheader
Header[=(detail-opt, type-opts)]
Indicates whether to include the database header in the output.
Specify the Header qualifier to include all database header
information in the output. Specify the Noheader qualifier to
suppress the database header listing. Specify the Header=(detail-
opt, type-opts) qualifier to limit the output from the header to
specific items of interest. Use the detail-opt options (Brief or
Detail) to limit the amount of output. Use the type-opt options
to limit the output to specific types of information.
Table 8 summarizes the Header options and the effects of
specifying each option.
Table 8 RMU Dump Command Header Options
Option Effect
All Generates the full output of all the header
information. If you specify this option and
other Header options, the other options are
ignored. This is the default option.
Areas Output displays information about active
storage areas and snapshot areas.
Backup Output displays information about backup and
recovery.
Brief Generates a summary of the requested database
root file information.
Buffers Output displays information about database
buffers.
Corrupt_Page Output displays the Corrupt Page Table (CPT).
Detail Generates a complete report of the requested
database root file information. This is the
default.
Fast_Commit Output displays information about whether
fast commit is enabled or disabled, whether
commit to AIJ optimization is enabled or
disabled, the AIJ checkpointing intervals,
and the transaction interval.
Hot_Standby Output displays information regarding hot
standby databases.
Locking Output displays information about database
locking, such as whether or not adjustable
record locking, carry-over lock optimization,
and lock tree partitioning are enabled or
disabled, and fanout factors.
Journaling Output displays information about RUJ and AIJ
journaling.
Nodes Output displays names of nodes that are
accessing the specified database.
Parameters Output displays basic root file header
information.
Root_Record Output describes the Oracle Rdb specific
section of the database root. This includes
backup, restore, verify, and alter timestamps,
as well as flags that indicate that no such
operation has been performed. The bootstrap
DBKEY is used to locate the RDB$DATABASE
row for this database, and then the other
system tables. If an alternate bootstrap
DBKEY exists, then this database has been
converted using the RMU Convert Nocommit command.
In this case, the current metadata version is
displayed.
Row_Caches Output displays information about row caches.
Security_Audit Output displays information about security
auditing.
Sequence_Numbers Output displays database sequence numbers.
Users Output displays information about active
database users.
If you specify both the Detail option and the Brief option,
Detail takes precedence. If you specify the All option and other
detail-opt options, the All option takes precedence. If you
specify the Brief option or the Detail option only, the default
for the type-opt is All. If you specify type-opts options, but do
not specify a detail-opt option, the default for the detail-opt
is Detail.
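The precedence and default rules above can be expressed as a short Python sketch. The option names come from Table 8; the resolution function itself is an illustration of the stated rules, not RMU code.

```python
# Sketch of the documented precedence rules for
# Header=(detail-opt, type-opts). Illustration only, not RMU code.
DETAIL_OPTS = {"BRIEF", "DETAIL"}

def resolve_header(options=()):
    opts = {o.upper() for o in options}
    detail_opts = opts & DETAIL_OPTS
    type_opts = opts - DETAIL_OPTS
    # Detail takes precedence over Brief, and Detail is the default
    # when only type options (or no options) are specified.
    detail = "BRIEF" if detail_opts == {"BRIEF"} else "DETAIL"
    # All takes precedence over other type options, and All is the
    # default when only a detail option (or nothing) is specified.
    types = {"ALL"} if not type_opts or "ALL" in type_opts else type_opts
    return detail, types

print(resolve_header())                   # default: (All, Detail)
print(resolve_header(["Brief", "Detail", "Users"]))   # Detail wins
```

For instance, Header=(Brief) yields a brief report of all header information, while Header=(Users) yields a detailed report of user information only.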
If you specify more than one option, separate the options with
commas and enclose the list within parentheses.
See the Usage_Notes help entry under this command for information
on understanding the derived values found in the database header.
The Header=All and Header=Root_Record qualifiers output
information on the use of the RMU Alter command on the specified
database. For example, you see the following line in the output
if you have never used the RMU Alter command on the database:
Database has never been altered
Do not confuse this with alterations made by SQL ALTER
statements. Information about alterations made with the SQL
ALTER statement is not included in the output from the RMU Dump
command.
If you specify the Areas, Lareas, or Snapshots qualifier, the
Noheader qualifier is the default. Otherwise, Header=(All,
Detail) is the default.
It is invalid to specify the Header=Root_Record and the
Option=Debug qualifiers in the same Oracle RMU command line.
See the Oracle Rdb7 and Oracle CODASYL DBMS: Guide to Hot
Standby Databases manual for information about the "Hot Standby"
references in the database header.
For complete information on the contents of the database header,
see the Oracle Rdb Guide to Database Maintenance.
14.1.4.5 – Lareas
Lareas[=logical-area-list]
Nolareas
Specifies a display that consists of storage area pages allocated
to a logical area or areas. In a single-file database, each table
in the database is stored in its own logical area.
You cannot use the Lareas qualifier with logical areas that are
stored in storage areas that have a mixed page format.
If you specify more than one logical area name, separate the
logical area names in the list with commas, and
enclose the list within parentheses.
You can also specify the Lareas=* qualifier to display all
logical areas that have a uniform page format.
The default is the Nolareas qualifier.
The Lareas qualifier can be used with indirect file references.
See the Indirect-Command-Files help entry for more information.
14.1.4.6 – Option
Option=type
Specifies the type of information and level of detail the output
will include. Three types of output are available:
o Normal
The output includes summary information. This is the default.
o Full
In addition to the Normal information, the output includes
more detailed information.
o Debug
In addition to Normal and Full information, the output
includes internal information about the data. In general,
use the Debug option for diagnostic support purposes.
14.1.4.7 – Output
Output=file-name
Specifies the name of the file where output is to be sent. The
default is SYS$OUTPUT. The default output file type is .lis, if
you specify a file name.
14.1.4.8 – Restore Options
Restore_Options=file-name
Generates an options file designed to be used with the Options
qualifier of the RMU Restore command.
The Restore_Options file is created by reading the database root
file. Therefore, there is no guarantee that this options file
will work with all backup files you attempt to restore with
a Restore operation. For example, if areas have been added or
deleted from the database since the backup file was created,
there will be a mismatch between the Restore_Options file and the
backup file. Similarly, if the backup file was created by a
backup by-area operation, the Restore_Options file may refer to
areas that are not in the backup file.
By default, a Restore_Options file is not created. If you
specify the Restore_Options qualifier with a file name but no
file extension, Oracle RMU uses an extension of .opt by default.
14.1.4.9 – Snapshots
Snapshots[=storage-area-list]
Nosnapshots
Specifies a display that consists of snapshot file pages. The
RMU Dump command does not display snapshot pages if you omit the
Snapshots qualifier or if you specify the Nosnapshots qualifier.
In a single-file database, there is only one snapshot file. In
a multifile database, each storage area has a corresponding
snapshot file. Note that this qualifier specifies the storage
area name, not the snapshot file name. If you specify more than
one storage area name, separate the storage area names with
commas, and enclose the storage-area-list within parentheses.
If you specify the Snapshots qualifier without a storage area
name, information is displayed for all snapshot files.
You can use the Start and End qualifiers to display a range of
snapshot file pages.
The default is the Nosnapshots qualifier.
The Snapshots qualifier can be used with indirect file
references. See the Indirect-Command-Files help entry for more
information.
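For example, a command of the following form displays the
snapshot pages for two storage areas of the mf_personnel
database. The second area name is illustrative; substitute
storage area names from your database.

```
$ RMU/DUMP/NOHEADER/SNAPSHOTS=(EMPIDS_LOW,EMPIDS_MID) MF_PERSONNEL
```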
14.1.4.10 – Spams Only
Spams_Only
Allows you to dump only the space area management (SPAM) pages in
the selected areas and page range.
A common usage for the RMU Dump command is to track down problems
with storage allocation and record placement. When this qualifier
is used, the SPAM pages are dumped, allowing you to locate the
individual data pages that you want to examine.
There is no negated form for this qualifier, and, if it is
omitted, all the selected pages are dumped.
The Start and End qualifiers can be used with the Spams_Only
qualifier.
14.1.4.11 – Start
Start=integer
Specifies the lowest-numbered area or snapshot page to include in
the display. The default is the first page; that is, the Start=1
qualifier.
If you also use the Lareas qualifier, note that the Start and End
qualifiers specify a page range relative to the logical area, not
a specific storage area page number.
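For example, a command of the following form dumps pages 10
through 20 relative to the start of a logical area. The logical
area name is illustrative.

```
$ RMU/DUMP/LAREAS=(EMPLOYEES)/START=10/END=20 MF_PERSONNEL
```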
14.1.4.12 – State
State=Blocked
Specifies a list of all unresolved distributed transactions in
the blocked database. A blocked database is a database that is
not committed or rolled back and is involved in an unresolved
distributed transaction. The State=Blocked qualifier displays the
following information about each transaction:
o Process identification (PID)
o Stream identification
o Monitor identification
o Transaction identification
o Name of the recovery journal
o Transaction sequence number (TSN)
o Distributed transaction identifier (TID)
o Name of the node on which the failure occurred
o Name of the node initiating the transaction (parent node)
You can use the State=Blocked qualifier only with the Users
qualifier. For information on resolving unresolved transactions
with the RMU Dump command, see the Oracle Rdb7 Guide to
Distributed Transactions.
14.1.4.13 – Users
Users
Nousers
Lists information about the current users of the database,
including all users in a VMScluster environment. Oracle RMU does
not consider a process that is running the Performance Monitor
(with the RMU Show Statistics command or through the Windowing
interface) to be a database user.
The default is Nousers.
14.1.5 – Usage Notes
o To use the RMU Dump command with the Areas qualifier or the
Lareas qualifier or the Snapshots qualifier for a database,
you must have the RMU$DUMP privilege in the root file access
control list (ACL) for the database or the OpenVMS SYSPRV or
BYPASS privilege.
To use the RMU Dump command with the Header qualifier for a
database, you must have the RMU$DUMP, RMU$BACKUP, or RMU$OPEN
privileges in the root file access control list (ACL) for the
database, or the OpenVMS SYSPRV or BYPASS privilege.
To use the RMU Dump command with the Users qualifier, you must
have the RMU$DUMP, RMU$BACKUP, or RMU$OPEN privileges in the
root file access control list (ACL) for the database or the
OpenVMS WORLD privilege.
o The Spams_Only qualifier conflicts with the Lareas and
Snapshots qualifiers; an error is generated if you specify
the Spams_Only qualifier with either of the other qualifiers.
o The Header=All and Header=Buffers qualifiers display two
derived values that estimate the size of the global section.
These appear in the dump file as:
Derived Data...
- Global section size
With global buffers disabled is 43451 bytes
With global buffers enabled is 941901 bytes
The first value (With global buffers disabled) indicates the
approximate size of the global section when local buffers are
being used. The second value (With global buffers enabled)
indicates the approximate size of the global section if you
were to enable global buffers.
You can use these values to determine approximately how
much bigger the global section becomes if you enable global
buffers. This allows you to determine, without having to
take the database off line, how much larger to make the
VIRTUALPAGECNT and GBLPAGES SYSGEN parameters to accommodate
the larger global section.
However, note that you must take the database off line if
you decide to enable global buffers and you must shut down
and reboot the system to change the SYSGEN parameters. It
is recommended that you run AUTOGEN after you change SYSGEN
parameters.
Also note that these changes may require you to change the
MONITOR account quotas as well to ensure the paging file quota
is adequate.
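For example, a command of the following form (the output file
name is illustrative) dumps the buffer information, including
the derived global section sizes, to a listing file:

```
$ RMU/DUMP/HEADER=BUFFERS/OUTPUT=BUFFERS.LIS MF_PERSONNEL
```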
14.1.6 – Examples
Example 1
The following example displays the header information for the
mf_personnel database on the terminal screen:
$ RMU/DUMP MF_PERSONNEL
Example 2
The following example generates a list of unresolved transactions
for the mf_personnel database:
$ RMU/DUMP/USERS/STATE=BLOCKED MF_PERSONNEL
Example 3
The following example shows the command you might use to view the
SPAM pages associated with the area EMPIDS_LOW:
$ RMU/DUMP/NOHEADER/AREAS=(EMPIDS_LOW)/SPAMS_ONLY -
_$ MF_PERSONNEL/OUTPUT=DUMP.LIS
Example 4
The following example demonstrates the use of the Restore_Options
qualifier. The first command performs a dump operation on the mf_
personnel database and creates a Restore_Options file. The second
command shows a portion of the contents of the options file. The
last command demonstrates the use of the options file with the
RMU Restore command.
$ RMU/DUMP MF_PERSONNEL.RDB /RESTORE_OPTIONS=MF_PERS.OPT -
_$ /OUTPUT=DUMP.LIS
$ TYPE MF_PERS.OPT
! Options file for database USER1:[DB]MF_PERSONNEL.RDB;1
! Created 19-JUL-1995 14:55:17.80
! Created by DUMP command
RDB$SYSTEM -
/file=USER2:[STO]MF_PERS_DEFAULT.RDA;1 -
/extension=ENABLED -
/read_write -
/spams -
/snapshot=(allocation=100, -
file=USER2:[SNP]MF_PERS_DEFAULT.SNP;1)
DEPARTMENTS -
/file=USER3:[STO]DEPARTMENTS.RDA;1 -
/blocks_per_page=2 -
/extension=ENABLED -
/read_write -
/spams -
/thresholds=(70,85,95) -
/snapshot=(allocation=100, -
file=USER3:[SNP]DEPARTMENTS.SNP;1)
.
.
.
$ RMU/RESTORE MF_PERSONNEL.RBF/OPTIONS=MF_PERS.OPT
Example 5
The following command generates a detailed display of backup,
recovery, RUJ, and AIJ information for the mf_personnel database.
$ RMU/DUMP/HEADER=(BACKUP,JOURNALING) MF_PERSONNEL.RDB
See the Oracle Rdb Guide to Database Maintenance and the Oracle
Rdb7 Guide to Distributed Transactions for more examples showing
the RMU Dump command and the output.
Example 6
The following example dumps all ABM pages contained in all
uniform storage areas in the specified Rdb database.
$ RMU/DUMP/ABMS_ONLY/OUT=DMP.OUT MF_PERSONNEL
Example 7
In the following example, only the ABM pages contained in the
named uniform storage area in the specified Rdb database are
dumped.
$ RMU/DUMP/ABMS_ONLY/AREA=RDB$SYSTEM MF_PERSONNEL
Example 8
In the following example, only the ABM pages contained in the
named logical area in a uniform storage area in the specified Rdb
database are dumped.
$ RMU/DUMP/ABMS_ONLY/LAREA=RDB$RELATIONS MF_PERSONNEL
Example 9
In the following example, only the ABM pages contained within
the specified page range in the named uniform storage area in the
specified Rdb database are dumped.
$ RMU/DUMP/ABMS_ONLY/AREA=RDB$SYSTEM/START=1/END=5 MF_PERSONNEL
14.2 – After Journal
Displays an after-image journal (.aij) file, a backed up .aij
file (.aij if the backup is on disk, .aij_rbf if the .aij file
was backed up to tape), or an optimized after-image journal
(.oaij) file in ASCII format. Use this command to examine the
contents of your .aij, .aij_rbf, or .oaij file. Whenever the
term .aij file is used in this RMU Dump After_Journal command
description, it refers to .oaij and .aij_rbf files, as well as
.aij files.
An .aij file contains header information and data blocks. Header
information describes the data blocks, which contain copies of
data stored in the database file.
14.2.1 – Description
The RMU Dump After_Journal command specifies an .aij file, not a
database file, as its parameter, and is a separate command from
the RMU Dump command used to display database areas and header
information.
The .aij file is in binary format. This command translates the
binary file into an ASCII display format.
The RMU Dump After_Journal command always includes the header of
the .aij file in the display. You can use the Nodata qualifier to
exclude data blocks from the display entirely, or you can use the
Start and End qualifiers to restrict the data block display to
a specific series of blocks. If you do not specify any of these
qualifiers, Oracle RMU includes all data blocks.
14.2.2 – Format
RMU/Dump/After_Journal aij-file-name

File Qualifiers                            Defaults

/Active_IO=max-reads                       /Active_IO=3
/Area=integer                              None
/[No]Data                                  /Data
/Encrypt=({Value=|Name=}[,Algorithm=])     See description
/End=integer                               See description
/First=(select-list)                       See description
/Format={Old_File|New_Tape}                Format=Old_File
/Label=(label-name-list)                   See description
/Larea=integer                             None
/Last=(select-list)                        See description
/Librarian[=options]                       None
/Line=integer                              None
/[No]Media_Loader                          See description
/Only=(select-list)                        See description
/Option={Statistics|Nostatistics}          Option=Statistics
/Output=file-name                          /Output=SYS$OUTPUT
/Page=integer                              None
/Prompt={Automatic|Operator|Client}        See description
/[No]Rewind                                Norewind
/Start=integer                             See description
/State=Prepared                            See description
14.2.3 – Parameters
14.2.3.1 – aij-file-name
The .aij file you want to display. The default file type is .aij.
For .oaij files, you must specify the file type of .oaij.
14.2.4 – Command Qualifiers
14.2.4.1 – Active IO
Active_IO=max-reads
Specifies the maximum number of read operations from a backup
device that the RMU Dump After_Journal command will attempt
simultaneously. This is not the maximum number of read operations
in progress; that value is the product of active system I/O
operations.
The value of the Active_IO qualifier can range from 1 to 5. The
default value is 3. Values larger than 3 can improve performance
with some tape drives.
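For example, a command of the following form (the device and
file names are illustrative) raises the simultaneous read count
to 5 when dumping a tape backup written in the new format:

```
$ RMU/DUMP/AFTER_JOURNAL/FORMAT=NEW_TAPE/ACTIVE_IO=5 -
_$ MUA0:AIJ_BACKUP.AIJ_RBF
```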
14.2.4.2 – Area
Area=integer
Identifies a physical database storage area by number. Dump
output is limited to the specified area. The minimum value is
1.
14.2.4.3 – Data
Data
Nodata
Specifies whether you want to display data blocks of the .aij
file, or just the .aij file header.
The Data qualifier is the default. It causes the display of the
.aij file data blocks (in addition to the file header) in an
ASCII display format.
The Nodata qualifier limits the display to the record headers of
the .aij file.
14.2.4.4 – Encrypt
Encrypt=({Value=|Name=}[,Algorithm=])
The Encrypt qualifier decrypts the file of an after-image journal
backup.
Specify a key value as a string or the name of a predefined key.
If no algorithm name is specified the default is DESCBC. For
details on the Value, Name and Algorithm parameters type HELP
ENCRYPT at the OpenVMS prompt.
This feature requires the OpenVMS Encrypt product to be installed
and licensed on your system.
This feature only works for a newer format backup file which
has been created using the Format=New_Tape qualifier. You must
specify the Format=New_Tape qualifier with this command if you
use the Encrypt qualifier.
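For example, a command of the following form decrypts and dumps
an encrypted .aij backup. The key value and file name are
illustrative; because no algorithm is specified, DESCBC is used,
and the Format=New_Tape qualifier is required because the
Encrypt qualifier is present.

```
$ RMU/DUMP/AFTER_JOURNAL/FORMAT=NEW_TAPE -
_$ /ENCRYPT=(VALUE="MY_KEY_VALUE") AIJ_BACKUP.AIJ_RBF
```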
14.2.4.5 – End
End=integer
Specifies the number of the last data block that you want to
display. The default integer is the number of the last data block
in the file. If you do not use the End qualifier, Oracle RMU
displays the entire .aij file.
14.2.4.6 – First
First=(select-list)
Allows you to specify where you want the dump output to begin.
(See the Last=(select-list) qualifier for the end of the range.)
If you specify more than one keyword in the select-list, separate
the keywords with commas and enclose the list in parentheses.
If you specify multiple items in the select list, the first
occurrence is the one that will activate Oracle RMU. For example,
if you specify First=(Block=100,TSN=0:52), the dump will start
when either block 100 or TSN 52 is encountered.
The First and Last qualifiers are optional. You can specify both,
either, or neither of them. The keywords specified for the First
qualifier can differ from the keywords specified for the Last
qualifier.
The select-list of the First qualifier consists of a list of one
or more of the following keywords:
o BLOCK=block-number
Specifies the first block in the AIJ journal.
o RECORD=record-number
Specifies the first record in the AIJ journal. This is the
same as the existing Start qualifier, which is still supported
but obsolete.
o TID=tid
Specifies the first TID in the AIJ journal.
o TIME=date_time
Specifies the first date and time in the AIJ journal, using
absolute or delta date-time format.
o TSN=tsn
Specifies the first TSN in the AIJ journal, using the standard
[n:]m TSN format.
By default, the entire .aij file is dumped.
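For example, a command of the following form (the date-time
values are illustrative) restricts the dump to records written
within a particular time window:

```
$ RMU/DUMP/AFTER_JOURNAL/FIRST=(TIME="3-NOV-2005 10:00:00.00") -
_$ /LAST=(TIME="3-NOV-2005 11:00:00.00") MF_PERSONNEL.AIJ
```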
14.2.4.7 – Format
Format=Old_File
Format=New_Tape
Specifies whether the backup or optimized .aij file was written
in the old (disk-optimized) or the new (tape-optimized) format.
If you enter the RMU Dump After_Journal command without the
Format qualifier, the default is the Format=Old_File qualifier.
You must specify the same Format qualifier as was used with the
RMU Backup After_Journal command or the RMU Optimize After_
Journal command. If your .aij file resides on disk, you should
use the Format=Old_File qualifier.
If you specified the Format=Old_File qualifier when you optimized
or backed up the .aij file to tape, you must mount the backup
media by using the DCL MOUNT command before you issue the RMU
Dump After_Journal command. Because the RMU Dump After_Journal
command uses RMS to read the tape, the tape must be mounted as
an OpenVMS volume (that is, do not specify the /FOREIGN qualifier
with the MOUNT command).
If you specify the Format=New_Tape qualifier, you must mount the
backup media by using the DCL MOUNT /FOREIGN command before you
issue the RMU Dump After_Journal command.
Similarly, if you specify OpenVMS access (you do not specify
the /FOREIGN qualifier on the DCL MOUNT command) although your
.aij backup was created using the Format=New_Tape qualifier, you
receive an RMU-F-MOUNTFOR error.
The following tape qualifiers have meaning only when used in
conjunction with the Format=New_Tape qualifier:
Active_IO
Label
Rewind
14.2.4.8 – Label
Label=(label-name-list)
Specifies the 1- to 6-character string with which the volumes
of the backup file have been labeled. The Label qualifier is
applicable only to tape volumes. You must specify one or more
label names when you use the Label qualifier.
You can specify a list of tape labels for multiple tapes. If you
list multiple tape label names, separate the names with commas
and enclose the list of names within parentheses.
In a normal dump after-journal operation, the Label qualifier
you specify with the RMU Dump After_Journal command should be
the same Label qualifier you specified with the RMU Backup After_
Journal command to back up your after-image journal file.
The Label qualifier can be used with indirect file references.
See Indirect-Command-Files for more information.
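For example, a command of the following form (the label, device,
and file names are illustrative) reads a backup that spans two
tape volumes:

```
$ RMU/DUMP/AFTER_JOURNAL/FORMAT=NEW_TAPE/LABEL=(AIJ001,AIJ002) -
_$ MUA0:AIJ_BACKUP.AIJ_RBF
```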
14.2.4.9 – Larea
Larea=integer
Identifies a logical database storage area by number. Dump output
is limited to the specified area. The minimum value is 0.
14.2.4.10 – Last
Last=(select-list)
Allows you to specify where you want the dump output to end. (See
the First=(select-list) qualifier for the beginning range.) If
you specify more than one keyword in the select-list, separate
the keywords with commas and enclose the list in parentheses.
If you specify multiple items in the select list, the first
occurrence is the one that will activate Oracle RMU.
The First and Last qualifiers are optional. You can specify both,
either, or neither of them. The keywords specified for the First
qualifier can differ from the keywords specified for the Last
qualifier.
The select-list of the Last qualifier consists of a list of one
or more of the following keywords:
o BLOCK=block-number
Specifies the last block in the AIJ journal.
o RECORD=record-number
Specifies the last record in the AIJ journal. This is the same
as the existing End qualifier, which is still supported but
obsolete.
o TID=tid
Specifies the last TID in the AIJ journal.
o TIME=date_time
Specifies the last date and time in the AIJ journal, using
absolute or delta date-time format.
o TSN=tsn
Specifies the last TSN in the AIJ journal, using the standard
[n:]m TSN format.
By default, the entire .aij file is dumped.
14.2.4.11 – Librarian
Librarian=options
Use the Librarian qualifier to restore files from data archiving
software applications that support the Oracle Media Management
interface. The file name specified on the command line identifies
the stream of data to be retrieved from the Librarian utility. If
you supply a device specification or a version number it will be
ignored.
Oracle RMU supports retrieval using the Librarian qualifier only
for data that has been previously stored by Oracle RMU using the
Librarian qualifier.
The Librarian qualifier accepts the following options:
o Trace_file=file-specification
The Librarian utility writes trace data to the specified file.
o Level_Trace=n
Use this option as a debugging tool to specify the level of
trace data written by the Librarian utility. You can use a
pre-determined value of 0, 1, or 2, or a higher value defined
by the Librarian utility. The pre-determined values are:
- Level 0 traces all error conditions. This is the default.
- Level 1 traces the entry and exit from each Librarian
function.
- Level 2 traces the entry and exit from each Librarian
function, the value of all function parameters, and the
first 32 bytes of each read/write buffer, in hexadecimal.
o Logical_Names=(logical_name=equivalence-value,...)
You can use this option to specify a list of process logical
names that the Librarian utility can use to specify catalogs
or archives where Oracle Rdb backup files are stored,
Librarian debug logical names, and so on. See the specific
Librarian documentation for the definition of logical names.
The list of process logical names is defined by Oracle RMU
prior to the start of any Oracle RMU command that accesses the
Librarian utility.
The following OpenVMS logical names must be defined for use with
a Librarian utility before you execute an Oracle RMU backup or
restore operation. Do not use the Logical_Names option provided
with the Librarian qualifier to define these logical names.
o RMU$LIBRARIAN_PATH
This logical name must be defined so that the shareable
Librarian image can be loaded and called by Oracle RMU backup
and restore operations. The translation must include the file
type (for example, .exe), and must not include a version
number. The shareable Librarian image must be an installed
(known) image. See the Librarian utility documentation for
the name and location of this image and how it should be
installed.
o RMU$DEBUG_SBT
This logical name is not required. If it is defined, Oracle
RMU will display debug tracing information messages from
modules that make calls to the Librarian shareable image.
You cannot use device specific qualifiers such as Rewind,
Density, or Label with the Librarian qualifier because the
Librarian utility handles the storage media, not Oracle RMU.
14.2.4.12 – Line
Line=integer
Identifies a database line number. Dump output is limited to
the specified line. The minimum value is 0. This qualifier is
intended for use during analysis or debugging.
14.2.4.13 – Media Loader
Media_Loader
Nomedia_Loader
Use the Media_Loader qualifier to specify that the tape device
from which the file is being read has a loader or stacker. Use
the Nomedia_Loader qualifier to specify that the tape device does
not have a loader or stacker.
By default, Oracle RMU should recognize whether a tape device
has a loader or stacker. However, occasionally Oracle RMU does
not recognize that a tape device has a loader or stacker; in
that case, when the first tape has been read, Oracle RMU issues
a request to the operator for the next tape, instead of
requesting the next tape from the loader or stacker. Similarly,
sometimes Oracle RMU behaves as though a tape device has a
loader or stacker when actually it does not.
If you find that Oracle RMU is not recognizing that your tape
device has a loader or stacker, specify the Media_Loader
qualifier. If you find that Oracle RMU expects a loader or
stacker when it should not, specify the Nomedia_Loader qualifier.
14.2.4.14 – Only
Only=(select-list)
Allows you to specify one select list item to output. (See also
the First=(select-list) and Last=(select-list) qualifiers for
specifying a range.) If you specify more than one keyword in the
select-list, separate the keywords with commas and enclose the
list in parentheses. If you specify multiple items in the select
list, the first occurrence is the one that will activate Oracle
RMU.
The Only qualifier is optional.
The select-list of the Only qualifier consists of a list of one
or more of the following keywords:
o TID=tid
Specifies a TID in the AIJ journal.
o TSN=tsn
Specifies a TSN in the AIJ journal, using the standard [n:]m
TSN format.
o Type=type-list
Specifies the types of records to be dumped. The type-list
consists of a list of one or more of the following keywords:
- Ace_header
Type=A records
- Checkpoint
Type=B records
- Close
Type=K records
- Commit
Type=C records
- Data
Type=D records
- Group
Type=G records
- Information
Type=N records
- Open
Type=O records
- Optimize_information
Type=I records
- Prepare
Type=V records
- Rollback
Type=R records
By default, the entire .aij file is dumped.
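For example, a command of the following form (the TSN value is
illustrative) dumps only the records associated with a single
transaction:

```
$ RMU/DUMP/AFTER_JOURNAL/ONLY=(TSN=0:640) MF_PERSONNEL.AIJ
```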
14.2.4.15 – Option
Option=Statistics
Option=Nostatistics
The Option=Statistics qualifier specifies that you want Oracle
RMU to include statistics on how frequently database pages are
referenced by the data records in the .aij file. In addition, if
the database root file is available, the output created by the
Option=Statistics qualifier includes the value to specify for
the Aij_Buffers qualifier of the RMU Recover command. If several
.aij files will be used in your recovery operation, perform an
RMU Dump After_Journal on each .aij file and add the recommended
Aij_Buffer values. Use the total as the value you specify with
the Aij_Buffers qualifier. See Example 2 in the Examples help
entry under this command for an example using this qualifier.
Note that the value recommended for the RMU Recover command's
Aij_Buffers qualifier is the exact number of buffers required
by the data records in the specified .aij file. If you specify
fewer buffers, you may see more I/O, but you will not necessarily
see performance degrade. (Performance also depends on whether
asynchronous batch-writes are enabled.)
Using more buffers than are recommended may result in your
process doing more paging than required, and if so, performance
degrades.
If you specify the recommended value, note that this does not
mean that no buffers are replaced during the recovery operation.
The Oracle RMU buffer replacement strategy is affected by
whether asynchronous prefetches and asynchronous batch-writes are
enabled, and on the contents of the buffers before the recovery
operation begins.
If the database root file is not available, the Option=Statistics
qualifier does not provide a value for the RMU Recover command's
Aij_Buffers qualifier. However, it does provide the statistics on
the frequency with which each page is accessed.
Specify the Option=Nostatistics qualifier to suppress .aij
statistics generation.
The default for the RMU Dump After_Journal command is
Option=Statistics.
14.2.4.16 – Output
Output=file-name
Specifies the name of the file where output will be sent. The
default is SYS$OUTPUT. If you specify a file name, the default
file type is .lis.
14.2.4.17 – Page
Page=integer
Identifies a database page number. Dump output is limited to
the specified page. The minimum value is 1. This qualifier is
intended for use during analysis or debugging.
14.2.4.18 – Prompt
Prompt=Automatic
Prompt=Operator
Prompt=Client
Specifies where server prompts are to be sent. When you specify
Prompt=Automatic, prompts are sent to the standard input device,
and when you specify Prompt=Operator, prompts are sent to the
server console. When you specify Prompt=Client, prompts are sent
to the client system.
14.2.4.19 – Rewind
Rewind
Norewind
Specifies that the magnetic tape that contains the backup file
will be rewound before processing begins. The tape is searched
for the backup file starting at the beginning-of-tape (BOT). The
Norewind qualifier is the default and causes a search for the
backup file to be started at the current tape position.
The Rewind and Norewind qualifiers are applicable only to tape
devices.
14.2.4.20 – Start
Start=integer
Specifies the number of the first data block that you want to
display. If you do not use the Start qualifier, the display
begins with the first record in the .aij file.
14.2.4.21 – State
State=Prepared
Specifies a list of all records associated with unresolved
transactions.
For more information on listing unresolved transactions with
the RMU Dump After_Journal command, see the Oracle Rdb7 Guide to
Distributed Transactions.
14.2.5 – Usage Notes
o The First and Last qualifiers have been added to make
dumping portions of the .aij file easier. The Start and End
qualifiers were intended to provide similar functionality,
but are difficult to use because you seldom know, and cannot
easily determine, the AIJ record number before issuing the
command.
o Be careful when searching for TSNs or TIDs because they are
not ordered in the AIJ journal. If you want to search for a
specific TSN, use the Only qualifier rather than the First
and Last qualifiers. For example, assume the AIJ journal
contains records for TSN 150, 170, and 160 (in that order).
If you specify the First=TSN=160 and Last=TSN=160 qualifiers,
nothing will be dumped because TSN 170 will match the
Last=TSN=160 criterion.
o To use the RMU Dump After_Journal command for an .aij file,
you must have the RMU$DUMP privilege in the root file access
control list (ACL) for the database or the OpenVMS SYSPRV or
BYPASS privilege.
o You receive a file access error message regarding the
database's .aij file if you issue the RMU Dump After_Journal
command with the active .aij file when there are active
processes updating the database. To avoid the file access
error message, use the RMU Close command to close the database
(which stops entries to the .aij file), then issue the RMU
Dump After_Journal command.
o See the Oracle Rdb Guide to Database Maintenance for
information on the steps Oracle RMU follows for tape label
checking when you execute an RMU Dump After_Journal command
using magnetic tapes.
o Use of the wrong value for the Format qualifier typically
results in a failure, but sometimes may produce unintelligible
results.
o The RMU Dump After_Journal command does not validate the file
being dumped. If the file is not an .aij file or a backup
of an .aij file, the RMU Dump After_Journal command produces
unintelligible output.
14.2.6 – Examples
Example 1
The following command generates a list of records associated with
unresolved transactions in the .aij file:
$ RMU/DUMP/AFTER_JOURNAL/STATE=PREPARED PERSONNEL.AIJ
Example 2
The following example shows the value to specify with the Aij_
Buffers qualifier along with information on how frequently each
page is accessed. The output from this example shows that you
should specify the Aij_Buffers=29 qualifier when you recover
aij_one.aij. In addition, it shows that pages 1:623-625 were
referenced 37 times, which means that 8.9% of all data records in
the dumped after-image journal file reference these pages.
$ RMU/DUMP/AFTER_JOURNAL/OPTION=STATISTICS aij_one.aij
.
.
.
Use "/AIJ_BUFFERS=29" when recovering this AIJ journal
1 recovery buffer referenced 37 times (1:623-625): 8.9%
1 recovery buffer referenced 23 times (4:23-25): 5.5%
1 recovery buffer referenced 22 times (4:5-7): 5.3%
1 recovery buffer referenced 21 times (4:44-46): 5.0%
1 recovery buffer referenced 20 times (4:50-52): 4.8%
1 recovery buffer referenced 19 times (4:41-43): 4.6%
2 recovery buffers referenced 18 times (4:38-40): 8.7%
1 recovery buffer referenced 17 times (4:17-19): 4.1%
1 recovery buffer referenced 16 times (4:29-31): 3.8%
2 recovery buffers referenced 15 times (4:35-37): 7.2%
1 recovery buffer referenced 14 times (4:2-4): 3.3%
2 recovery buffers referenced 13 times (4:11-13): 6.3%
3 recovery buffers referenced 12 times (4:8-10): 8.7%
2 recovery buffers referenced 11 times (5:2-4): 5.3%
4 recovery buffers referenced 10 times (4:14-16): 9.7%
1 recovery buffer referenced 9 times (4:47-49): 2.1%
2 recovery buffers referenced 8 times (1:617-619): 3.8%
1 recovery buffer referenced 6 times (4:20-22): 1.4%
1 recovery buffer referenced 2 times (1:503-505): 0.4%
Journal effectiveness: 97.3%
175 data records
412 data modification records
423 total modification records
2 commit records
3 rollback records
See the Oracle Rdb Guide to Database Maintenance and the Oracle
Rdb7 Guide to Distributed Transactions for more examples of the
RMU Dump After_Journal command.
Example 3
The following example shows how to start a dump from Block 100 or
TSN 52, whichever occurs first.
$ RMU/DUMP/AFTER_JOURNAL /FIRST=(BLOCK=100,TSN=0:52) mf_personnel.aij
Example 4
This example shows how to dump committed records only.
$ RMU/DUMP/AFTER_JOURNAL /ONLY=(TYPE=COMMIT) mf_personnel.aij
Example 5
This example shows the dump output when you specify an area, a
page, and a line.
RMU/DUMP/AFTER_JOURNAL/AREA=3/PAGE=560/LINE=1 mf_personnel.aij
*-----------------------------------------------------------------------------
* Oracle Rdb X7.1-00                                 3-NOV-2005 10:42:23.56
*
* Dump of After Image Journal
* Filename: DEVICE:[DIRECTORY]MF_PERSONNEL.AIJ;1
*
*-----------------------------------------------------------------------------
2/4 TYPE=D, LENGTH=122, TAD= 3-NOV-2005 10:31:12.56, CSM=00
TID=6, TSN=0:640, AIJBL_START_FLG=01, FLUSH=00, SEQUENCE=1
MODIFY: PDBK=3:560:1, LDBID=0, PSN=0, FLAGS=00, LENGTH=84
0022 0000 line 1 (3:560:1) record type 34
00 0001 0002 Control information
.... 79 bytes of static data
86726576696C6F54343631303000010D 0005 data '...00164Toliver.'
5020363431411120846E69766C410420 0015 data ' .Alvin. .A146 P'
009820876563616C50206C6C656E7261 0025 data 'arnell Place. ..'
3330484E12208B6175726F636F684307 0035 data '.Chocorua. .NH03'
20F03100630F72B31C00004D373138 0045 data '817M...³r.c.1ð '
2/6 TYPE=D, LENGTH=224, TAD= 3-NOV-2005 10:31:12.56, CSM=00
TID=6, TSN=0:641, AIJBL_START_FLG=01, FLUSH=00, SEQUENCE=3
MODIFY: PDBK=3:560:1, LDBID=0, PSN=1, FLAGS=00, LENGTH=84
0022 0000 line 1 (3:560:1) record type 34
00 0001 0002 Control information
.... 79 bytes of static data
86726576696C6F54343631303000010D 0005 data '...00164Toliver.'
5020363431411120846E69766C410420 0015 data ' .Alvin. .A146 P'
009820876563616C50206C6C656E7261 0025 data 'arnell Place. ..'
3330484E12208B6175726F636F684307 0035 data '.Chocorua. .NH03'
20F03100630F72B31C00004D373138 0045 data '817M...³r.c.1ð '
3/9 TYPE=D, LENGTH=330, TAD= 3-NOV-2005 10:31:12.73, CSM=00
TID=6, TSN=0:642, AIJBL_START_FLG=01, FLUSH=00, SEQUENCE=5
MODIFY: PDBK=3:560:1, LDBID=0, PSN=2, FLAGS=00, LENGTH=84
0022 0000 line 1 (3:560:1) record type 34
00 0001 0002 Control information
.... 79 bytes of static data
86726576696C6F54343631303000010D 0005 data '...00164Toliver.'
5020363431411120846E69766C410420 0015 data ' .Alvin. .A146 P'
009820876563616C50206C6C656E7261 0025 data 'arnell Place. ..'
3330484E12208B6175726F636F684307 0035 data '.Chocorua. .NH03'
20F03100630F72B31C00004D373138 0045 data '817M...³r.c.1ð '
Use "/AIJ_BUFFERS=3" when recovering this AIJ journal.
Make sure you have enough working set and pagefile quota
for the recommended number of buffers.
1 recovery buffer referenced 3 times (3:559-561): 50.0%
1 recovery buffer referenced 2 times (3:436-438): 33.3%
1 recovery buffer referenced 1 time (3:134-136): 16.6%
Journal effectiveness: 54.5%
3 data records
6 data modification records
11 total modification records
3 commit records
14.3 – Backup File
Displays or writes to a specified output file the contents of a
backup file. Use this command to examine the contents of a backup
(.rbf) file created by the RMU Backup command.
14.3.1 – Description
The RMU Dump Backup_File command reads an .rbf file and displays
the contents. It uses an .rbf file, not a database file, as its
parameter, and is a separate command from the RMU Dump command.
The output captures unrecoverable media errors and indicates if
there are unknown backup blocks on tape. This command can be used
to confirm that a backup file is formatted correctly and that the
media is readable for the RMU Restore command.
NOTE
Successful completion of this command does not guarantee
that data in a backup file is uncorrupt, nor that the backup
file is complete, nor that a restore operation will succeed.
Use the Root, Full, or Debug option to the Option qualifier to
dump the database backup header information. The database backup
header information includes the name of the backup file and
the "Backup file database version". The "Backup file database
version" is the version of Oracle Rdb that was executing at
the time the backup file was created. The "Oracle Rdb structure
level" listed in the section entitled "Database Parameters" is
the currently executing version of Oracle Rdb.
The backup header information is contained on the first volume of
a database backup file on tape.
14.3.2 – Format
RMU/Dump/Backup_File backup-file-spec

Command Qualifiers                         Defaults

/Active_IO=max-reads                       /Active_IO=3
/Area=identity                             None
/Disk_File=[(Reader_Threads=n)]            /Disk_File=(Reader_Threads=1)
/Encrypt=({Value=|Name=}[,Algorithm=])     See description
/End=integer                               See description
/Journal=file-name                         See description
/Label=(label-name-list)                   See description
/Librarian[=options]                       None
/[No]Media_Loader                          See description
/Options=options-list                      See description
/Output=file-name                          /Output=SYS$OUTPUT
/Process=process-list                      See description
/Prompt={Automatic|Operator|Client}        See description
/Restore_Options=file-name                 None
/[No]Rewind                                /Norewind
/Skip=skip-list                            See description
/Start=integer                             See description
14.3.3 – Parameters
14.3.3.1 – backup-file-spec
A file specification for the backup file. The default file type
is .rbf.
If you use multiple tape drives, the backup-file-spec parameter
must include the tape device specifications. Separate the device
specifications with commas. For example:
$ RMU/DUMP/BACKUP_FILE $111$MUA0:PERS_FULL.rbf,$112$MUA1: -
_$ /LABEL=BACK01
When multiple volume tape files are processed, Oracle RMU
dismounts and unloads all but the last volume containing the
file, which is the customary practice for multiple volume tape
files. See the Oracle Rdb Guide to Database Maintenance for more
information on using multiple tape drives.
14.3.4 – Command Qualifiers
14.3.4.1 – Active IO
Active_IO=max-reads
Specifies the maximum number of read operations from the
backup file that the RMU Dump Backup_File command will attempt
simultaneously. The value of the Active_IO qualifier can range
from 1 to 5. The default value is 3. Values larger than 3 might
improve performance with multiple tape drives.
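For example, the following command permits up to five simultaneous
read operations (the backup file name is illustrative):
$ RMU/DUMP/BACKUP_FILE/ACTIVE_IO=5 MFP.RBF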
14.3.4.2 – Area
Area=identity
Only dump the storage area identified by the specified name or
ID number. The area name must be the name of a storage area in
the database root file and the area ID number must be a storage
area ID number in the database root file. This information is
contained in the "Database Parameters:" section of the backup
file which is output at the start of the dump. Snapshot areas are
not contained in the backup file and cannot be specified. If this
qualifier is used without the /START and /END qualifiers, all
page records in the specified storage area will be output.
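For example, the following command dumps only the page records for
the EMPIDS_LOW storage area (the backup file name is illustrative):
$ RMU/DUMP/BACKUP_FILE/AREA=EMPIDS_LOW/OPTIONS=FULL MFP.RBF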
14.3.4.3 – Disk File
Disk_File=[(Reader_Threads=integer)]
Specifies that you want to dump a multiple disk backup file. This
is a backup file that was created by the RMU Backup command with
the Disk_File qualifier.
The Reader_Threads keyword specifies the number of threads that
Oracle RMU should use when performing a multithreaded read
operation from disk files. You can specify no more than one
reader thread per device specified on the command line (or in the
command parameter options file). By default, one reader thread is
used.
This qualifier and all qualifiers that control tape operations
(Label, Media_Loader, and Rewind) are mutually exclusive.
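For example, the following command dumps a backup file created
with the Disk_File qualifier, using two reader threads (the device
and file names are illustrative):
$ RMU/DUMP/BACKUP_FILE/DISK_FILE=(READER_THREADS=2) DISK1:[BCK]MFP.RBF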
14.3.4.4 – Encrypt
Encrypt=({Value=|Name=}[,Algorithm=])
Specifies a key value as a string or the name of a predefined
key. If no algorithm name is specified, the default is DESCBC.
For details on the Value, Name, and Algorithm parameters, see HELP
ENCRYPT.
This feature requires the OpenVMS Encrypt product to be installed
and licensed on this system.
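For example, assuming the backup file was created with the same
key (the key value and file name are illustrative):
$ RMU/DUMP/BACKUP_FILE/ENCRYPT=(VALUE="MySecretKey",ALGORITHM=DESCBC) -
_$ MFP.RBF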
14.3.4.5 – End
End=integer
Only dump pages ending with the specified page number in the
specified storage area. This qualifier cannot be used unless
the /AREA qualifier is also specified. If no pages are dumped,
either the specified page or range of pages does not exist in
the specified area in the backup file, or this qualifier has been
used in the same RMU/DUMP/BACKUP command as an /OPTIONS, /SKIP or
/PROCESS qualifier option that has excluded the specified page or
range of pages from the dump. If this qualifier is not used with
the /START qualifier, all page records in the specified storage
area ending with the specified page number will be output.
If both the /START and /END qualifiers are specified, the
starting page number must be less than or equal to the ending
page number. If the starting page number equals the ending page
number only the page records for the specified page number are
dumped. The block header for each block which contains at least
one of the requested pages is dumped followed by the requested
page records in that block. The START AREA record is dumped at
the start of requested page records and the END AREA record is
dumped at the end of the requested page records. By default, the
database root parameters are dumped at the very start following
the dump header.
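For example, the following command dumps the page records for
pages 20 through 30 of storage area 2 (the backup file name is
illustrative):
$ RMU/DUMP/BACKUP_FILE/AREA=2/START=20/END=30 MFP.RBF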
14.3.4.6 – Journal
Journal=file-name
Allows you to improve tape performance during the dump backup
file operation by specifying the journal file created by the RMU
Backup command with the Journal qualifier.
The RMU Backup command with the Journal qualifier creates the
journal file and writes to it a description of the backup
operation, including identification of the tape volumes, their
contents, and the tape drive name.
The Journal qualifier directs the RMU Dump Backup_File command
to read the journal file and identify the tape volumes when the
Label qualifier is not specified.
The journal file must be the one created at the time the backup
operation was performed. If the wrong journal file is supplied,
an informational message is generated, and the specified journal
file is not used to identify the volumes to be processed.
14.3.4.7 – Label
Label=(label-name-list)
Specifies the 1- to 6-character string with which the volumes
of the backup file have been labeled. The Label qualifier is
applicable only to tape volumes. You must specify one or more
label names when you use the Label qualifier.
You can specify a list of tape labels for multiple tapes. If you
list multiple tape label names, separate the names with commas,
and enclose the list of names within parentheses.
In a normal dump backup operation, the Label qualifier you
specify with the RMU Dump Backup_File command should be the same
Label qualifier as you specified with the RMU Backup command that
backed up your database.
If no label is specified, the system will internally generate one
consisting of the first six characters in the backup-file-spec
parameter.
See the Oracle Rdb Guide to Database Maintenance for information
on tape label processing.
The Label qualifier can be used with indirect file references.
See Indirect-Command-Files for more information.
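For example, the following command identifies two tape volumes
that hold the backup file (the devices, file name, and labels are
illustrative):
$ RMU/DUMP/BACKUP_FILE $111$MUA0:PERS_FULL.RBF,$112$MUA1: -
_$ /LABEL=(BACK01,BACK02)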
14.3.4.8 – Librarian
Librarian=options
Use the Librarian qualifier to restore files from data archiving
software applications that support the Oracle Media Management
interface. The file name specified on the command line identifies
the stream of data to be retrieved from the Librarian utility. If
you supply a device specification or a version number it will be
ignored.
Oracle RMU supports retrieval using the Librarian qualifier only
for data that has been previously stored by Oracle RMU using the
Librarian qualifier.
The Librarian qualifier accepts the following options:
o Reader_Threads=n
Use the Reader_Threads option to specify the number of backup
data streams to read from the Librarian utility. The value of
n can be from 1 to 99. The default is one reader thread. The
streams are named BACKUP_FILENAME.EXT, BACKUP_FILENAME.EXT02,
BACKUP_FILENAME.EXT03, up to BACKUP_FILENAME.EXT99. BACKUP_
FILENAME.EXT is the backup file name specified in the RMU
Backup command.
The number of reader threads specified for a database restore
from the Librarian utility should be equal to or less than the
number of writer threads specified for the database backup.
If the number of reader threads exceeds the number of writer
threads, the number of reader threads is set by Oracle RMU
to be equal to the number of data streams actually stored
in the Librarian utility by the backup. If the number of
reader threads specified for the restore is less than the
number of writer threads specified for the backup, Oracle RMU
will partition the data streams among the specified reader
threads so that all data streams representing the database are
restored.
The Volumes qualifier cannot be used with the Librarian
qualifier. Oracle RMU sets the volume number to be the actual
number of data streams stored in the specified Librarian
utility.
o Trace_file=file-specification
The Librarian utility writes trace data to the specified file.
o Level_Trace=n
Use this option as a debugging tool to specify the level of
trace data written by the Librarian utility. You can use a
pre-determined value of 0, 1, or 2, or a higher value defined
by the Librarian utility. The pre-determined values are:
- Level 0 traces all error conditions. This is the default.
- Level 1 traces the entry and exit from each Librarian
function.
- Level 2 traces the entry and exit from each Librarian
function, the value of all function parameters, and the
first 32 bytes of each read/write buffer, in hexadecimal.
o Logical_Names=(logical_name=equivalence-value,...)
You can use this option to specify a list of process logical
names that the Librarian utility can use to specify catalogs
or archives where Oracle Rdb backup files are stored,
Librarian debug logical names, and so on. See the specific
Librarian documentation for the definition of logical names.
The list of process logical names is defined by Oracle RMU
prior to the start of any Oracle RMU command that accesses the
Librarian utility.
The following OpenVMS logical names must be defined for use with
a Librarian utility before you execute an Oracle RMU backup or
restore operation. Do not use the Logical_Names option provided
with the Librarian qualifier to define these logical names.
o RMU$LIBRARIAN_PATH
This logical name must be defined so that the shareable
Librarian image can be loaded and called by Oracle RMU backup
and restore operations. The translation must include the file
type (for example, .exe), and must not include a version
number. The shareable Librarian image must be an installed
(known) image. See the Librarian implementation documentation
for the name and location of this image and how it should be
installed. For a parallel RMU backup, define RMU$LIBRARIAN_
PATH as a system-wide logical name so that the multiple
processes created by a parallel backup can all translate the
logical.
$ DEFINE /SYSTEM /EXECUTIVE_MODE -
_$ RMU$LIBRARIAN_PATH librarian_shareable_image.exe
o RMU$DEBUG_SBT
This logical name is not required. If it is defined, Oracle
RMU will display debug tracing information messages from
modules that make calls to the Librarian shareable image.
For a parallel RMU backup, the RMU$DEBUG_SBT logical should
be defined as a system logical so that the multiple processes
created by a parallel backup can all translate the logical.
The following lines are from a backup plan file created by the
RMU Backup/Parallel/Librarian command:
Backup File = MF_PERSONNEL.RBF
Style = Librarian
Librarian_trace_level = #
Librarian_logical_names = (-
logical_name_1=equivalence_value_1, -
logical_name_2=equivalence_value_2)
Writer_threads = #
The "Style = Librarian" entry specifies that the backup is going
to a Librarian utility. The "Librarian_logical_names" entry is
a list of logical names and their equivalence values. This is an
optional parameter provided so that any logical names used by a
particular Librarian utility can be defined as process logical
names before the backup or restore operation begins. For example,
some Librarian utilities provide support for logical names for
specifying catalogs or debugging.
You cannot use device-specific qualifiers such as Rewind,
Density, or Label with the Librarian qualifier because the
Librarian utility, not Oracle RMU, handles the storage media.
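For example, the following command reads a backup stored through a
Librarian utility using two reader threads and writes trace data
to a file (the file names are illustrative):
$ RMU/DUMP/BACKUP_FILE/LIBRARIAN=(READER_THREADS=2, -
_$ TRACE_FILE=LIB_TRACE.LOG) MFP.RBF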
14.3.4.9 – Media Loader
Media_Loader
Nomedia_Loader
Use the Media_Loader qualifier to specify that the tape device
from which the backup file is being read has a loader or stacker.
Use the Nomedia_Loader qualifier to specify that the tape device
does not have a loader or stacker.
By default, Oracle RMU should recognize whether a tape device
has a loader or stacker. However, occasionally Oracle RMU does
not recognize that a tape device has a loader or stacker; in that
case, when the first tape has been read, Oracle RMU issues a
request to the operator for the next tape instead of requesting
the next tape from the loader or stacker. Similarly, Oracle RMU
sometimes behaves as though a tape device has a loader or stacker
when it actually does not.
If you find that Oracle RMU is not recognizing that your tape
device has a loader or stacker, specify the Media_Loader
qualifier. If you find that Oracle RMU expects a loader or
stacker when it should not, specify the Nomedia_Loader qualifier.
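For example, if Oracle RMU prompts the operator for the next tape
even though the drive has a stacker, you might enter (the device,
file name, and label are illustrative):
$ RMU/DUMP/BACKUP_FILE/MEDIA_LOADER $111$MUA0:PERS_FULL.RBF -
_$ /LABEL=BACK01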
14.3.4.10 – Options
Options=options-list
Specifies the type of information and level of detail the output
will include. If you do not specify the Options qualifier or if
you specify the Options=Normal qualifier, the backup file will
be read, but dump output is not generated. This is useful for
confirming that the backup file is structured correctly and
the media is readable for the RMU Restore command. However,
this command does not indicate if the data in a backup file is
corrupted, nor does it guarantee that a restore operation will
succeed.
If you specify more than one option, you must separate the
options with a comma, and enclose the options-list parameter
within parentheses. Eight types of output are available:
o Records
Dumps the backup file record structure.
o Blocks
Dumps the backup file block structure.
o Data
The Data option can be used with either the Records option,
the Blocks option, or both. When specified with the Records
and Blocks options, the Data option dumps the contents of the
backup file's records and blocks. When you do not specify the
Data option, the Records and Blocks options dump the backup
file's record structure and block structure only, not their
contents.
o Journal
Dumps the contents of the journal file.
Use the Journal option of the RMU Dump Backup_File command to
direct Oracle RMU to dump the journal file created with the
RMU Backup command with the Journal qualifier. The RMU Backup
command with the Journal qualifier creates a journal file
to which it writes a description of the backup operation,
including identification of the tape volumes and their
contents. You can use the output of the RMU Dump Backup_File
with the Journal qualifier to identify the contents of each of
the tapes that comprises the backup file.
o Root
Dumps the database root file contents as recorded in the
backup file. This includes a dump of the database backup
header information.
o Normal
The backup file will be read, but no dump output is generated.
This is useful to verify the integrity of the backup file
format and to detect media errors.
o Full
Specifying the Full option is the same as specifying the Root,
Records, and Blocks options, and includes a dump of the database
backup header information. The contents of the backup file's
record structure and block structure are not dumped when the
Full option is specified.
o Debug
Specifying the Debug option is the same as specifying the
Root, Records, Blocks, Full, and Data options. The contents
of the backup file's header, record structure, and block
structure are dumped when the Debug option is specified.
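For example, the following command dumps the record and block
structure together with their contents (the backup file name is
illustrative):
$ RMU/DUMP/BACKUP_FILE/OPTIONS=(RECORDS,BLOCKS,DATA) MFP.RBF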
14.3.4.11 – Output
Output=file-name
Specifies the name of the file where output will be sent. The
default is SYS$OUTPUT. The default output file type is .lis, if
you specify a file name.
14.3.4.12 – Process
Process=process-list
Specifies a list of keywords that determines how much of the
backup file is to be dumped. If you specify more than one type
of process-list option, separate the options with a comma, and
enclose the process-list parameter within parentheses. You can
specify the following three items in the process-list parameter:
o Volumes=integer
The number of volumes to dump, starting at the position
specified in the Skip qualifier for volumes. This option is
ignored if the backup file does not reside on tape.
o Blocks=integer
The number of blocks to dump, starting at the position
specified in the Skip qualifier for blocks. This option is
ignored if the backup file does not reside on tape.
o Records=integer
The number of records to dump, starting at the position
specified in the Skip qualifier for records. This option is
valid regardless of whether the backup file resides on tape or
disk.
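For example, the following command dumps only the first 100
records of the backup file (the file name is illustrative):
$ RMU/DUMP/BACKUP_FILE/OPTIONS=RECORDS/PROCESS=(RECORDS=100) MFP.RBF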
14.3.4.13 – Prompt
Prompt=Automatic
Prompt=Operator
Prompt=Client
Specifies where server prompts are to be sent. When you specify
Prompt=Automatic, prompts are sent to the standard input device,
and when you specify Prompt=Operator, prompts are sent to the
server console. When you specify Prompt=Client, prompts are sent
to the client system.
14.3.4.14 – Restore Options
Restore_Options=file-name
Generates an options file designed to be used with the Options
qualifier of the RMU Restore command.
The Restore_Options file is created after the root information
has been read from the backup file.
By default, a Restore_Options file is not created. If you
specify the Restore_Options qualifier and a file, but not a file
extension, Oracle RMU uses an extension of .opt by default.
14.3.4.15 – Rewind
Rewind
Norewind
Specifies that the magnetic tape that contains the backup file
will be rewound before processing begins. The Norewind qualifier
is the default.
The Rewind and Norewind qualifiers are applicable only to tape
devices. You should use these qualifiers only when the target
device is a tape device.
See the Oracle Rdb Guide to Database Maintenance for information
on tape label processing.
14.3.4.16 – Skip
Skip=skip-list
Specifies a list of keywords that determines where the output
display begins. The keywords indicate the position in the backup
file from which to start the dump. If you specify more than one
type of Skip position, separate the options with a comma, and
enclose the skip-list parameter in parentheses. You can specify
the following three items in the skip-list parameter:
o Volumes=integer
The number of volumes to ignore before starting. This option
is ignored if the backup file does not reside on tape.
o Blocks=integer
The number of blocks to ignore before starting. This option is
ignored if the backup file does not reside on tape.
o Records=integer
The number of records to ignore before starting. This option
is valid regardless of whether the backup file resides on tape
or disk.
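For example, the following command skips the first 50 records and
then dumps the next 10 (the file name is illustrative):
$ RMU/DUMP/BACKUP_FILE/OPTIONS=RECORDS/SKIP=(RECORDS=50) -
_$ /PROCESS=(RECORDS=10) MFP.RBF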
14.3.4.17 – Start
Start=integer
Only dump pages starting with the specified page number in the
specified storage area. This qualifier cannot be used unless
the /AREA qualifier is also specified. If no pages are dumped,
either the specified page or range of pages does not exist in
the specified area in the backup file, or this qualifier has been
used in the same RMU/DUMP/BACKUP command as an /OPTIONS, /SKIP or
/PROCESS qualifier option that has excluded the specified page or
range of pages from the dump. If this qualifier is not used with
the /END qualifier, all page records in the specified storage
area starting with the specified page number will be output.
If both the /START and /END qualifiers are specified, the
starting page number must be less than or equal to the ending
page number. If the starting page number equals the ending page
number only the page records for the specified page number are
dumped. The block header for each block which contains at least
one of the requested pages is dumped followed by the requested
page records in that block. The START AREA record is dumped at
the start of requested page records and the END AREA record is
dumped at the end of the requested page records. By default, the
database root parameters are dumped at the very start following
the dump header.
14.3.5 – Usage Notes
o To use the RMU Dump Backup_File command for a database, you
must have the RMU$DUMP, RMU$BACKUP, or RMU$RESTORE privileges
in the root file access control list (ACL) for the database or
the OpenVMS BYPASS privilege.
You must also have read access to the .rbf file.
o If you do not specify the Options qualifier or if you specify
the Options=Normal qualifier, the backup file will be read,
but dump output will not be generated. This is useful to
verify the backup file integrity and to detect media errors.
o See the Oracle Rdb Guide to Database Maintenance for examples
that show the RMU Dump Backup_File command.
14.3.6 – Examples
Example 1
The following commands show the use of the Journal qualifier
with the RMU Backup command and the RMU Dump After_Journal
command. The first command creates a binary journal file that
identifies the tapes used in the backup operation. The second
command directs Oracle RMU to read the backup file (using the
tapes identified in the BACKUP_JOURNAL.JNL file) to confirm that
the backup file is structured correctly and the media is readable
for the RMU Restore command. No dump output is generated because
the Option qualifier is not specified.
$ RMU/BACKUP MF_PERSONNEL.RDB -
_$ $222$DUA20:[BCK]MF_PERSONNEL.RBF/LOG/JOURNAL=BACKUP_JOURNAL.JNL
$ RMU/DUMP/BACKUP_FILE $222$DUA20:[BCK]MF_PERSONNEL.RBF -
_$ /JOURNAL=BACKUP_JOURNAL.JNL
Example 2
The following commands show the use of the Journal qualifier with
the RMU Backup command and then with the RMU Dump Backup command.
The first command creates a binary journal file that identifies
the tapes used in the backup operation. The second command dumps
the binary journal file created in the first command in ASCII
format.
$ RMU/BACKUP MF_PERSONNEL.RDB -
_$ $222$DUA20:[BCK]MF_PERSONNEL.RBF/LOG/JOURNAL=BACKUP_JOURNAL.JNL
$ RMU/DUMP/BACKUP_FILE $222$DUA20:[BCK]MF_PERSONNEL.RBF -
_$ /JOURNAL=BACKUP_JOURNAL.JNL/OPTION=JOURNAL
Example 3
The following example demonstrates the use of the Restore_Options
qualifier. The first command performs a dump operation on the
backup file of the mf_personnel database and creates a Restore_
Options file. The second command shows a portion of the contents
of the options file. The last command demonstrates the use of the
options file with the RMU Restore command.
$ RMU/DUMP/BACKUP MFP.RBF /RESTORE_OPTIONS=MFP.OPT -
_$ /OPTIONS=NORMAL/OUTPUT=DUMP.LIS
$ TYPE MFP.OPT
! Options file for database DISK1:[DB]MF_PERSONNEL.RDB;1
! Created 17-OCT-1995 13:09:57.56
! Created by DUMP BACKUP command
RDB$SYSTEM -
/file=DISK2:[RDA]MF_PERS_DEFAULT.RDA;1 -
/extension=ENABLED -
/read_write -
/spams -
/snapshot=(allocation=248, -
file=DISK3:[SNAP]MF_PERS_DEFAULT.SNP;1)
EMPIDS_LOW -
/file=DISK3:[RDA]EMPIDS_LOW.RDA;1 -
/blocks_per_page=2 -
/extension=ENABLED -
/read_write -
/spams -
/thresholds=(70,85,95) -
/snapshot=(allocation=10, -
file=DISK4:[SNAP]EMPIDS_LOW.SNP;1)
.
.
.
$ RMU/RESTORE MFP.RBF/OPTIONS=MFP.OPT
Example 4
The following example shows the dump of the page records for page
10 in storage area 4 in the MFP.RBF backup file. Since the /START
and /END qualifiers both specify page 10, only the page records
for that page are dumped. At the start of the dump is the dump
header, followed by the database root parameters (not shown, to
save space), the block header (beginning with the "HEADER_SIZE"
field) for the block that contains the records for page 10 in
storage area 4, the start area record for area 4 (REC_TYPE = 6),
the data page header record (REC_TYPE = 7) for page 10, the data
page data record (REC_TYPE = 8) for page 10, and the end area
record (REC_TYPE = 11), which ends the dump.
$ RMU/DUMP/BACKUP/AREA=4/START=10/END=10/OPTION=FULL MFP.RBF
*------------------------------------------------------------------------------
* Oracle Rdb V7.2-420 11-JAN-2011 15:50:09.25
*
* Dump of Database Backup Header
* Backup filename: MFP.RBF
* Backup file database version: 7.2
*
*------------------------------------------------------------------------------
Database Parameters:
.
.
.
HEADER_SIZE = 80 OS_ID = 1024 UTILITY_ID = 722
APPLICATION_TYPE = 1 SEQUENCE_NUMBER = 22 MAJ_VER = 1 MIN_VER = 1
VOL_NUMBER = 1 BLOCK_SIZE = 32256 CRC = 0C5D3A78 NOCRC = 00
CRC_ALTERNATE = 00 BACKUP_NAME = MFP.RBF AREA_ID = 4 HIGH_PNO = 259
LOW_PNO = 1 HDR_CHECKSUM = 9B3D
REC_SIZE = 2 REC_TYPE = 6 BADDATA = 00 ROOT = 00
AREA_ID = 4 LAREA_ID = 0 PNO = 0
REC_SIZE = 32 REC_TYPE = 7 BADDATA = 00 ROOT = 00
AREA_ID = 4 LAREA_ID = 0 PNO = 10
REC_SIZE = 28 REC_TYPE = 8 BADDATA = 00 ROOT = 00
AREA_ID = 4 LAREA_ID = 0 PNO = 10
REC_SIZE = 512 REC_TYPE = 11 BADDATA = 00 ROOT = 00
AREA_ID = 4 LAREA_ID = 0 PNO = 0
Example 5
The following example dumps the records for pages 10, 11 and
12 in the RDB$SYSTEM storage area in the MFP.RBF backup file.
Following the block header (which starts with "HEADER_SIZE =")
for the block containing the target records are the start area
record for RDB$SYSTEM area 1 (REC_TYPE = 6), the target ABM page
records for pages 10, 11, and 12 (REC_TYPE = 10), and finally
the end area record for RDB$SYSTEM area 1 (REC_TYPE = 11), which
ends the dump.
$ RMU/DUMP/BACKUP/AREA=RDB$SYSTEM/START=10/END=12/OPTION=FULL MFP.RBF
*------------------------------------------------------------------------------
* Oracle Rdb V7.2-420 14-JAN-2011 14:40:46.88
*
* Dump of Database Backup Header
* Backup filename: MFP.RBF
* Backup file database version: 7.2
*
*------------------------------------------------------------------------------
Database Parameters:
.
.
.
HEADER_SIZE = 80 OS_ID = 1024 UTILITY_ID = 722
APPLICATION_TYPE = 1 SEQUENCE_NUMBER = 1 MAJ_VER = 1 MIN_VER = 1
VOL_NUMBER = 1 BLOCK_SIZE = 32256 CRC = 8329C24B NOCRC = 00
CRC_ALTERNATE = 00 BACKUP_NAME = MFP.RBF AREA_ID = 1 HIGH_PNO = 178
LOW_PNO = 1 HDR_CHECKSUM = 40DE
REC_SIZE = 2 REC_TYPE = 6 BADDATA = 00 ROOT = 00 AREA_ID = 1
LAREA_ID = 0 PNO = 0
REC_SIZE = 10 REC_TYPE = 10 BADDATA = 00 ROOT = 00 AREA_ID = 1
LAREA_ID = 3 PNO = 10
REC_SIZE = 10 REC_TYPE = 10 BADDATA = 00 ROOT = 00 AREA_ID = 1
LAREA_ID = 4 PNO = 11
REC_SIZE = 10 REC_TYPE = 10 BADDATA = 00 ROOT = 00 AREA_ID = 1
LAREA_ID = 4 PNO = 12
REC_SIZE = 512 REC_TYPE = 11 BADDATA = 00 ROOT = 00 AREA_ID = 1
LAREA_ID = 0 PNO = 0
14.4 – Export
Displays the contents of an export interchange (.rbr) file or a
formatted .unl file created by the RMU Unload command. This is a
useful debugging tool.
14.4.1 – Format
RMU/Dump/Export export-file

Command Qualifiers                         Defaults

/[No]Data                                  /Data
/[No]Options[=options-list]                /Nooptions
/Output=file-name                          /Output=SYS$OUTPUT
14.4.2 – Parameters
14.4.2.1 – export-file
The .rbr file or formatted .unl file to be displayed.
14.4.3 – Command Qualifiers
14.4.3.1 – Data
Data
Nodata
The Data qualifier specifies that the contents of segmented
strings and tables are to be displayed in hexadecimal format
along with the ASCII translation. Specifying the Nodata qualifier
excludes the contents of segmented strings and tables from the
display and generates much less output.
The default is the Data qualifier.
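For example, the following command displays the structure of the
interchange file without the contents of segmented strings and
tables (the file name is illustrative):
$ RMU/DUMP/EXPORT/NODATA EMPLOYEES.UNL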
14.4.3.2 – Options
Options=option-list
The Options qualifier allows the user to modify the output from
the RMU Dump Export command.
If you specify more than one option, you must separate the
options with a comma and enclose the options-list parameter
within parentheses.
- ALLOCATION
When importing databases for testing, the full allocation
recorded in the interchange file is often not required. The
clauses ALLOCATION and SNAPSHOT ALLOCATION are controlled by
this option. The default is ALLOCATION. Use NOALLOCATION to
omit these clauses from the generated SQL script. This option
is ignored if NOIMPORT_DATABASE is specified or defaulted for
the OPTIONS qualifier.
- FILENAME_ONLY
When importing databases for testing, the full file
specification for the database root, storage areas and
snapshot areas recorded in the interchange file is often
not required. The FILENAME clauses are controlled by this
option which trims the specification to only the filename
portion. The default is NOFILENAME_ONLY. Use FILENAME_ONLY to
truncate the file specification in the generated SQL script.
This option is ignored if NOIMPORT_DATABASE is specified or
defaulted for the OPTIONS qualifier.
- HEADER_SECTION
This option allows the database administrator to display just
the header portion of the interchange file and avoid dumping
the data or metadata for every row in the table.
- IMPORT_DATABASE
This keyword requests that the output from RMU Dump Export
be formatted as a SQL IMPORT DATABASE statement. It uses
the database attributes present in the interchange file
formatted as SQL clauses. Of particular interest are the
CREATE STORAGE AREA clauses which are required to IMPORT the
source interchange (.rbr) file.
The keyword HEADER_SECTION is implicitly selected when IMPORT_
DATABASE is used, limiting the I/O to the interchange file to
the section containing the database attributes.
The default is NOIMPORT_DATABASE.
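For example, the following command generates an IMPORT DATABASE
script with trimmed file specifications and without allocation
clauses (the file names are illustrative):
$ RMU/DUMP/EXPORT/OPTIONS=(IMPORT_DATABASE,FILENAME_ONLY,NOALLOCATION) -
_$ PERSONNEL.RBR/OUTPUT=IMPORT_PERSONNEL.SQL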
14.4.3.3 – Output
Output=file-name
Specifies the name of the file where output is sent. The default
is SYS$OUTPUT. The default output file type is .lis, if you
specify a file name.
14.4.4 – Usage Notes
o You do not need Oracle RMU privileges to use the RMU Dump
Export command. However, you must have OpenVMS read access to
the .rbr or .unl file, or OpenVMS BYPASS privilege.
o If the source interchange file is created by RMU Unload, then
it does not contain any IMPORT DATABASE information and the
generated SQL script cannot be used to create a database from
such an interchange file.
$ RMU/DUMP/EXPORT/OPTION=IMPORT_DATABASE EMPLOYEES.UNL/OUT=EMP.SQL
$ SQL$ @EMP.SQL
SQL> IMPORT DATABASE
cont> from 'DISK1:[TESTING]EMPLOYEES.UNL;1'
cont> -- ident ' Load/Unload utility'
cont> -- backup file version 4
cont> -- database ident 'Oracle Rdb V7.2-131'
cont> filename 'DB$:MF_PERSONNEL'
cont> ;
%SQL-F-EXTRADATA, unexpected data at the end of the .RBR file
$
o The IMPORT_DATABASE option is intended to create a SQL script
as an aid to the database administrator. Some editing of the
generated script may be required under some circumstances.
Only a subset of the database attributes are dumped by RMU
for the IMPORT_DATABASE output. Continue to use the RMU Dump
Export Option=NOIMPORT_DATABASE to see all attributes recorded
in the interchange file.
14.4.5 – Examples
Example 1
The following is an example of the RMU Dump Export command using
the default qualifiers:
$ RMU/DUMP/EXPORT EMPLOYEES.UNL
Example 2
The following is an example of how to use the HEADER_SECTION
option to display just the header portion of the interchange
file, and avoid dumping the data or metadata for every row in the
table.
$ RMU/DUMP/EXPORT/OPTION=HEADER JOBS.UNL
BEGIN HEADER SECTION - (0)
NONCORE_TEXT HDR_BRP_ID - (20) : Load/Unload utility
CORE_NUMERIC HDR_BRPFILE_VERSION - (1) : 4
NONCORE_TEXT HDR_DBS_ID - (18) : Oracle Rdb V7.2-10
NONCORE_TEXT HDR_DB_NAME - (16) : DB$:MF_PERSONNEL
NONCORE_DATE HDR_DB_LOG_BACKUP_DATE - (8) : 3-JUL-2006 16:52:32.83
CORE_NUMERIC HDR_DATA_COMPRESSION - (1) : 1
END HEADER SECTION - (0)
In this example, the output describes the creator of the
interchange file (RMU/UNLOAD), the version of Rdb used to create
the file, the file specification of the database used, the date
and time the interchange file was created, and an indication that
compression was used by RMU Unload.
14.5 – Recovery Journal
Displays a recovery-unit journal (.ruj) file in ASCII format. Use
this command to examine the contents of an .ruj file. You might
find .ruj files on your system following a system failure.
An .ruj file contains header information and data blocks. Header
information describes the data blocks, which contain copies of
data modified in the database file.
14.5.1 – Description
The RMU Dump Recovery_Journal command specifies an .ruj file, not
a database file, as its parameter, and is a separate command from
the RMU Dump command used to display database areas and header
information.
The .ruj file is in binary format. This command translates the
binary file into an ASCII display format.
14.5.2 – Format
RMU/Dump/Recovery_Journal ruj-file-name

Command Qualifiers                  Defaults

/[No]Data                           /Data
/Output=file-name                   /Output=SYS$OUTPUT
14.5.3 – Parameters
14.5.3.1 – ruj-file-name
The .ruj file. The default file type is .ruj.
14.5.4 – Command Qualifiers
14.5.4.1 – Data
Data
Nodata
Specifies whether you want to display data blocks of the .ruj
file or just the .ruj file header.
The Data qualifier is the default. It causes the display of the
.ruj file data blocks (in addition to the file header) in an
ASCII display format.
The Nodata qualifier limits the display to the file header of the
.ruj file.
14.5.4.2 – Output
Output=file-name
The name of the file where output will be sent. The default is
SYS$OUTPUT. The default output file type is .lis, if you specify
a file name.
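As an illustrative sketch (the .ruj file and output file names are
assumptions), the following command displays only the file header of
a recovery-unit journal and sends the output to a listing file:

$ RMU/DUMP/RECOVERY_JOURNAL/NODATA/OUTPUT=RUJ_HEADER.LIS EMPLOYEES.RUJ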
14.5.5 – Usage Notes
o You do not need Oracle RMU privileges to use the RMU Dump
Recovery_Journal command. However, you must have OpenVMS READ
access to the .ruj file or OpenVMS BYPASS privilege to use the
RMU Dump Recovery_Journal command.
o The RMU Dump Recovery_Journal command does not validate the
file being dumped. If the file is not an .ruj file, the output
from the RMU Dump Recovery_Journal command is unintelligible.
o See the Oracle Rdb Guide to Database Maintenance for examples
showing the RMU Dump Recovery_Journal command.
14.6 – Row Cache
Allows you to display the in-memory contents of a row cache for
an open database.
14.6.1 – Description
The RMU Dump Row_Cache command is intended for use as a
diagnostic aid that allows you to display the in-memory contents
of a row cache for an open database. Use this command to display
the following information for each row in the specified cache:
o GRIC - Address of the GRIC data structure for the cache slot
o GRIB - Address of the GRIB data structure for the cache slot
o SLOT - Slot number within the cache
o NXTGRIC - Slot number of the next slot within the hash chain
o LHMTE - Flag values indicating:
- L - Row is latched
- H - Row is marked Hot (modified since last checkpoint or
sweep)
- M - Row is marked Modified
- T - Row is marked Too Big for (or removed from) the cache
- E - End of on-disk checkpoint file; should never be seen
with the RMU Dump Row_Cache command
o SNAPPNO - Snapshot pointer (either snapshot page number or
snapshot slot number)
o LEN - Length of the row in cache; 0 indicates row has been
deleted
o ACTLEN - Actual length of allocated space on the database page
for the row
o DBK - Database key for the row
o REFCNT - Reference count: number of processes with this row in
a cache working set
o UPD_PID - Process ID of process currently updating the row in
memory
o RVNO - In-memory row modification count
o TSN - Transaction sequence number of last transaction to
modify the row
14.6.2 – Format
RMU/Dump/Row_Cache root-file-spec

Command Qualifiers                  Defaults

/Cache_Name=cachename               None
/[No]Data                           /Data
/Output=file-name                   /Output=SYS$OUTPUT
14.6.3 – Parameters
14.6.3.1 – root-file-spec
Specifies the database root file for which you want to dump the
row_cache contents.
14.6.4 – Command Qualifiers
14.6.4.1 – Cache Name
Cache_Name=cachename
Specifies the name of the cache you want to dump. You must specify
the cache name.
14.6.4.2 – Data
Data
Nodata
The Data qualifier specifies that the in-memory content of a row_
cache is to be displayed in hexadecimal format along with the
ASCII translation. The Data qualifier is the default.
Specify the Nodata qualifier to display only header information
for each cache slot.
14.6.4.3 – Output
Output=filename
Specifies the name of the file where output is to be sent. The
default is SYS$OUTPUT. If you specify a file name, the default
output file type is .lis.
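For example, the following command sketch (the cache name EMPIDS_LOW
is an assumption; use a row cache defined in your database) displays
only the slot header information for one cache:

$ RMU/DUMP/ROW_CACHE/CACHE_NAME=EMPIDS_LOW/NODATA MF_PERSONNEL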
15 – Extract
Reads and decodes Oracle Rdb metadata and reconstructs equivalent
statements in Relational Database Operator (RDO) or SQL
(structured query language) code for the definition of that
database. These statements can either be displayed or extracted.
You can use these statements to create your database again if you
no longer have the RDO or SQL code that defined your database.
In addition, you can direct the RMU Extract command to produce
output for the following:
o An SQL or RDO IMPORT script (Items=Import)
o An RMU Unload command for each table (Items=Unload)
o An RMU Load command for each table (Items=Load)
o An RMU Set Audit command for the database (Items=Security)
o An RMU Verify command for the database (Items=Verify)
15.1 – Description
The RMU Extract command decodes information and reconstructs
equivalent commands in the language you select with the Language
qualifier for the definition of that database.
You can extract the definitions to either a file or to
SYS$OUTPUT.
The RMU Extract command extracts the following character set
information:
o For databases:
- The database default character set
- The national character set
o For domains:
- The character set of each character data type domain
- The length in characters of each character data type
domain
o For tables:
- The character set of each character data type column
- The length in characters of each character data type
column
The RMU Extract command may enclose object names in double
quotation marks to preserve uppercase and lowercase characters,
special character combinations, or reserved keywords.
15.2 – Format
RMU/Extract root-file-spec

Command Qualifiers                       Defaults

/Defaults[=defaults-list]                /Defaults=(quoting_rules=SQL92)
/Items[=item-list]                       /Items=All
/Language=lang-name                      /Language=SQL
/[No]Log[=log-file]                      /Nolog
/Options=options-list                    /Options=Normal
/[No]Output[=out-file]                   /Output=SYS$OUTPUT
/Transaction_Type[=                      See Description
  (transaction_mode,options...)]
15.3 – Parameters
15.3.1 – root-file-spec
The file specification for the database root file from which you
want to extract definitions. Note that you do not need to specify
the file extension. If the database root file is not found, the
command exits with a "file not found" error.
15.4 – Command Qualifiers
15.4.1 – Defaults
Defaults[=defaults-list]
This qualifier is used to change the output of the RMU Extract
command. The following defaults can be modified with the Defaults
qualifier:
o Allocation=integer
Noallocation
When you create a test database using the script generated
by the RMU Extract command, the allocation from the source
database may not be appropriate. You can use the Allocation
keyword to specify an alternate value to be used by all
storage areas, or you can use the Noallocation keyword to
omit the clause from the CREATE STORAGE MAP syntax. The
default behavior, when neither keyword is used, is to use
the allocation recorded in the database for each storage area.
See also the Snapshot_Allocation keyword.
o Date_Format
Nodate_Format
By default, the RMU Extract process assumes that DATE types
are SQL standard-compliant (that is DATE ANSI) and that the
built-in function CURRENT_TIMESTAMP returns TIMESTAMP(2)
values. If your environment uses DATE VMS exclusively, then
you can modify the default by specifying the default DATE_
FORMAT=VMS. The legal values are described in the Oracle Rdb
SQL Reference Manual in the SET DEFAULT DATE FORMAT section.
The default is Date_Format=SQL92.
Use Nodate_Format to omit the setting of this session
attribute from the script.
o Dialect
Nodialect
For some extracted SQL scripts the language dialect must
be specified. You can use the Dialect keyword to supply a
specified dialect for the script. You can find the legal
values for this option in the Oracle Rdb SQL Reference Manual
in the SET DIALECT section. The default is Nodialect.
o Language
Nolanguage
The RMU Extract command uses the process language, that is,
the translated value of SYS$LANGUAGE, or ENGLISH, for the
SET LANGUAGE command. However, if the script is used on a
different system then this language might not be appropriate.
You can use the Language keyword to supply a specified
language for the script. Legal language names are defined by
the OpenVMS system logical name table; examine the logical
name SYS$LANGUAGES for a current set. Use the Nolanguage
keyword to omit this command from the script.
o Quoting_Rules
Noquoting_Rules
You can use the Quoting_Rules keyword to supply a specified
setting for the script. You can find the legal values for
this option in the Oracle Rdb SQL Reference Manual in the SET
QUOTING RULES section. The default is Quoting_Rules=SQL92.
The RMU Extract command assumes that SQL keywords and names
containing non-ASCII character set values are enclosed in
quotation marks.
o Snapshot_Allocation=integer
Nosnapshot_Allocation
When you create a test database from the RMU Extract output,
the snapshot file allocation from the source database may not
be appropriate. You can use the Snapshot_Allocation keyword to
specify an alternate value to be used by all snapshot areas,
or you can use the Nosnapshot_Allocation keyword to omit the
"snapshot allocation is" clause. The default behavior, when neither
keyword is used, is to use the snapshot allocation stored in
the database for each snapshot area. See also the Allocation
keyword.
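For example, the following command sketch (the allocation value is an
assumption) generates a script in which every storage area is created
with an allocation of 512 pages and the snapshot allocation clauses
are omitted:

$ RMU/EXTRACT/DEFAULTS=(ALLOCATION=512,NOSNAPSHOT_ALLOCATION) MF_PERSONNEL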
15.4.2 – Items
Items[=item-list]
Allows you to extract and display selected definitions. Note that
each of the item names can be combined to provide shorter command
lines such as the following:
$ RMU/EXTRACT/NOLOG/ITEMS=(ALL,NODATABASE) MF_PERSONNEL
If you omit the Items qualifier from the command line or specify
it without any options, the action defaults to Items=All.
The following options can be specified with the Items qualifier:
o All
Indicates that all database items are to be extracted. This
is the default and includes all items except Alter_Database,
Forward_References, Import, Load, Protections, Revoke_Entry,
Security, Synonyms, Unload, Verify, Volume, and Workload
options. You can use either All or Noall in combination with
other items to select specific output.
In the following example, the Items=All option causes all the
definitions except for Triggers to be extracted and displayed:
$ RMU/EXTRACT/ITEMS=(ALL,NOTRIGGERS) MF_PERSONNEL
The following example displays domain and table definitions.
Note that the Noall option could have been omitted:
$ RMU/EXTRACT/ITEMS=(NOALL, DOMAIN, TABLE) MF_PERSONNEL
o Alter_Database (or Change_Database)
Noalter_Database
Displays the physical database after-image journal object
definition.
o Catalog
Nocatalog
Displays all contents of the catalog created for an SQL
multischema database. This item is ignored if the interface
is RDO.
o Collating_Sequences
Nocollating_Sequences
Displays all the collating sequences defined for the database
that you select. Note that Oracle Rdb does not save the name
of the source OpenVMS National Character Set (NCS) library;
by default, the extracted definition refers to the logical name
NCS$LIBRARY instead.
o Constraints
Noconstraints
By default, table and column constraints are output by the
Items=Table qualifier. If you specify Item=Noconstraints,
constraint information is not extracted for any table. If you
specify the Language=SQL qualifier, the default is to have
Item=Constraints enabled when tables are extracted.
To extract all constraints as an ALTER TABLE statement, use
the Item=Constraint and Option=Defer_Constraints qualifiers.
To force all constraints to be defined after tables are
defined, use the Item=Tables and Option=Defer_Constraints
qualifiers.
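For example, the following command sketch extracts the table
definitions with all constraints deferred to ALTER TABLE statements
issued after the tables are created:

$ RMU/EXTRACT/ITEMS=TABLES/OPTIONS=DEFER_CONSTRAINTS MF_PERSONNEL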
o Database
Nodatabase
Displays the database attributes and characteristics. This
includes information such as the database root file name, the
number of buffers, the number of users, the repository path
name, and the characteristics for each storage area.
If you specify RMU Extract with the Option=Nodictionary_
References qualifier, the data dictionary path name is
ignored.
o Domains (or Fields)
Nodomains
Displays the domain definitions. If the domain was originally
defined using the data dictionary path name, the output
definition shows this. If the Option=Nodictionary_References
qualifier is specified, the data dictionary path name is
ignored and the column attributes are extracted from the
system tables.
o Forward_References
Noforward_References
Queries the dependency information in the database
(RDB$INTERRELATIONS) and extracts DECLARE FUNCTION and
DECLARE PROCEDURE statements for only those routines that
are referenced by other database objects. The default is
Noforward_References.
The Forward_References item is used in conjunction with other
Item keywords, for example, /Item=(All,Forward).
o Functions
Nofunctions
Displays external function definitions.
o Import
Noimport
Generates an RDO or SQL IMPORT script that defines every
storage area and row cache. The Language qualifier determines
whether Oracle RMU generates an RDO or SQL IMPORT script.
(If you specify the Language=SQL or the Language=ANSI_SQL
qualifier, the same SQL IMPORT script is generated.) Because
the RDO interface does not accept many of the database options
added to recent versions of Oracle Rdb, Oracle Corporation
recommends that you specify the Language=SQL qualifier (or
accept the default).
The Items=Import qualifier is useful when you want to re-
create a database that is the same or similar to an existing
database. Editing the file generated by Oracle RMU to change
allocation parameters or add storage areas and so on is easier
than writing your own IMPORT script from scratch.
When Oracle RMU generates the IMPORT script, it uses an
interchange file name of rmuextract_rbr in the script.
Therefore, you must either edit the IMPORT script generated
by Oracle RMU to specify the interchange file that you want
to import, or assign the logical name RMUEXTRACT_RBR to your
interchange file name. (An interchange file is created by an
SQL or RDO EXPORT statement.) See Example 14 in the Examples
help entry under this command.
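As an illustrative sketch (the interchange file and script names are
assumptions), you might define the logical name and then run the
generated script:

$ DEFINE RMUEXTRACT_RBR DISK1:[TESTING]EMPLOYEES.RBR
$ SQL$ @IMPORT_SCRIPT.SQL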
o Indexes (or Indices)
Noindexes
Displays index definitions, including storage map information.
o Load
Unload
Generates a DCL command procedure containing an RMU Load or
RMU Unload command for each table in the database. This item
must be specified explicitly, and is not included by default
when you use the Items=All qualifier.
Oracle RMU generates the Fields qualifier for the Load and
Unload scripts when you specify the Option=Full qualifier. If
you do not specify the Option=Full qualifier, the scripts are
generated without the Fields qualifier.
If you specify the RMU Extract command with the Item=Unload
qualifier, DCL commands are added to the script to create a
file with type .COLUMNS. This file defines all the unloaded
columns. The file name of the .COLUMNS file is derived from
the name of the extracted table. You can reference the file by
using the "@" operator within the Fields qualifier for the RMU
Load and RMU Unload commands.
Virtual columns, AUTOMATIC or COMPUTED BY table columns,
and VIEW calculated columns appear in the .COLUMNS file as
comments.
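For example, the following command sketch (the output file name is an
assumption) writes a DCL procedure of RMU Unload commands, one per
table, including the Fields qualifier and the .COLUMNS files
described above:

$ RMU/EXTRACT/ITEMS=(NOALL,UNLOAD)/OPTIONS=FULL/OUTPUT=UNLOAD_ALL.COM -
MF_PERSONNEL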
o Module
Nomodule
Displays procedure and function definitions. This item is
valid only when the Language specification is SQL; it is
ignored if the Language specification is RDO or ANSI_SQL.
o Outlines
Nooutlines
Displays query outline definitions. This item is valid only
when the Language specification is SQL; it is ignored if the
Language specification is RDO or ANSI_SQL.
o Procedures
Noprocedures
Extracts external procedures.
o Profiles
Noprofiles
Displays profiles as defined by the CREATE PROFILE statement.
o Protections
Noprotections
Displays the protection access control list (ACL) definitions.
If the protections are defined using SQL ANSI semantics, they
cannot be represented in RDO. In this case, the diagnostic
message warns you that the protections must be extracted using
the Language=SQL qualifier. If you specify Language=ANSI_SQL,
a diagnostic message warns you that the ACL-style protections
cannot be extracted in ANSI format. You must explicitly
specify the Protections option. It is not included by default
when you use the Items=All qualifier.
o Revoke_Entry
Norevoke_Entry
Extracts a SQL or RDO script that deletes the protections from
all access control lists in the database: database, table,
sequences, column, module, function, and procedure.
The output script contains a series of SQL REVOKE ENTRY
statements (or DELETE PROTECTION statements if the language
selected is RDO) that remove the access control entry for the
user and all objects.
o Role
Norole
Displays role definitions as defined by the SQL CREATE ROLE
statement. In addition, any roles that have been granted
are displayed as a GRANT statement. By default, roles are
not extracted, nor are they included when you specify the
Items=All qualifier.
o Schema
Noschema
Displays the schema definitions for an SQL multischema
database. This option is ignored if the interface is RDO.
o Sequence
Nosequence
Displays the sequence definitions in the database that were
originally defined with the SQL CREATE SEQUENCE statement.
o Security
Nosecurity
Displays RMU Audit commands based on information in the
database. This item must be specified explicitly, and is not
included by default when you use the Items=All qualifier.
o Storage_Maps
Nostorage_Maps
Displays storage map definitions, including the list
(segmented string) storage map.
o Synonyms
Nosynonyms
Generates a report of all the synonyms defined for the
database. All synonyms of a database object, including
synonyms of those synonyms, are grouped together. The output
is ordered by creation as recorded by the RDB$CREATED column.
This report is useful for viewing all synonyms or moving them
to other databases. However, since synonyms refer to many
different database objects, a single set of definitions is
usually not adequate when defining a new database. Oracle
Corporation recommends that you use the Option=Synonym
qualifier in most cases.
o Tables (or Relations)
Notables
Displays table definitions in the same order in which they
were created in the database.
If the table was originally defined using the data dictionary
path name, that path name is used for the definition.
If you specify the Option=Nodictionary_References qualifier,
the data dictionary path name is ignored and the table
attributes are extracted from the system tables.
If Item=Noconstraints is specified, constraint information is
not extracted for any table.
The Items=Tables qualifier handles domains in the following
ways:
- The output for this item reflects the original definitions.
If a column is based on a domain of a different name, the
BASED ON clause is used in RDO, and the domain name is
referenced by SQL.
- Any columns that are based on fields in a system table are
processed but generate warning messages.
- Certain domains created using RDO in a relation definition
cannot be extracted for RDO because it is not possible to
distinguish columns defined using a shorthand method as
shown in the example that follows. In this case, the column
FIELD_1 becomes or is defined as a domain.
DEFINE RELATION REL1.
FIELD_1 DATATYPE IS TEXT SIZE 10.
END.
However, this type of definition in SQL causes special
domains to be created with names starting with SQL$. In
this case, the SQL domain is translated into the following
data type:
CREATE TABLE TAB1
(COLUMN_1 CHAR(10));
The output for this item also includes the table-level
constraints that can be applied: PRIMARY KEY, FOREIGN KEY, NOT
NULL, UNIQUE, and CHECK. In the case of the CHECK constraint,
the expression might not be translated to or from RDO and SQL
due to interface differences.
o Triggers
Notriggers
Displays trigger definitions.
o User
Nouser
Displays user definitions as defined by the SQL CREATE USER
statement. In addition, if you also specify Role with the
Item qualifier, then any roles that have been granted to a
user are displayed as GRANT statements. By default, Users are
not displayed, nor are they displayed when you specify the
Items=All qualifier.
o Verify
Noverify
Causes the generation of an optimal DCL command procedure
containing multiple RMU Verify commands. Using this command
procedure is equivalent to performing a full verification
(RMU Verify with the All qualifier) for the database. This
command procedure can be broken down further into partial
command scripts to perform partial verify operations. These
partial command scripts can then be submitted to different
batch queues to do a full verify operation in parallel, or
they can be used to spread out a full verify operation over
several days by verifying a piece of the database at a time.
A partitioning algorithm is a procedure to determine what
portions of the database should be verified in the same
command script. For example, areas with interrelations
should be verified with the same partial command script. A
partitioning algorithm considers the following when creating a
partial command script from the equivalent RMU Verify command
with the All qualifier:
1. Each storage area is assigned to a partition.
2. For each table in the database, if the table is not
partitioned, the table is put in the partial command script
corresponding to that storage area; otherwise, if the table
is partitioned across several storage areas, the partitions
corresponding to all of the storage areas are merged into
one partial command script and the table is added to this
partial command script.
3. For each index in the database, the process shown in step 2
is followed.
4. For an index on a table, the index and table are merged
into one partial command script.
The scripts of partial RMU Verify commands are written in
the form of a command procedure. Each partial command script
is preceded by a label of the form STREAM_n: where n is an
integer greater than 1. For example, to execute the command
at label STREAM_3:, invoke the command procedure by using the
following syntax:
$ @<command-procedure-name> STREAM_3
The resultant command procedure is set up to accept up to four
parameters, P1, P2, P3, and P4, as shown in Parameters for
Generated Command File.
Table 9 Parameters for Generated Command File

Parameter  Option        Description

P1         Stream_n      Specifies the command stream to be
                         executed. The variable n is the "number"
                         of the RMU Verify command stream to
                         be executed. If omitted, all command
                         streams are executed.

P2         [No]Log       Specifies whether to use the Log
                         qualifier in the RMU Verify command
                         line. If omitted, the DCL verify switch
                         value is used.

P3         Read_Only |   Provides the RMU Verify
           Protected |   Transaction_Type value. If omitted,
           Exclusive     Transaction_Type=Protected is used.

P4                       Specifies the name of the output file
                         for the RMU Verify Output qualifier. If
                         omitted, Output=SYS$OUTPUT is used.
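As an illustrative sketch (the procedure name VERIFY_ALL.COM is an
assumption), the following invocation runs only the second command
stream, with logging enabled, a read-only transaction, and output
sent to a listing file:

$ @VERIFY_ALL.COM STREAM_2 LOG READ_ONLY VERIFY_2.LIS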
o Views
Noviews
Displays view definitions. If the database was defined using
SQL, it is possible that the view cannot be represented
in RDO. In this case, the diagnostic message warns that
the view definition is being ignored, and the user should
use LANGUAGE=SQL to extract the view. Note the following
transformations the RMU Extract command makes when it cannot
precisely replicate the SQL source code:
- The RMU Extract command cannot precisely replicate derived
table column names or correlation names for any select
expression.
The RMU Extract command generates new names for correlation
names (C followed by a number) and derived table column
names (F followed by a number).
For example, suppose you create a view, as follows:
SQL> ATTACH 'FILENAME mf_personnel';
SQL> CREATE VIEW DERIVED_1
cont> (F1) AS
cont> SELECT CAST(AVG(JOB_COUNT) AS INTEGER(2))
cont> FROM (SELECT EMPLOYEE_ID, COUNT (*)
cont> FROM JOB_HISTORY
cont> GROUP BY EMPLOYEE_ID) AS EMP_JOBS (EMPLOYEE_ID, JOB_COUNT);
SQL> COMMIT;
If you issue the following RMU Extract command, you receive
the output shown:
$ rmu/extract/item=view/option=(match:DERIVED_1%,noheader,filename_only) -
mf_personnel
set verify;
set language ENGLISH;
set default date format 'SQL92';
set quoting rules 'SQL92';
set date format DATE 001, TIME 001;
attach 'filename MF_PERSONNEL';
create view DERIVED_1
(F1) as
(select
CAST(avg(C2.F2) AS INTEGER(2))
from
(select C4.EMPLOYEE_ID, count(*)
from JOB_HISTORY C4
group by C4.EMPLOYEE_ID)
as C2 (F1, F2));
commit work;
- The RMU Extract command cannot generate the original SQL
source code for the user-supplied names of AS clauses. This
is particularly apparent when the renamed select expression
is referenced in an ORDER BY clause. In such a case, the
RMU Extract command generates correlation names in the form
RMU$EXT_n where n is a number.
For example, suppose you create a view, as follows:
SQL> SET QUOTING RULES 'SQL92';
SQL> CREATE DATABASE FILENAME xyz;
SQL> CREATE TABLE DOCUMENT
cont> (REPORT CHAR(10));
SQL> CREATE TABLE REPORTING
cont> (NAME CHAR(5));
SQL> CREATE TABLE "TABLES"
cont> (CODTAB CHAR(5));
SQL> CREATE VIEW VIEW_TEST
cont> (CREDIT,
cont> CODTAB,
cont> CODMON) AS
cont> SELECT
cont> C1.NAME,
cont> C2.CODTAB,
cont> (SELECT C7.REPORT FROM DOCUMENT C7) AS COM
cont> FROM REPORTING C1, "TABLES" C2
cont> ORDER BY C1.NAME ASC, C2.CODTAB ASC, COM ASC;
SQL>
If you issue the following RMU Extract command, you receive
the output shown:
$ RMU/EXTRACT/ITEM=VIEW MF_PERSONNEL.RDB
.
.
.
create view VIEW_TEST
(CREDIT,
CODTAB,
CODMON) as
select
C1.NAME,
C2.CODTAB,
(select DOCUMENT.REPORT from DOCUMENT) AS RMU$EXT_1
from REPORTING C1, "TABLES" C2
order by C1."NAME" asc, C2.CODTAB asc, RMU$EXT_1 asc;
o Volume
Novolume
Displays cardinality information in a PDL-formatted file for
use by Oracle Expert for Rdb. This item must be specified
explicitly, and is not included by default when the Items=All
qualifier is used.
o Workload
Noworkload
Generates a DCL command language script. The script is used
with the RMU Insert Optimizer_Statistics command to extract
the work load and statistics stored in the RDB$WORKLOAD table.
The unloaded information can be applied after a new database
is created using the SQL EXPORT and IMPORT statements, or
it can be applied to a similar database for use by the RMU
Collect Optimizer_Statistics/Statistic=Workload command.
This item must be specified explicitly, and is not included by
default when the Items=All qualifier is used. The default is
Noworkload.
You can modify the output of the Item=Workload qualifier by
specifying the following keywords with the Option qualifier:
o Audit_Comment
Each RMU Insert Optimizer_Statistics statement is preceded
by the created and altered date for the workload entry. The
default is Noaudit_comment.
o Filename_Only
The database file specification output for the RMU Insert
Optimizer_Statistics statement is abbreviated to just the
filename.
o Match
A subset of the workload entries based on the wildcard file
name is selected.
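For example, the following command sketch generates the workload
script with audit comments and abbreviated file specifications:

$ RMU/EXTRACT/ITEMS=(NOALL,WORKLOAD)/OPTIONS=(AUDIT_COMMENT,FILENAME_ONLY) -
MF_PERSONNEL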
15.4.3 – Language
Language=lang-name
Allows you to select one of the following interfaces:
o SQL
When you specify the Language=SQL qualifier, Oracle RMU
generates the Oracle Rdb SQL dialect. The Oracle Rdb SQL
dialect is a superset of SQL92 Entry level, with language
elements from Intermediate and Full SQL92 levels. It also
contains language elements from SQL:1999 and extensions
specific to Oracle Rdb.
o ANSI_SQL
When you specify the Language=ANSI_SQL qualifier and specify
the Option=Normal qualifier, Oracle RMU tries to generate
ANSI SQL statements that conform to the ANSI X3.135-1989 SQL
standard.
When you specify the Language=ANSI_SQL qualifier and the
Option=Full qualifier, Oracle RMU tries to generate SQL
statements that conform to the current ANSI and ISO SQL
database language standards. Refer to the Oracle Rdb SQL
Reference Manual for more information.
Regardless of the Option parameter you specify, any Oracle
Rdb specific features (such as DATATRIEVE support clauses and
storage maps) are omitted.
o RDO
When you specify the RDO language option, Oracle RMU generates
RDO statements.
The default is Language=SQL.
The Language qualifier has no effect on the output generated by
the Items=Load, Items=Unload, and Items=Verify qualifiers. This
is because these qualifiers generate scripts that contain Oracle
RMU commands only.
15.4.4 – Log
Log[=log-file]
Nolog
Enables or disables log output during execution of the RMU Extract
command. The log includes the current version number of Oracle
Rdb, and the values of the parameter and qualifiers. The default
is Nolog. The default file extension is .log. If you specify Log
without specifying a file name, output is sent to SYS$OUTPUT.
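For example, the following command sketch (the log file name is an
assumption) records the Oracle Rdb version and the qualifier values
used for the extraction:

$ RMU/EXTRACT/LOG=EXTRACT.LOG MF_PERSONNEL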
15.4.5 – Options
Options=options-list
This qualifier is used to change the output of the RMU Extract
command. This qualifier is not applied to output created by the
Items=Unload, Items=Load, Items=Security, or the Items=Verify
qualifier.
The following options can be specified with the Options
qualifier:
o Audit_Comment
Noaudit_Comment
Annotates the extracted objects with the creation and last
altered timestamps as well as the username of the creator. The
date and time values are displayed using the current settings
of SYS$LANGUAGE and LIB$DT_FORMAT. Noaudit_Comment is the
default.
o Cdd_Constraints
Nocdd_Constraints
Specifies that tables extracted by pathname include all
constraints. The Option=Nocdd_Constraints qualifier is
equivalent to the Option=Defer_Constraints qualifier
for tables with a pathname. This option is ignored if
Item=Noconstraints is specified.
When you specify the Cdd_Constraints option and the
Dictionary_References option, the RMU Extract command does
not generate ALTER TABLE statements to add constraints,
but instead assumes they will be inherited from the data
dictionary.
When you use the Nocdd_Constraints option and the Dictionary_
References option, the RMU Extract command generates ALTER
TABLE statements to add FOREIGN KEY and CHECK constraints
after all base tables have been created.
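For example, a command of the following form (a sketch; the
database name is illustrative) extracts tables by pathname and
assumes their constraints are inherited from the data dictionary:
$ RMU/EXTRACT/ITEM=TABLE/OPTION=(CDD_CONSTRAINTS,DICTIONARY_REFERENCES) -
_$ CDD_SQL_DB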
o Cdd_References
Nocdd_References
This option is an alias for Dictionary_References.
o Column_Volume
Nocolumn_Volume
Directs the RMU Extract command to output the table, column,
and column segmented string cardinalities based on sorted
indexes. Note that this option must be used in combination
with the Items=Volume qualifier. If the Items=Volume qualifier
is omitted, cardinalities are not displayed.
RMU Extract generates data of the following type:
Volume for schema MF_PERSONNEL
Default volatility is 5;
Table WORK_STATUS all is 3;
Table EMPLOYEES all is 100;
Column EMPLOYEE_ID all is 100;
Column LAST_NAME all is 83;
.
.
.
Table RESUMES all is 3;
List RESUME
Cardinality IS 3
Number of segments is 3
Average length of segments is 24;
o Debug
Nodebug
Dumps the internal representation for SQL clauses such as
AUTOMATIC AS, VALID IF, COMPUTED BY, MISSING_VALUE, DEFAULT_
VALUE, CONSTRAINTS, SQL DEFAULT, TRIGGERS, VIEWS, and STORAGE
MAPS during processing. The keyword Debug cannot be specified
with the keywords Normal or Full in the same Options qualifier
list.
o Defer_Constraints
Nodefer_Constraints
Forces all constraints to be defined (using an ALTER TABLE
statement) after all tables have been extracted. This option
is ignored if Item=Noconstraints is specified.
If Option=Nodefer_Constraints is specified, all constraints
are generated with the CREATE TABLE statement. If neither
Option=Defer_Constraints nor Option=Nodefer_Constraints is
specified, the default behavior is to generate NOT NULL,
UNIQUE, and PRIMARY KEY constraints with the CREATE TABLE
statement, and generate CHECK and FOREIGN KEY constraints in a
subsequent ALTER TABLE statement.
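For example, a command of the following form (illustrative)
generates CREATE TABLE statements without constraints, followed by
ALTER TABLE statements that add all the constraints:
$ RMU/EXTRACT/ITEM=TABLE/OPTION=DEFER_CONSTRAINTS MF_PERSONNEL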
o Dictionary_References
Nodictionary_References
Directs the RMU Extract command to output definitions for
domains (fields) and tables (relations) that reference data
dictionary path names rather than using the information
contained in the Oracle Rdb system tables. In addition to
the database statements, this option also displays the data
dictionary path name stored in the database. Refer to Example
7 in the Examples help entry under this command for an example
of using this option.
If neither the Option=Dictionary_References qualifier nor the
Option=Nodictionary_References qualifier is specified, then
Oracle RMU examines the RDB$RELATIONS and RDB$FIELDS system
tables to determine whether or not any domains or tables refer
to the data dictionary. If references are made to the data
dictionary, then the Option=Dictionary_References qualifier is
the default. Otherwise, it is assumed that the data dictionary
is not used, and the default is the Option=Nodictionary_
References qualifier.
The Nodictionary_References keyword causes all references to
the data dictionary to be omitted from the output. This is
desirable if the database definition is to be used on a system
without the data dictionary or in a testing environment.
If the Items=Database and Option=Nodictionary_References
qualifiers are selected, the data dictionary path name stored
in the system table is ignored. For SQL, the NO PATHNAME
clause is generated, and for RDO, the clause DICTIONARY IS
NOT USED is generated.
If the Items qualifier specifies Domain or Table, and the
Option qualifier specifies Nodictionary_References, the
output definition includes all attributes stored in the system
tables.
o Disable_Objects
Nodisable_Objects
Requests that all disabled objects be written to the RMU
Extract output file as disabled (see the description for the
Omit_Disabled option). Disable_Objects is the default.
The Nodisable_Objects option displays the objects but omits
the disabling syntax.
o Domains
Nodomains
The Nodomains option is used to eliminate the domain name
from within metadata objects. The domain name is replaced
by the underlying data type. This option is designed for use
with tools that do not recognize this SQL:1999 SQL language
feature.
Effect on /Language=SQL output:
The default is Option=Domains.
A SQL script generated when Option=Nodomains was specified
does not include the domain name in the CREATE TABLE column
definition, CREATE FUNCTION or CREATE PROCEDURE parameter
definitions, or any value expression which uses the CAST
function to convert an expression to a domain data type
(such as the CREATE VIEW and CREATE TRIGGER statements).
The output generated by the RMU Extract command for
functions and procedures in the CREATE MODULE statement
is not affected by the Option=Nodomains option because it
is based on the original source SQL for the routine body
which is not edited by the RMU Extract command.
Effect on /Language=ANSI_SQL output:
The default is Option=Nodomains when the Option=Normal
qualifier is specified or is the default. The RMU Extract
command does not generate a list of domain definitions even
if the Items=Domains or Items=All qualifier is used. If
you want the generated script to include a list of domain
definitions, use the Options=Domains qualifier:
$ RMU/EXTRACT/LANGUAGE=ANSI_SQL/OPTION=DOMAINS databasename
Use the Option=Full qualifier to have the use of domains
included in the syntax generated for SQL:1999.
o Filename_Only
Nofilename_Only
Causes all file specifications extracted from the database to
be truncated to only the file name. The use of this qualifier
allows for easier relocation of the new database when you
execute the created procedure.
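For example, a command of the following form (the output file name
is illustrative) writes a definition script in which storage areas
are identified by file name only, so the script can be executed
from a different directory or on a different system:
$ RMU/EXTRACT/ITEM=DATABASE/OPTION=FILENAME_ONLY/OUTPUT=NEW_DB.SQL -
_$ MF_PERSONNEL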
o Full
Nofull
Specifies that if metadata cannot be translated from the
language that defined the database to an equivalent construct
in the language specified with the Language qualifier (for
example, a DEFAULT clause defined in SQL when the language
selected is RDO), then the metadata is displayed in comments,
or Oracle RMU attempts to create a translation that most
closely approximates the original construct.
Nofull is identical to the Normal option.
o Group_Table
Nogroup_Table
Specifies that indexes and storage maps are to be extracted
and grouped by table. The table is extracted first, then any
PLACEMENT VIA index, then any storage map, and finally all
other indexes.
When the Group_Table qualifier is combined with the
Option=Match qualifier, you can select a table and its related
storage map and indexes.
The default behavior is Nogroup_Table, which means that items
are extracted and grouped by type.
o Header
Noheader
Specifies that the script header and section headers are
included in the extract. This is the default. Because the
header has an included date, specifying Noheader to suppress
the header may allow easier comparison with other database
extractions when you use the OpenVMS DIFFERENCES command.
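For example, commands of the following form (file names are
illustrative) create two headerless extractions that can be
compared directly:
$ RMU/EXTRACT/ITEM=TABLE/OPTION=NOHEADER/OUTPUT=BEFORE.SQL MF_PERSONNEL
$ RMU/EXTRACT/ITEM=TABLE/OPTION=NOHEADER/OUTPUT=AFTER.SQL MF_PERSONNEL
$ DIFFERENCES BEFORE.SQL AFTER.SQL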
o Limit_Volume=nn
Nolimit_Volume
Specifies the maximum amount of data to be scanned for
segmented fields. The RMU Extract command stops scanning when
the limit nn is reached. The number of segments and average
length of segments are calculated from the data that was
scanned. Limit_Volume=1000 is the default.
Nolimit_Volume specifies that a full scan for segmented
strings should be done.
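For example, a command of the following form (the limit value is
illustrative) restricts the segmented string scan while collecting
volume information:
$ RMU/EXTRACT/ITEM=VOLUME/OPTION=(LIMIT_VOLUME=500) MF_PERSONNEL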
o Match:match-string
The Match option allows selection of wildcard object names
from the database. The match string can contain the standard
SQL wildcard characters: the percent sign (%) to match any
number of characters, and the underscore (_) to match a single
character. In addition, the backslash (\) can be used to
prefix these wildcards to prevent them from being used in
matching. If you are matching a literal backslash, use the
backslash twice, as shown in the following example:
Option=Match:"A1\\A2%"
The match string defaults to the percent sign (%) so that all
objects are selected. To select those objects that start with
JOB, use the qualifier Option=Match:"JOB%".
From the mf_personnel database, this command displays the
definitions for the domains JOB_CODE_DOM and JOB_TITLE_DOM,
the tables JOBS and JOB_HISTORY, the index JOB_HISTORY_HASH,
and the storage maps JOBS_MAP and JOB_HISTORY_MAP.
The match string can be quoted as shown if the string contains
spaces or other punctuation characters used by DCL or other
command language interfaces. Most object names are space
filled; therefore, follow the match string with the percent
sign (%) to match all trailing spaces.
The Match option can be used in conjunction with the Item
qualifier to extract specific tables, indexes, and so on,
based on their name and type.
If Group_Table is specified, the match name is assumed
to match a table name; all indexes for that table will be
extracted when the Items=Index qualifier is specified.
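For example, a command of the following form (illustrative)
extracts all objects whose names start with JOB from the
mf_personnel database:
$ RMU/EXTRACT/ITEM=ALL/OPTION=MATCH:"JOB%" MF_PERSONNEL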
o Multischema
Nomultischema
Displays the SQL multischema names of database objects. It is
ignored by the Relational Database Operator (RDO).
The Nomultischema option displays only the SQL single-schema
names of database objects.
o Normal
Nonormal
Includes only the specific source language code used to define
the database. This is the default.
In addition, this option propagates RDO VALID IF clauses as
column CHECK constraints with the attribute NOT DEFERRABLE
when the Language specification is SQL or ANSI_SQL. When an
RDO VALID IF clause is converted, Oracle RMU generates error
messages similar to the following in your log file:
%RMU-W-UNSVALIDIF, VALID IF clause not supported in SQL - ignored
for DEGREE.
%RMU-I-COLVALIDIF, changed VALID IF clause on domain DEGREE to
column check constraint for DEGREES.DEGREE
The first message is a warning that the VALID IF clause could
not be added to the domain definition because the VALID IF
clause is not supported by SQL. The second message is an
informational message that tells you the VALID IF clause was
changed to a column check constraint.
o Omit_Disabled
Noomit_Disabled
Causes all disabled objects to be omitted from the output
of the RMU Extract command. This includes indexes that have
MAINTENANCE IS DISABLED, USERS with ACCOUNT LOCK, and disabled
triggers and constraints.
The Noomit_Disabled option causes all disabled objects to be
included in the output from the RMU Extract command. Noomit_
Disabled is the default.
o Order_By_Name
Noorder_By_Name
Order_by_Name displays the storage area, cache, and journal
names for the items Database, Alter_Database (also known as
Change_Database), and Import in alphabetic order by the ASCII
collating sequence.
Noorder_By_Name displays the storage area, cache, and journal
names for the items Database, Alter_Database, and Import
in approximate definition order. The default ordering is
approximate because a DROP STORAGE AREA, DROP CACHE, or
DROP JOURNAL statement frees a slot that can be reused, thus
changing the order. Noorder_By_Name is the default.
You can use the logical name RDMS$BIND_SORT_WORKFILES to
allocate work files, if needed.
NOTE
If the identifier character set for the database is not
MCS or ASCII, then this option is ignored. Characters
from other character sets do not sort appropriately under
the ASCII collating sequence.
o Synonyms
Nosynonyms
Causes the synonyms to be extracted immediately after the
referenced object, as shown in the following excerpt from an
output file created using the Item=Table qualifier:
create table HISTORICAL_JOB_INFORMATION (
EMPLOYEE_ID
INTEGER,
USER_ID
CHAR (15),
JOB_TITLE TITLE,
START_DATE
DATE,
CURRENT_SALARY MONEY_IN_DOLLARS
default NULL);
create synonym JOBHIST
for table HISTORICAL_JOB_INFORMATION;
Because synonyms can be referenced from almost any database
object, if you keep the definitions close to the target object
you can eliminate occurrences of undefined symbols during
script execution. The default is Option=Synonyms.
Use the Option=Nosynonyms qualifier to disable the display
of CREATE SYNONYM statements. The synonyms referenced in
database objects such as module, procedure, trigger, and table
definitions are still extracted.
o Volume_Scan
Novolume_scan
Directs the RMU Extract command to perform queries to
calculate the cardinality of each table, if both the
Items=Volume and Options=Volume_Scan qualifiers are specified.
The default is Options=Novolume_Scan, in which case the
approximate cardinalities are read from the RDB$RELATIONS
system table. The Options=Volume_Scan option is ignored if the
Items=Volume qualifier is not selected.
o Width=n
Specifies the width of the output files. You can select values
from 60 to 512 characters. The default of 80 characters is
appropriate for most applications.
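For example, a command of the following form (illustrative)
formats the extracted script with 132-character lines:
$ RMU/EXTRACT/OPTION=(WIDTH=132) MF_PERSONNEL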
15.4.6 – Output
Output[=out-file]
Nooutput
Names the file to which the RMU Extract command writes the data
definition language (DDL) statements. The file extension
defaults to .rdo if you specify the Language=RDO qualifier, or
to .sql if you specify either the Language=SQL or the
Language=ANSI_SQL
qualifier. If you specify the Volume option only, the output file
type defaults to .pdl. If you specify Load, Security, Verify, or
Unload only, the output file type defaults to .com. The default
is SYS$OUTPUT. If you disable the output by using the Nooutput
qualifier, command scripts are not written to an output file. The
Log output can be used to determine which features used by the
database cannot be converted to SQL.
Using Qualifiers to Determine Output Selection shows the
effects of the various combinations of the Language and Options
qualifiers.
Table 10 Using Qualifiers to Determine Output Selection
Language  Option                   Effect on Output
RDO       Normal                   Generates RDO syntax.
          Full                     Generates RDO syntax.
          Dictionary_References    Outputs path name references to
                                   the repository.
          Nodictionary_References  Converts path name references
                                   to the repository to RDO
                                   syntax.
          Multischema              Ignored by RDO.
SQL       Normal                   Generates SQL syntax.
          Full                     Tries to convert RDO specific
                                   features to SQL (for example,
                                   the VALID IF clause).
          Dictionary_References    Outputs path name references to
                                   the data dictionary.
          Nodictionary_References  Converts path name references
                                   to the data dictionary to SQL
                                   syntax.
          Multischema              Selects SQL multischema naming
                                   of objects.
ANSI_SQL  Normal                   Generates ANSI/ISO syntax.
          Full                     Generates ANSI/ISO SQL92 syntax
                                   supported by SQL.
          Dictionary_References    Ignored for ANSI_SQL.
          Nodictionary_References  Converts path name references
                                   to the data dictionary to SQL
                                   syntax. This is the default for
                                   ANSI_SQL.
          Multischema              Selects SQL multischema naming
                                   of objects.
Any       Audit_Comment            Adds a comment before each
                                   definition.
          Debug                    Annotates output where
                                   possible.
          Domains                  Replaces domain names in CAST
                                   expressions, column and
                                   parameter definitions, and
                                   RETURNS clauses with the
                                   underlying SQL data type.
          Filename_Only            Truncates all file
                                   specifications extracted from
                                   the database to only the file
                                   name.
          Volume_Scan              Forces a true count of table
                                   cardinalities. Only valid for
                                   Items=Volume.
15.4.7 – Transaction Type
Transaction_Type[=(transaction_mode,options,...)]
Allows you to specify the transaction mode, isolation level, and
wait behavior for transactions.
Use one of the following keywords to control the transaction
mode:
o Automatic
When Transaction_Type=Automatic is specified, the transaction
type depends on the current database settings for snapshots
(enabled, deferred, or disabled), transaction modes available
to the process, and the standby status of the database.
Automatic mode is the default.
o Read_Only
Starts a READ ONLY transaction.
o Write
Starts a READ WRITE transaction.
Use one of the following options with the keyword Isolation_
Level=[level] to specify the transaction isolation level:
o Read_Committed
o Repeatable_Read
o Serializable. Serializable is the default setting.
Refer to the SET TRANSACTION statement in the Oracle Rdb SQL
Reference Manual for a complete description of the transaction
isolation levels.
Specify the wait setting by using one of the following keywords:
o Wait
Waits indefinitely for a locked resource to become available.
Wait is the default behavior.
o Wait=n
The value you supply for n is the transaction lock timeout
interval. When you supply this value, Oracle Rdb waits n
seconds before aborting the wait and the RMU Extract session.
Specifying a wait timeout interval of zero is equivalent to
specifying Nowait.
o Nowait
Does not wait for a locked resource to become available.
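For example, a command of the following form (the timeout value is
illustrative) starts a READ WRITE transaction at isolation level
READ COMMITTED and waits up to 60 seconds for locked resources:
$ RMU/EXTRACT/ITEM=TABLE -
/TRANSACTION_TYPE=(WRITE,ISOLATION=READ_COMMITTED,WAIT=60) -
MF_PERSONNEL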
15.5 – Usage Notes
o To use the RMU Extract command for a database, you must have
the RMU$UNLOAD privilege in the root file access control
list (ACL) for the database or the OpenVMS SYSPRV or BYPASS
privilege.
o For tutorial information on using output from the RMU Extract
command to load or unload a database, refer to the Oracle Rdb
Guide to Database Design and Definition.
o Included in the output from the RMU Extract command is the
SQL SET DEFAULT DATE FORMAT statement. This SQL statement
determines whether columns with the DATE data type or CURRENT_
TIMESTAMP built-in function are interpreted as OpenVMS or
SQL92 format. The RMU Extract command always sets the default
to SQL92. The SQL92 format DATE and CURRENT_TIMESTAMP contain
only the YEAR to DAY fields. The OpenVMS format DATE and
CURRENT_TIMESTAMP contain YEAR to SECOND fields.
If your database was defined with OpenVMS format DATE and
CURRENT_TIMESTAMP, the default SQL SET DEFAULT DATE FORMAT
'SQL92' in the RMU Extract output causes errors to be returned
when you attempt to execute that output. For example, when you
define a trigger:
SQL> CREATE TRIGGER SALARY_HISTORY_CASCADE_UPDATE
cont> AFTER UPDATE OF JOB_CODE ON JOB_HISTORY
cont> (UPDATE SALARY_HISTORY SH
cont> SET SALARY_START = CURRENT_TIMESTAMP
cont> WHERE (SH.EMPLOYEE_ID = JOB_HISTORY.EMPLOYEE_ID)
cont> ) for each row;
%SQL-F-UNSDATASS, Unsupported date/time assignment from <Source>
to SALARY_START
You can avoid these errors by editing the output from the RMU
Extract command. Replace the SET DEFAULT DATE FORMAT 'SQL92'
statement with SET DEFAULT DATE FORMAT 'VMS'. If the problem
occurs in trigger definitions, you can use the CAST function
instead. Specify CAST(CURRENT_TIMESTAMP AS DATE VMS) with each
trigger definition that references CURRENT_TIMESTAMP. (You
cannot use the CAST function within the DEFAULT clause of an
SQL CREATE statement).
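For example, the trigger shown above could be edited as follows (a
sketch) so that the CURRENT_TIMESTAMP value is converted to an
OpenVMS format date:
SQL> CREATE TRIGGER SALARY_HISTORY_CASCADE_UPDATE
cont> AFTER UPDATE OF JOB_CODE ON JOB_HISTORY
cont> (UPDATE SALARY_HISTORY SH
cont> SET SALARY_START = CAST(CURRENT_TIMESTAMP AS DATE VMS)
cont> WHERE (SH.EMPLOYEE_ID = JOB_HISTORY.EMPLOYEE_ID)
cont> ) for each row;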
o The following list contains a description of what the RMU
Extract command generates when it encounters certain RDO
statements:
- RDO and the data dictionary have the concept of validation
clauses at the domain level. The ANSI/ISO SQL92 standard
allows CHECK constraints defined on domains. While the
actions of the ANSI/ISO CHECK constraint do differ from
VALID IF in some respects, the RMU Extract command extracts
the VALID IF clauses as domain CHECK constraints if you
specify the Language=SQL and Option=Full qualifiers.
- RDO multiline descriptions
Because the RDO interface removes blank lines in multiline
descriptions, the description saved in the metadata is not
identical to that entered when the database was defined. The
RMU Extract command therefore cannot completely reconstruct
the original description.
- Some RDO trigger definitions
RDO trigger definitions that contain a trigger action
within a join of two or more tables generate invalid SQL
syntax. For example, the following RDO trigger definition
includes a join with an embedded ERASE statement. When the
RMU Extract command encounters this statement, Oracle RMU
generates the invalid SQL trigger definition shown.
DEFINE TRIGGER EXAMPLE
AFTER ERASE
FOR C1 IN EMPLOYEES
EXECUTE
FOR C2 IN JOB_HISTORY
CROSS C3 IN EMPLOYEES
WITH (((C2.EMPLOYEE_ID = C3.EMPLOYEE_ID)
AND (C2.JOB_END MISSING))
AND (C3.EMPLOYEE_ID = C2.EMPLOYEE_ID))
ERASE C2
END_FOR
FOR EACH RECORD.
CREATE TRIGGER EXAMPLE
AFTER DELETE ON EMPLOYEES
(DELETE FROM JOB_HISTORY C2, EMPLOYEES C3
WHERE (((C2.EMPLOYEE_ID = C3.EMPLOYEE_ID)
AND (C2.JOB_END IS NULL))
AND (C3.EMPLOYEE_ID = C2.EMPLOYEE_ID))
) FOR EACH ROW;
Note that in Oracle Rdb Version 4.1 and higher, including
a trigger action within a join of two or more tables
is invalid RDO syntax. For more information on this RDO
restriction, see the ERASE and MODIFY entries in RDO HELP.
o Oracle CDD/Repository Version 5.3 and higher support table
and column constraint definition and maintenance through CDO.
The RMU Extract command, by default, assumes all constraint
maintenance is with SQL and so follows each CREATE TABLE
with an ALTER TABLE FROM pathname to add the constraints.
However, this is no longer necessary if you are using the
later versions of Oracle CDD/Repository. To disable the output
of the SQL ALTER TABLE statements that add constraints, use
the Option=Cdd_Constraints qualifier.
o If the Transaction_Type qualifier is omitted from the RMU
Extract command line, a READ ONLY transaction is started
against the database. This behavior is provided for backward
compatibility with prior Oracle Rdb releases. If the
Transaction_Type qualifier is specified without a transaction
mode, the default value Automatic is used.
o If the database has snapshots disabled and the Transaction_
Type qualifier was omitted, the transaction is restarted as
READ WRITE ISOLATION LEVEL READ COMMITTED to reduce the number
of rows locked by operations performed with the Option=Volume_
Scan qualifier enabled.
o When Transaction_Type=Write is specified, the RMU Extract
process does not attempt to write to the database tables.
o In previous versions, Oracle Rdb used derived column names
based on position, for example, F1, F2. With release 7.0.6.4
and later, Oracle Rdb promotes the column names from the base
table into the derived column name list. The result is a more
readable representation of the view or trigger definition.
In the following example the column name EMPLOYEE_ID is
propagated through the derived table. In previous releases
this would be named using a generic label F1.
create view SAMPLE_V
(EMPLOYEE_ID,
COUNTS) as
select
C1.EMPLOYEE_ID,
C1.F2
from
(select C2.EMPLOYEE_ID,
(select count(*) from SALARY_HISTORY C3
where (C3.EMPLOYEE_ID = C2.EMPLOYEE_ID))
from JOB_HISTORY C2) as C1 ( EMPLOYEE_ID, F2 )
order by C1.F2 asc;
o The following list shows the equivalent SQL expressions
matched by the RMU Extract process:
- NULLIF (a, b) is equivalent to
CASE
WHEN a = b THEN NULL
ELSE a
END
- NVL (a, ..., b) or COALESCE (a, ..., b) is equivalent to
CASE
WHEN a IS NOT NULL THEN a
...
ELSE b
END
- The simple CASE expression
CASE a
WHEN b THEN v1
WHEN NULL THEN v2
...
ELSE v3
END
is equivalent to
CASE
WHEN a = b THEN v1
WHEN a IS NULL THEN v2
...
ELSE v3
END
The RMU Extract procedure tries to decode the internal
representation to as compact a SQL expression as possible.
o The RMU Extract procedure decodes case expressions into ABS
(Absolute) functions:
ABS(a) is equivalent to:
CASE
WHEN a < 0 THEN -a
ELSE a
END
In addition, similar forms of CASE expression are also
converted to ABS:
CASE
WHEN a <= 0 THEN -a
ELSE a
END
CASE
WHEN a > 0 THEN a
ELSE -a
END
CASE
WHEN a >= 0 THEN a
ELSE -a
END
It is possible that the RMU Extract process will change
existing CASE expressions into this more compact syntax, even
if they were not originally coded as an ABS function call.
o If the Group_Table option is used and the Item qualifier omits
one or more of the Table, Index, or Storage_Map keywords, only
the included items are displayed. For example, to extract just
the indexes for the EMPLOYEES table:
$ RMU/EXTRACT/ITEM=INDEX/OPTION=(GROUP_TABLE,MATCH=EMPLOYEES%)
To extract only the storage map and indexes for a table, use
the following command:
$ RMU/EXTRACT/ITEM=(STORAGE_MAP,INDEX)/OPTION=(GROUP_TABLE, -
_$ MATCH=EMPLOYEES%)
o If the name of the LIST storage map is not known, it can be
extracted using the following generic command:
$ RMU/EXTRACT/ITEM=STORAGE_MAP/OPTION=(GROUP_TABLE, -
_$ MATCH=RDB$SEGMENTED_STRING%)
15.6 – Examples
Example 1
The following command extracts these database items:
COLLATING_SEQUENCES, DOMAINS, TABLES, INDEXES, STORAGE_MAPS,
VIEWS, SEQUENCES, and TRIGGERS.
All is the default for the Items qualifier. The All or Noall
keyword can be used in conjunction with other items to select
specific output.
For example, the Items=(All,Nodatabase) qualifier selects all
metadata items except the physical database characteristics.
$ RMU/EXTRACT/ITEM=(ALL, NODATABASE) MF_PERSONNEL
Example 2
The following command generates a DCL command procedure
containing an RMU Load command for each table in the database:
$ RMU/EXTRACT/ITEMS=LOAD MF_PERSONNEL
Example 3
The following command displays the protection access control list
(ACL) definitions in the mf_personnel.rdb database:
$ RMU/EXTRACT/ITEMS=PROTECTIONS MF_PERSONNEL.RDB
Example 4
The following command generates a DCL command procedure
containing an RMU Unload command for each table in the database:
$ RMU/EXTRACT/ITEMS=UNLOAD MF_PERSONNEL.RDB
Example 5
The following example displays index definitions:
$ RMU/EXTRACT/ITEMS=INDEXES MF_PERSONNEL
Example 6
The following example displays domain and table definitions. Note
that the Noall option could have been omitted.
$ RMU/EXTRACT/ITEMS=(NOALL,DOMAINS,TABLES) MF_PERSONNEL
Example 7
The following example displays definitions for domains (fields)
and tables (relations) that reference data dictionary path names
rather than using the information contained in the Oracle Rdb
system tables. In addition to the database statements, it also
references the data dictionary path name stored in the database,
as shown in the following example:
$ RMU/EXTRACT/LANG=SQL/ITEM=ALL/OPTION=DIC/OUTPUT=CDD_MODEL.LOG/LOG= -
_$ CDD_EXTRACT.LOG CDD_SQL_DB
Example 8
The following example creates a command procedure containing
a script of partial RMU Verify commands or verify command
partitions for the mf_personnel database. This command procedure
was created with the following RMU Extract command:
$ RMU/EXTRACT/ITEM=VERIFY MF_PERSONNEL
Example 9
The following command displays a query outline definition that
was previously added to the mf_personnel database:
$ RMU/EXTRACT/ITEMS=(OUTLINES) MF_PERSONNEL
Example 10
The following command displays the after-image journal (.aij)
file configuration for mf_personnel:
$ RMU/EXTRACT/ITEMS=(ALTER_DATABASE) MF_PERSONNEL
Example 11
The following command displays the function definitions in mf_
personnel for functions previously created using SQL:
$ RMU/EXTRACT/ITEM=FUNCTION MF_PERSONNEL
Example 12
The following command displays the table and column cardinalities
based on sorted indexes:
$ RMU/EXTRACT/OPTION=COLUMN_VOLUME/ITEM=VOLUME MF_PERSONNEL
Example 13
The following example:
o Executes an SQL EXPORT statement to create an interchange
file.
o Executes an RMU Extract command with the Item=Import
qualifier to generate an Import script. In addition, the
Option=Filename_Only qualifier is specified to prevent full
file specifications from appearing in the SQL IMPORT script.
(If full file specifications are used, you cannot test the
script without replacing the database that was exported.)
o Defines a logical name for the interchange file used in the
Import script file.
o Executes the Import script file.
SQL> -- Create interchange file, SAVED_PERS.RBR.
SQL> --
SQL> EXPORT DATABASE FILENAME MF_PERSONNEL.RDB INTO SAVED_PERS.RBR;
SQL> EXIT;
$ !
$ RMU/EXTRACT/ITEM=IMPORT/OPTION=FILENAME_ONLY/OUTPUT=IMPORT_PERS.SQL -
_$ MF_PERSONNEL
$ DEFINE/USER RMUEXTRACT_RBR SAVED_PERS.RBR
$ !
$ SQL$
SQL> @IMPORT_PERS.SQL
SQL> set language ENGLISH;
SQL> set default date format 'SQL92';
SQL> set quoting rules 'SQL92';
SQL> set date format DATE 001, TIME 001;
SQL>
SQL> -- RMU/EXTRACT for Oracle Rdb V7.2-00 2-JAN-2006 15:34:38.63
SQL> --
SQL> -- Physical Database Definition
SQL> --
SQL> -----------------------------------------------------------------
SQL> import database from rmuextract_rbr
cont> filename 'MF_PERSONNEL'
.
.
.
Example 14
The following example shows an extract from the generated script
when the SYS$LANGUAGE and LIB$DT_FORMAT symbols are defined.
The language and format will default to ENGLISH and the standard
OpenVMS format if these logical names are not defined.
$ DEFINE LIB$DT_FORMAT LIB$DATE_FORMAT_002,LIB$TIME_FORMAT_001
$ DEFINE SYS$LANGUAGE french
$ RMU/EXTRACT/OUT=SYS$OUTPUT/ITEM=DOMAIN MF_PERSONNEL/OPT=AUDIT_COMMENT
.
.
.
-- Created on 8 janvier 2006 13:01:31.20
-- Never altered
-- Created by RDB_EXECUTE
--
SQL> CREATE DOMAIN ADDRESS_DATA_1
cont> CHAR (25)
cont> comment on domain ADDRESS_DATA_1 is
cont> ' Street name';
.
.
.
Example 15
If a database has snapshots set to ENABLED DEFERRED, it may
be preferable to start a read/write transaction. In this
environment, using the Transaction_Type=(Read_Only) qualifier
causes a switch to a temporary snapshots ENABLED IMMEDIATE state.
This transition forces the READ ONLY transaction to wait while
all READ WRITE transactions complete, and then all new READ WRITE
transactions performing updates will start writing rows to the
snapshot files for use by possible read only transactions. To
avoid this problem use an RMU Extract command specifying a READ
WRITE ISOLATION LEVEL READ COMMITTED transaction.
$ RMU/EXTRACT/ITEM=TABLE/OUT=TABLES.SQL-
/TRANSACTION_TYPE=(WRITE,ISOLATION=READ)-
SAMPLE.RDB
Example 16
This example specifies the options which were the default
transaction style in prior releases.
$ RMU/EXTRACT/ITEM=TABLE/OUT=TABLES.SQL-
/TRANSACTION_TYPE=(READ_ONLY)-
SAMPLE.RDB
Example 17
If the database currently has snapshots deferred, it may be more
efficient to start a read-write transaction with isolation level
read committed. This allows the transaction to start immediately
(a read-only transaction may stall), and the selected isolation
level keeps row locking to a minimum. This could be explicitly
stated by using the following command:
$ RMU/EXTRACT-
/TRANSACTION_TYPE=(WRITE,ISOLATION=READ_COMMITTED)-
SAMPLE.RDB
Using a transaction type of automatic adapts to different
database settings:
$ RMU/EXTRACT-
/TRANSACTION_TYPE=(AUTOMATIC)-
SAMPLE.RDB
Example 18
This example shows the use of the Item=Workload qualifier to
create a DCL command language script.
$ RMU/EXTRACT/ITEM=WORKLOAD -
SCRATCH/LOG/OPTION=(FILENAME,AUDIT)
$! RMU/EXTRACT for Oracle Rdb V7.2-00 7-JAN-2006 22:00:42.72
$!
$! WORKLOAD Procedure
$!
$!---------------------------------------------------------------------
$ SET VERIFY
$ SET NOON
$
$! Created on 7-JAN-2006 10:12:26.36
$! Last collected on 7-JAN-2006 22:00:34.47
$!
$ RMU/INSERT OPTIMIZER_STATISTICS -
SCRATCH -
/TABLE=(CUSTOMERS) -
/COLUMN_GROUP=(CUSTOMER_NAME) -
/DUPLICITY_FACTOR=(4.0000000) -
/NULL_FACTOR=(0.0000000) /LOG
$
$! Created on 7-JAN-2006 10:12:26.36
$! Last collected on 7-JAN-2006 22:00:34.58
$!
$ RMU/INSERT OPTIMIZER_STATISTICS -
SCRATCH -
/TABLE=(RDB$FIELDS) -
/COLUMN_GROUP=(RDB$FIELD_NAME) -
/DUPLICITY_FACTOR=(1.7794118) -
/NULL_FACTOR=(0.0000000) /LOG
$
.
.
.
$ SET NOVERIFY
$ EXIT
Example 19
The following example shows the use of the Match option to select
a subset of the workload entries based on the wildcard file name.
$ RMU/EXTRACT/ITEM=WORKLOAD -
SCRATCH/LOG/OPTION=(FILENAME,AUDIT,MATCH:RDB$FIELDS%)
$! RMU/EXTRACT for Oracle Rdb V7.2-00 8-JAN-2006 10:53
$!
$! WORKLOAD Procedure
$!
$!------------------------------------------------------------------------
$ SET VERIFY
$ SET NOON
$
$! Created on 7-JAN-2006 15:18:02.30
$! Last collected on 7-JAN-2006 18:25:04.27
$!
$ RMU/INSERT OPTIMIZER_STATISTICS -
SCRATCH -
/TABLE=(RDB$FIELDS) -
/COLUMN_GROUP=(RDB$FIELD_NAME) -
/DUPLICITY_FACTOR=(1.0000000) -
/NULL_FACTOR=(0.0000000) /LOG
$ SET NOVERIFY
$ EXIT
Example 20
The following example shows the use of the Table and Constraint
items with the Defer_Constraint and Match options to extract a
table and its constraints.
$ RMU/EXTRACT/ITEM=(TABLE,CONSTRAINT)-
_$ /OPTION=(FILENAME_ONLY,NOHEADER,-
_$ DEFER_CONSTRAINT,MATCH:EMPLOYEES%) -
_$ MF_PERSONNEL
set verify;
set language ENGLISH;
set default date format 'SQL92';
set quoting rules 'SQL92';
set date format DATE 001, TIME 001;
attach 'filename MF_PERSONNEL';
create table EMPLOYEES (
EMPLOYEE_ID ID_DOM,
LAST_NAME LAST_NAME_DOM,
FIRST_NAME FIRST_NAME_DOM,
MIDDLE_INITIAL MIDDLE_INITIAL_DOM,
ADDRESS_DATA_1 ADDRESS_DATA_1_DOM,
ADDRESS_DATA_2 ADDRESS_DATA_2_DOM,
CITY CITY_DOM,
STATE STATE_DOM,
POSTAL_CODE POSTAL_CODE_DOM,
SEX SEX_DOM,
BIRTHDAY DATE_DOM,
STATUS_CODE STATUS_CODE_DOM);
comment on table EMPLOYEES is
'personal information about each employee';
alter table EMPLOYEES
add constraint EMP_SEX_VALUES
check(EMPLOYEES.SEX in ('M', 'F', '?'))
deferrable
add constraint EMP_STATUS_CODE_VALUES
check(EMPLOYEES.STATUS_CODE in ('0', '1', '2', 'N'))
deferrable
alter column EMPLOYEE_ID
constraint EMPLOYEES_PRIMARY_EMPLOYEE_ID
primary key
deferrable;
commit work;
Example 21
The following example shows the use of the option Group_Table to
extract a table and its indexes:
$ rmu/extract/item=(table,index)-
_$ /option=(group_table,match=employees%,-
_$ filename_only,noheader) db$:mf_personnel
set verify;
set language ENGLISH;
set default date format 'SQL92';
set quoting rules 'SQL92';
set date format DATE 001, TIME 001;
attach 'filename MF_PERSONNEL';
create table EMPLOYEES (
EMPLOYEE_ID ID_DOM
constraint EMPLOYEES_PRIMARY_EMPLOYEE_ID
primary key
deferrable,
LAST_NAME LAST_NAME_DOM,
FIRST_NAME FIRST_NAME_DOM,
MIDDLE_INITIAL MIDDLE_INITIAL_DOM,
ADDRESS_DATA_1 ADDRESS_DATA_1_DOM,
ADDRESS_DATA_2 ADDRESS_DATA_2_DOM,
CITY CITY_DOM,
STATE STATE_DOM,
POSTAL_CODE POSTAL_CODE_DOM,
SEX SEX_DOM,
BIRTHDAY DATE_DOM,
STATUS_CODE STATUS_CODE_DOM);
comment on table EMPLOYEES is
'personal information about each employee';
create unique index EMPLOYEES_HASH
on EMPLOYEES (
EMPLOYEE_ID)
type is HASHED SCATTERED
store
using (EMPLOYEE_ID)
in EMPIDS_LOW
with limit of ('00200')
in EMPIDS_MID
with limit of ('00400')
otherwise in EMPIDS_OVER;
create unique index EMP_EMPLOYEE_ID
on EMPLOYEES (
EMPLOYEE_ID
asc)
type is SORTED
node size 430
disable compression;
create index EMP_LAST_NAME
on EMPLOYEES (
LAST_NAME
asc)
type is SORTED;
commit work;
alter table EMPLOYEES
add constraint EMP_SEX_VALUES
check(EMPLOYEES.SEX in ('M', 'F', '?'))
deferrable
add constraint EMP_STATUS_CODE_VALUES
check(EMPLOYEES.STATUS_CODE in ('0', '1', '2', 'N'))
deferrable;
commit work;
Example 22
The following example shows the output when you use the
Item=Revoke_Entry qualifier:
$ RMU/EXTRACT/ITEM=REVOKE_ENTRY ACCOUNTING_DB
...
-- Protection Deletions
--
--------------------------------------------------------------------------------
revoke entry
on database alias RDB$DBHANDLE
from [RDB,JAIN];
revoke entry
on database alias RDB$DBHANDLE
from [RDB,JONES];
revoke entry
on database alias RDB$DBHANDLE
from PUBLIC;
revoke entry
on table ACCOUNT
from [RDB,JONES];
revoke entry
on table ACCOUNT
from PUBLIC;
revoke entry
on table ACCOUNT_BATCH_PROCESSING
from [RDB,JONES];
revoke entry
on table ACCOUNT_BATCH_PROCESSING
from PUBLIC;
revoke entry
on table BILL
from [RDB,JONES];
revoke entry
on table BILL
from PUBLIC;
...
Example 23
The following example shows sample output for the WORK_STATUS
table of MF_PERSONNEL. The uppercase DCL commands are generated
by RMU Extract.
$ RMU/EXTRACT/ITEM=UNLOAD-
_$ /OPTION=(NOHEADER,FULL,MATCH:WORK_STATUS%) sql$database
$ CREATE WORK_STATUS.COLUMNS
! Columns list for table WORK_STATUS
! in DISK1:[DATABASES]MF_PERSONNEL.RDB
! Created by RMU Extract for Oracle Rdb V7.2-00 on 1-JAN-2006 20:50:25.33
STATUS_CODE
STATUS_NAME
STATUS_TYPE
$ RMU/UNLOAD -
DISK1:[DATABASES]MF_PERSONNEL.RDB -
/FIELDS="@WORK_STATUS.COLUMNS" -
WORK_STATUS -
WORK_STATUS.UNL
$
$ EXIT
$ RMU/EXTRACT/ITEM=LOAD-
_$ /OPTION=(NOHEADER,FULL,MATCH:WORK_STATUS%) sql$database
$ RMU/LOAD -
/TRANSACTION_TYPE=EXCLUSIVE -
/FIELDS="@WORK_STATUS.COLUMNS" -
DISK1:[DATABASES]MF_PERSONNEL.RDB -
WORK_STATUS -
WORK_STATUS.UNL
$
$ EXIT
Example 24
The following example shows how to extract all constraints as an
ALTER TABLE statement.
$ rmu/extract/item=(notab,constr) db$:sql_personnel/opt=(nohead,mat=empl%,defer)
set verify;
set language ENGLISH;
set default date format 'SQL92';
set quoting rules 'SQL92';
set date format DATE 001, TIME 001;
attach 'filename $DISK1:[JONES]SQL_PERSONNEL.RDB';
alter table EMPLOYEES
add constraint EMP_SEX_VALUES
check((EMPLOYEES.SEX in ('M', 'F')
or (EMPLOYEES.SEX is null)))
initially deferred deferrable
add constraint EMP_STATUS_CODE_VALUES
check((EMPLOYEES.STATUS_CODE in ('0', '1', '2')
or (EMPLOYEES.STATUS_CODE is null)))
initially deferred deferrable
alter column EMPLOYEE_ID
constraint EMP_EMPLOYEE_ID_NOT_NULL
not null
initially deferred deferrable;
16 – Insert Optimizer Statistics
Inserts workload records into the RDB$WORKLOAD system relation.
16.1 – Description
When you enable and collect workload statistics, the system
table RDB$WORKLOAD is created and populated. (See Collect_
Optimizer_Statistics for details.) You can update or delete these
statistics using the RMU Collect Optimizer_Statistics command or
the RMU Delete Optimizer_Statistics command, respectively.
You might delete entries in the RDB$WORKLOAD table by accident
or you might delete them to test how effective it is to maintain
those particular workload statistics. If you decide that you
want to maintain those deleted statistics, you can insert them
with the RMU Insert Optimizer_Statistics command. To ensure that
you insert accurate values, always issue an RMU Show Optimizer_
Statistics command with the Log qualifier before you issue an
RMU Delete Optimizer_Statistics command. Refer to your generated
log file for the values you should specify with the RMU Insert
Optimizer_Statistics command.
In addition, you can use the RMU Insert Optimizer_Statistics
command to create workload statistics in a copy of your master
database.
If you issue an RMU Collect Optimizer_Statistics command after
having issued an RMU Insert Optimizer_Statistics command,
statistics for the specified column groups are updated.
16.2 – Format
RMU Insert Optimizer_Statistics root-file-spec

Command Qualifiers                        Defaults

/Column_Group=(Column-list)               None - Required Qualifier
/Duplicity_Factor=(floating-number)       /Duplicity_Factor=(1.0)
/[No]Log[=file-spec]                      See description
/Null_Factor=(floating-number)            /Null_Factor=(0.0)
/Tables=(table-list)                      None - Required Qualifier
16.3 – Parameters
16.3.1 – root-file-spec
root-file-spec
Specifies the database into which optimizer statistics are to be
inserted. The default file type is .rdb.
16.4 – Command Qualifiers
16.4.1 – Column Group
Column_Group=(column-list)
Specifies a list of columns that comprise a column group. You
must use the Tables qualifier to specify the table or tables with
which the columns are associated.
The Column_Group=(column-list) qualifier is a required qualifier.
16.4.2 – Duplicity Factor
Duplicity_Factor=(floating-number)
Specifies the value to be inserted in the RDB$DUPLICITY_FACTOR
column in the RDB$WORKLOAD table for the specified column group
and table (or tables). The minimum value is 1.0 and the maximum
value is the cardinality of the specified table. The default is
the Duplicity_Factor=(1.0) qualifier.
16.4.3 – Log
Log
Log=file-spec
Nolog
Specifies how the statistics inserted into the RDB$WORKLOAD
system table are to be logged. Specify the Log qualifier to have
the information displayed to SYS$OUTPUT. Specify the Log=file-
spec qualifier to have the information written to a file. Specify
the Nolog qualifier to prevent display of the information. If you
do not specify any variation of the Log qualifier, the default
is the current setting of the DCL verify switch. (The DCL SET
VERIFY command controls the DCL verify switch.)
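For example, the following command writes the log information to a
file (the database, table, column, and log file names shown here are
illustrative only):

$ RMU/INSERT OPTIMIZER_STATISTICS MF_PERSONNEL.RDB -
_$ /TABLE=(JOB_HISTORY)/COLUMN_GROUP=(EMPLOYEE_ID) -
_$ /LOG=INSERT_STATS.LOG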
16.4.4 – Null Factor
Null_Factor=(floating-number)
Specifies the value to be inserted in the RDB$NULL_FACTOR column
in the RDB$WORKLOAD table for the specified column group and
table (or tables). The minimum value is 0.0 and the maximum value
is 1.0. The default is the Null_Factor=(0.0) qualifier.
16.4.5 – Tables
Table
Tables=(table-list)
Specifies the table or tables for which column group entries are
to be inserted.
If you issue an RMU Collect Optimizer_Statistics command after
you have inserted a workload column group into the RDB$WORKLOAD
system table, those statistics are collected.
The Tables=(table-list) qualifier is a required qualifier.
16.5 – Usage Notes
o To use the RMU Insert Optimizer_Statistics command for a
database, you must have the RMU$ANALYZE privilege in the root
file access control list (ACL) for the database or the OpenVMS
SYSPRV or BYPASS privilege.
o Cardinality statistics are automatically maintained by
Oracle Rdb. Physical storage and workload statistics are only
collected when you issue an RMU Collect Optimizer_Statistics
command. To get information about the usage of physical
storage and workload statistics for a given query, define
the RDMS$DEBUG_FLAGS logical name to be "O". For example:
$ DEFINE RDMS$DEBUG_FLAGS "O"
When you execute a query, if workload and physical statistics
have been used in optimizing the query, you will see a line
such as the following in the command output:
~O: Workload and Physical statistics used
o The Insert Optimizer_Statistics command modifies the
RDB$LAST_ALTERED date of the RDB$WORKLOAD row so that it is
activated for use by the optimizer.
16.6 – Examples
Example 1
The following example:
1. Collects workload statistics for the JOB_HISTORY table using
the RMU Collect Optimizer_Statistics command
2. Deletes the statistics for one of the JOB_HISTORY workload
column groups
3. Inserts the statistics that were just deleted into the
RDB$WORKLOAD system table using the RMU Insert Optimizer_
Statistics command
4. Displays the current data stored in the RDB$WORKLOAD table for
the JOB_HISTORY table using the RMU Show Optimizer_Statistics
command
$ RMU/COLLECT OPTIMIZER_STATISTICS MF_PERSONNEL.RDB -
_$ /TABLE=(JOB_HISTORY)/STATISTICS=(WORKLOAD)/LOG
Start loading tables... at 3-JUL-1996 10:54:04.16
Done loading tables.... at 3-JUL-1996 10:54:04.69
Start collecting workload stats... at 3-JUL-1996 10:54:06.76
Maximum memory required (bytes) = 6810
Done collecting workload stats.... at 3-JUL-1996 10:54:07.64
Start calculating stats... at 3-JUL-1996 10:54:07.84
Done calculating stats.... at 3-JUL-1996 10:54:07.86
Start writing stats... at 3-JUL-1996 10:54:09.34
---------------------------------------------------------------------
Optimizer Statistics collected for table : JOB_HISTORY
Workload Column group : EMPLOYEE_ID
Duplicity factor : 2.7400000
Null factor : 0.0000000
Workload Column group : EMPLOYEE_ID, JOB_CODE, JOB_START,
JOB_END, DEPARTMENT_CODE, SUPERVISOR_ID
Duplicity factor : 1.5930233
Null factor : 0.3649635
Done writing stats.... at 3-JUL-1996 10:54:09.90
$ RMU/DELETE OPTIMIZER_STATISTICS MF_PERSONNEL.RDB -
_$ /TABLE=(JOB_HISTORY)/COLUMN_GROUP=(EMPLOYEE_ID,JOB_CODE, -
_$ JOB_START,JOB_END,DEPARTMENT_CODE,SUPERVISOR_ID)/LOG
Changing RDB$SYSTEM area to READ_WRITE.
Workload column group deleted for JOB_HISTORY : EMPLOYEE_ID,
JOB_CODE, JOB_START, JOB_END, DEPARTMENT_CODE,
SUPERVISOR_ID
$ !
$ RMU/INSERT OPTIMIZER_STATISTICS MF_PERSONNEL.RDB -
_$ /TABLE=(JOB_HISTORY) /COLUMN_GROUP=(EMPLOYEE_ID,JOB_CODE, -
_$ JOB_START,JOB_END,DEPARTMENT_CODE,SUPERVISOR_ID) -
_$ /DUPLICITY_FACTOR=(1.5930233)/NULL_FACTOR=(0.3649635)/LOG
Changing RDB$SYSTEM area to READ_WRITE.
Workload column group inserted for JOB_HISTORY : EMPLOYEE_ID,
JOB_CODE, JOB_START, JOB_END, DEPARTMENT_CODE,
SUPERVISOR_ID
$ !
$ RMU/SHOW OPTIMIZER_STATISTICS MF_PERSONNEL.RDB -
_$ /TABLE=(JOB_HISTORY)/STATISTICS=(WORKLOAD)/LOG
--------------------------------------------------------------------
Optimizer Statistics for table : JOB_HISTORY
Workload Column group : EMPLOYEE_ID
Duplicity factor : 2.7400000
Null factor : 0.0000000
First created time : 3-JUL-1996 10:37:36.43
Last collected time : 3-JUL-1996 10:54:09.62
Workload Column group : EMPLOYEE_ID, JOB_CODE, JOB_START,
JOB_END, DEPARTMENT_CODE, SUPERVISOR_ID
Duplicity factor : 1.5930233
Null factor : 0.3649635
First created time : 3-JUL-1996 10:57:47.65
Last collected time : 3-JUL-1996 10:57:47.65
17 – Librarian
Allows you to list or remove backed up Oracle Rdb databases from
a Librarian utility.
17.1 – Description
The RMU Librarian command allows you to list or remove backed up
Oracle Rdb databases from a Librarian utility that conforms to
the Oracle Media Manager interface.
You cannot perform both the list and remove operations within one
command.
17.2 – Format
RMU/Librarian backup-file-spec

Command Qualifiers                               Defaults

/List[=(Output=file-name),(options,...)]         See Description
/Remove[=[No]Confirm,(options,...)]              See Description
17.3 – Parameters
17.3.1 – backup-file-spec
Identifies the backed up Oracle Rdb database previously stored
in the Librarian utility. Use the same backup file name that
was used in the Oracle RMU Backup command. A default file type
of .RBF is assumed if none is specified. Any device, directory,
or version number specified with the backup file name will be
ignored.
If the Librarian utility supports wild card characters, you
can use them for the backup file name when you use the List
qualifier. Wild card characters cannot be used with the Remove
qualifier.
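For example, if the Librarian utility supports wild card characters,
a command such as the following lists all backup streams whose names
begin with MFP (the backup file name and wild card syntax shown here
are illustrative; the supported wild card characters depend on the
Librarian utility):

$ RMU/LIBRARIAN/LIST MFP*.RBF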
17.4 – Command Qualifiers
17.4.1 – List
List
List=Output=file-name
List=options
Allows you to display a backed up Oracle Rdb database stored
in a Librarian utility. If you use the List qualifier without
the Output option, the output is sent to the default output
device. If you use the Output option, the output is sent to the
specified file. All data streams existing in the Librarian that
were generated for the specified backup name will be listed. The
information listed for each data stream can include:
o The backup stream name based on the backup file.
o Any comment associated with the backup stream name.
o The creation method associated with the backup stream name.
This will always be STREAM.
o The creation date and time when the stream was backed up to
the Librarian.
o Any expiration date and time specified for deletion of the
stream by the Librarian.
o The media sharing mode, which indicates whether the media can be
accessed concurrently.
o The file ordering mode, which indicates whether files on the
media can be accessed in random or sequential order.
o Any volume labels for the media that contain the backup
stream.
Depending on the particular Librarian utility, some of these items
might not be listed.
The List qualifier can accept the following options:
o Trace_File=file-specification
The Librarian application writes trace data to the specified
file.
o Level_Trace=n
Use this option as a debugging tool to specify the level of
trace data written by the Librarian application. You can use a
pre-determined value of 0, 1, or 2, or a higher value defined
by the Librarian application. The pre-determined values are:
- Level 0 traces all error conditions. This is the default.
- Level 1 traces the entry and exit from each Librarian
function.
- Level 2 traces the entry and exit from each Librarian
function, the value of all function parameters, and the
first 32 bytes of each read/write buffer, in hexadecimal.
o Logical_Names=(logical-name=equivalence-value,...)
Use this option to specify a list of process logical names
that the Librarian application can use to specify catalogs
or archives for listing or removing backup files, Librarian
debug logical names, and so on. See the specific Librarian
documentation for the definition of the logical names. The
list of process logical names is defined by Oracle RMU prior
to the start of the list or remove operation.
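For example, the following command lists the backup streams for a
backup file and writes trace data for the list operation to a file
(the backup file name, trace file name, and trace level shown here
are illustrative only):

$ RMU/LIBRARIAN/LIST=(TRACE_FILE=MFP_TRACE.LOG,LEVEL_TRACE=1) -
_$ MFP_BACKUP.RBF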
17.4.2 – Remove
Remove
Remove=Confirm
Remove=Noconfirm
Remove=options
Allows you to delete all data streams existing in the Librarian
that were generated for the specified backup file. This command
should be used with caution. You should be sure that a more
recent backup for the database exists in the Librarian under
another name before you use this command. The Confirm option is
the default. It prompts you to confirm that you want to delete
the backup from the Librarian. If you do not want to be prompted,
use the Noconfirm option. The deletion will be performed with no
confirmation prompt.
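For example, the following command deletes all data streams for the
specified backup file without a confirmation prompt (the backup file
name shown here is illustrative only):

$ RMU/LIBRARIAN/REMOVE=NOCONFIRM MFP_BACKUP.RBF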
The Remove qualifier can accept the following options:
o Trace_File=file-specification
The Librarian application writes trace data to the specified
file.
o Level_Trace=n
Use this option as a debugging tool to specify the level of
trace data written by the Librarian application. You can use a
pre-determined value of 0, 1, or 2, or a higher value defined
by the Librarian application. The pre-determined values are:
- Level 0 traces all error conditions. This is the default.
- Level 1 traces the entry and exit from each Librarian
function.
- Level 2 traces the entry and exit from each Librarian
function, the value of all function parameters, and the
first 32 bytes of each read/write buffer, in hexadecimal.
o Logical_Names=(logical-name=equivalence-value,...)
You can use this option to specify a list of process logical
names that the Librarian application can use to specify
catalogs or archives for listing or removing backup files,
Librarian debug logical names, and so on. See the specific
Librarian documentation for the definition of logical names.
The list of process logical names is defined by Oracle RMU
prior to the start of the list or remove operation.
18 – Load
There are two RMU Load commands, as follows:
o An RMU Load command without the Plan qualifier allows you to
load data into the database you specify as a parameter to the
Load command.
o An RMU Load command with the Plan qualifier allows you to
execute a plan file you specify as a parameter to the Load
command.
18.1 – Database
Loads data into the tables of the database.
You can use the RMU Load command to:
o Perform the initial load of an Oracle Rdb database.
o Reload a table after performing a restructuring operation.
o Load an archival database.
o Move data from one database to another.
o Load security audit records from an OpenVMS security audit
table into the database being audited, or into a different
database than the one being audited.
o Load additional rows into an existing table. (However, note
that it cannot be used to modify existing rows.)
o Import data into a database from an application that generates
RMS files.
You can load data using either of the following two methods:
o A single-process method
This was the only method available prior to Oracle Rdb V7.0.
The single-process method uses one process to both read the
input file and load the target table.
o A multiprocess method, also called a parallel load
The parallel load method, which you specify with the Parallel
qualifier, enables Oracle RMU to use your process to read
the input file and use one or more executors (subprocesses
or detached slave processes, depending on additional factors)
to load the data into the target table. This results in
concurrent read and write operations, and in many cases,
substantially improves the performance of the load operation.
By default, Oracle RMU sets up a parallel load operation as
follows:
o Your process serves as the load operation execution manager.
o Each storage area (partition) in the table being loaded is
assigned an executor.
o Each executor is assigned four communications buffers.
(You can override this default with the Buffer_Count option to
the Parallel qualifier.)
o Each communications buffer holds the number of rows defined by
the Row_Count qualifier.
Once the executors and communications buffers are set up, the
parallel load operation processes the input file as follows:
1. Your process begins reading the input file and determines the
target storage area for each row in the input file.
2. Your process places each row in the communications buffer for
the executor assigned to the data's target storage area.
3. When an executor's first communications buffer becomes full,
it begins loading the data into the target storage area.
4. If your process has another portion of data ready for a given
executor before that executor has completed loading its first
buffer of data, your process places the next portion of data
in the second communications buffer for that executor.
5. Each executor, concurrent with each of the other executors,
loads the data from its buffers.
6. Your process continues reading, sorting, and assigning data to
each executor (by placing it in that executor's communication
buffer) until all the data from the input file has been
sorted, assigned, and loaded.
The Row_Count qualifier and the Parallel qualifier (which provides
the Executor_Count and Buffer_Count options) let you fine-tune the
parallel load operation.
See the Oracle Rdb Guide to Database Design and Definition for
tips on optimizing the performance of the load operation.
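For example, a parallel load operation might be started as follows;
the executor count, buffer count, and row count shown here are
illustrative values, not tuning recommendations:

$ RMU/LOAD/PARALLEL=(EXECUTOR_COUNT=4,BUFFER_COUNT=6)/ROW_COUNT=500 -
_$ MF_PERSONNEL EMPLOYEES employees.unl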
18.1.1 – Description
The RMU Load command accepts the following five types of data
files, all of which, except the security audit journal, have the
file extension .unl:
o Text data file
o Delimited text data file
o Binary data file
o Specially structured file
o OpenVMS security audit journal file
With the exception of the specially structured file and the
security audit journal file, you must provide a record definition
file (.rrd) on the RMU Load command line to load these data
files. The record definition file provides Oracle RMU with a
description of (metadata for) the data you are loading.
The following list describes the additional requirements for
loading each of these types of files:
o Text data file
To load a text data file (.unl), you must specify the Record_
Definition qualifier with the Format=Text option.
The following command loads text data (employees.unl) into
the EMPLOYEES table of the mf_personnel database. The
employees.rrd file provides the record definition for the
data in employees.unl.
$ RMU/LOAD/RECORD_DEFINITION=(FILE=employees.rrd, FORMAT=TEXT) -
_$ mf_personnel EMPLOYEES employees.unl
You can generate an appropriate .rrd file for the preceding
example by issuing the following command:
$ RMU/UNLOAD/RECORD_DEFINITION=(FILE=employees.rrd, FORMAT=TEXT) -
_$ mf_personnel EMPLOYEES unload.unl
o Delimited text data files
To load delimited text data files (.unl) you must
specify the Record_Definition qualifier with the
Format=Delimited_Text option.
The following command loads delimited text data
(employees.unl) into the EMPLOYEES table of the mf_personnel
database. The employees.rrd file describes the format of
employees.unl.
$ RMU/LOAD/RECORD_DEFINITION=(FILE=employees.rrd, -
_$ FORMAT=DELIMITED_TEXT, TERMINATOR="#") -
_$ mf_personnel EMPLOYEES employees.unl
You can generate an appropriate .rrd file for the preceding
example by issuing the following command:
$ RMU/UNLOAD/RECORD_DEFINITION=(FILE=employees.rrd, -
_$ FORMAT=DELIMITED_TEXT) mf_personnel EMPLOYEES unload.unl
o Binary data files
To load binary data files, you must ensure that the records
you load match the record definition in both size and data
type. The records must all have the same length and the data
in each record must fill the entire record. If the last field
is character data and the information is shorter than the
field length, the remainder of the field must be filled with
spaces. You cannot load a field that contains data stored in
packed decimal format.
The following command loads binary data (employees.unl)
into the EMPLOYEES table of the mf_personnel database. The
employees.rrd file describes the format of employees.unl.
$ RMU/LOAD/RECORD_DEFINITION=(FILE=employees.rrd) mf_personnel -
_$ EMPLOYEES employees.unl
You can generate an appropriate .rrd file for the preceding
example by issuing the following command:
$ RMU/UNLOAD/RECORD_DEFINITION=(FILE=employees.rrd) mf_personnel -
_$ EMPLOYEES unload.unl
o Specially structured binary files that include both data and
metadata.
To load the specially structured binary files (created by the
RMU Unload command without the Record_Definition qualifier)
you must specify the file (.unl) created by the RMU Unload
command.
The following command loads the binary data contained in
the employees.unl file into the EMPLOYEES table of the mf_
personnel database. The record definition information is
contained within the binary .unl file.
$ RMU/LOAD MF_PERSONNEL EMPLOYEES employees.unl
This specially structured employees.unl file is created with
the following RMU Unload command:
$ RMU/UNLOAD MF_PERSONNEL EMPLOYEES employees.unl
o Security audit journal files
To load the records from a security audit journal file
maintained by the OpenVMS operating system, you must decide
whether to load records into the same database for which
security audit journal records are being recorded or to load
them into a separate database. In either case you do not
need to specify a record definition file; use of the Audit
qualifier indicates to Oracle RMU that the record definition
is that of the security audit journal file.
The following command loads the records from the security
audit journal file (with a logical name of SECURITY_AUDIT) for
the mf_personnel database into the AUDIT_TABLE table of the
mf_personnel database:
$ RMU/LOAD/AUDIT MF_PERSONNEL.RDB AUDIT_TABLE -
_$ SECURITY_AUDIT
This example loads the records from the security audit journal
file (with a logical name of SECURITY_AUDIT) for the mf_
personnel database into the AUDIT_TABLE table of the audit
database:
$ RMU/LOAD/AUDIT=DATABASE_FILE=MF_PERSONNEL.RDB AUDIT.RDB -
_$ AUDIT_TABLE SECURITY_AUDIT
See the Usage Notes for more detailed information on loading
security audit journal records and the file name of the
security audit journal.
In all cases where you specify a record definition file (.rrd),
the record definition file and the database definition of the
table being loaded must match in the number of specified fields
and the data type of each field. If the data you want to load
has more fields than the database table definition specifies,
you can still load the data, but you must use the FILLER keyword
with the field definition in your .rrd file to represent the
additional field. See Example 15 in the Examples help entry under
this command.
By default, the table specified in the RMU Load command is
reserved for PROTECTED WRITE.
Data Type Conversions Performed by Oracle Rdb shows the data type
conversions that can occur while you are performing a load or
unload operation.
Table 11 Data Type Conversions Performed by Oracle Rdb

Original Data Type   New Data Type

TINYINT              INTEGER, QUADWORD, SMALLINT, FLOAT,
                     DOUBLE PRECISION, VARCHAR, CHAR
SMALLINT             INTEGER, QUADWORD, FLOAT, DOUBLE PRECISION,
                     VARCHAR, CHAR
INTEGER              SMALLINT, QUADWORD, FLOAT, DOUBLE PRECISION,
                     VARCHAR, CHAR
QUADWORD             SMALLINT, INTEGER, FLOAT, DOUBLE PRECISION,
                     VARCHAR, CHAR
FLOAT                DOUBLE PRECISION, CHAR, and VARCHAR
DOUBLE PRECISION     FLOAT, CHAR, and VARCHAR
DATE                 CHAR or VARCHAR
TIME                 CHAR or VARCHAR
TIMESTAMP            CHAR or VARCHAR
INTERVAL             CHAR or VARCHAR
CHAR                 FLOAT, DOUBLE PRECISION, DATE, TIME, TIMESTAMP,
                     INTERVAL, VARCHAR, SMALLINT, INTEGER, or QUADWORD
See the Oracle Rdb SQL Reference Manual for a description of
these data types.
18.1.2 – Format
RMU/Load root-file-spec table-name input-file-name

Command Qualifiers                               Defaults

/Audit[=Database_File=db-name]                   No audit table loaded
/Buffers=n                                       See description
/Commit_Every=n                                  See description
/[No]Constraints[=Deferred]                      /Constraints
/Corresponding                                   See description
/[No]Defer_Index_Updates                         /Nodefer_Index_Updates
/[No]Dialect=(dialect-opts)                      /Dialect=SQL99
/[No]Execute                                     /Execute
/Fields=(column-name-list)                       See description
/List_Plan=output-file                           See description
/[No]Log_Commits                                 /Nolog_Commits
/[No]Match_Name=table-name                       /Nomatch_Name
/Parallel[=(options)]                            See description
/[No]Place                                       /Noplace
/Record_Definition=                              See description
  ({File|Path}=name[,options])
/[No]Restricted_Access                           /Norestricted_Access
/Row_Count=n                                     See description
/[No]Skip=n                                      /Noskip
/Statistics=(stat-opts)                          See description
/Transaction_Type=Share-mode                     Protected
/[No]Trigger_Relations[=(table_name_list)]       /Trigger_Relations
/[No]Virtual_Fields[=[No]Automatic]              /Novirtual_Fields
18.1.3 – Parameters
18.1.3.1 – root-file-spec
The file specification for the database root file into which the
table will be loaded. The default file extension is .rdb.
18.1.3.2 – table-name
The name of the table to be loaded, or its synonym.
When the Audit qualifier is specified, the table-name parameter
is the name of the table in which you want the security audit
journal records to be loaded. If the table does not exist, the
RMU Load command with the Audit qualifier creates the table and
loads it. If the table does exist, the RMU Load command with the
Audit qualifier loads the table.
18.1.3.3 – input-file-name
The name of the file containing the data to be loaded. The
default file extension is .unl.
When the Audit qualifier is specified, the input-file-name
parameter is the name of the journal containing the audit record
data to be loaded. The default file extension is .AUDIT$JOURNAL.
You can determine the name of the security audit journal by using
the DCL SHOW AUDIT/JOURNAL command.
18.1.4 – Command Qualifiers
18.1.4.1 – Audit
Audit
Audit=Database_File=db-name
Allows you to load a database's security audit records from an
OpenVMS security audit journal into one of the following:
o A table in the database being audited
Specify the Audit qualifier without the Database_File option
to indicate that you want the security audit records to be
loaded into the database specified with the root-file-spec
parameter.
o A table in a different database than the one being audited
Specify the Audit=Database_File=db-name qualifier to indicate
that you want the security audit records for the database
specified with the root-file-spec command parameter to be
loaded into the database specified with the db-name option
parameter.
If you specify the Audit qualifier, you cannot specify the Fields
or Trigger_Relations qualifiers.
In addition, you cannot specify the Audit qualifier with a
parallel load operation. If you attempt to do so, Oracle RMU
issues a warning and performs a single-executor load operation.
18.1.4.2 – Buffers
Buffers=n
Specifies the number of database buffers used for storing data
during the load operation. If no value is specified, the default
value for the database is used. (The default value for the
database is defined by the logical name RDM$BIND_BUFFERS, or
if the logical is not defined, can be determined by using the
RMU Dump command with the Header qualifier. The RDM$BIND_BUFFERS
logical name, if defined, overrides the value displayed with the
RMU Dump command.) Fewer I/O operations are required if you can
store as much data as possible in memory when many indexes or
constraints are defined on the target table. Therefore, specify
more buffers than allowed by the default value to increase the
speed of the load operation.
See the Oracle Rdb7 Guide to Database Performance and Tuning
for detailed recommendations on setting the number of database
buffers.
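For example, the following command increases the number of database
buffers for the duration of the load operation (the buffer count
shown here is illustrative, not a recommendation):

$ RMU/LOAD/BUFFERS=500 MF_PERSONNEL EMPLOYEES employees.unl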
18.1.4.3 – Commit Every
Commit_Every=n
Specifies the frequency with which Oracle Rdb commits the data
being loaded. For a single-executor load operation, Oracle Rdb
commits the data after every n records that are stored. The
default is to commit only after all records have been stored.
For a parallel load operation, the Commit_Every qualifier
applies separately to each of the executors (processes) used.
For example, if five parallel processes are running, and the
Commit_Every=2 qualifier is specified, Oracle RMU commits data
for each process after it has stored 2 records. This means that
if the Commit_Every=1000 qualifier is specified when you load one
million records with 10 parallel processes, the .ruj files will
store up to 10,000 rows of before-image data.
If you specify the Defer_Index_Updates qualifier and a high value
for the Commit_Every qualifier, memory requirements are high. See
the description of the Defer_Index_Updates qualifier for details.
Commit operations may occur more frequently than you specify
under certain conditions. See the description of the Defer_Index_
Updates qualifier for details.
To determine how frequently you should commit data, decide how
many records you are willing to reload if the original load
operation fails. If you use the Statistics=On_Commit qualifier,
you receive a message indicating the number of records loaded at
each commit operation. Then, if a failure occurs, you know where
to resume loading.
If you specify the Place qualifier and a failure occurs, resume
loading at the point of the previous commit rather than at the
record number of the last successful commit. Because the Place
qualifier restructures the .unl file prior to loading, the record
number on which the load operation failed does not correspond to
the same number in the original .unl file.
18.1.4.4 – Constraints
Constraints
Constraints=Deferred
Noconstraints
Specifies when or if constraints are evaluated for data
being loaded. If you specify the Constraints qualifier,
constraints are evaluated as each record is loaded. If you
specify the Noconstraints qualifier, constraints are not
evaluated at all during the load operation. If you specify the
Constraints=Deferred qualifier, constraints are evaluated after
all data from the input file has been loaded.
The default is the Constraints qualifier.
Oracle Corporation recommends that you accept the default for
most load operations. The Noconstraints and Constraints=Deferred
qualifiers are useful when load performance is your highest
priority, you fully understand the constraints defined for
your database, and you are familiar enough with the input data
to be fairly certain that loading it will not violate
constraints. In such cases, you might use these qualifiers as
follows:
o Constraints=Deferred
This qualifier is particularly useful for improving
performance when you are loading data into a new table.
Oracle Corporation strongly recommends that you issue an
RMU Verify command with the Constraints qualifier when the
load operation has completed. Note, however, that issuing the
RMU Verify command after the load operation has completed
takes about the same amount of time that would have been
spent had you specified the RMU Load command with the
Constraints qualifier. In other words, by specifying the
Constraints=Deferred qualifier, you are only delaying when
the constraint verification will take place.
o Noconstraints
This qualifier is particularly useful when you are performing
a parallel load operation with the Defer_Index_Updates
qualifier. Oracle Corporation strongly recommends that you
issue an RMU Verify command with the Constraints qualifier
when the load operation has completed. Note, however, that
when you issue the RMU Verify command with the Constraints
qualifier, all rows in the table are checked for constraint
violations, not just the rows that are loaded.
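The recommended sequence for either qualifier can be sketched as
follows: defer (or disable) constraint evaluation during the load, then
verify afterward. The database, table, and file names are illustrative
only.

```
$ RMU/LOAD/CONSTRAINTS=DEFERRED MF_PERSONNEL.RDB EMPLOYEES EMPLOYEES.UNL
$ RMU/VERIFY/CONSTRAINTS MF_PERSONNEL.RDB
```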
Consider the following before issuing an RMU Load command with
the Noconstraints or Constraints=Deferred qualifier:
o If a table is populated with data prior to a load operation,
it is less expensive to check constraints on each record
as it is being loaded, than to verify constraints on the
entire table after the set of new records has been loaded.
For example, assume you load 200 new records into a table that
currently holds 2,000 records and one constraint is defined
on the table. If you verify constraints as the records are
being loaded, constraint validation is performed 200 times.
If you wait and verify constraints after the load operation
completes, constraint verification must be performed for 2,200
records.
o If an RMU Verify command reveals that constraint violations
occurred during the load operation, you must track down those
records and either remove them or make other modifications
to the database to restore the data integrity. This can be a
time-consuming process.
Also consider a situation where all of the following are true:
o You perform a parallel load operation
o You specify the Constraints qualifier
o The table into which you are loading data has a constraint
defined on it
o The constraint defined on the table was defined as deferred
o Constraint evaluation fails during the load operation
In a case such as the preceding, you cannot easily determine
which rows were loaded and which were not. Therefore, Oracle
Corporation recommends that if deferred constraints are defined
on a table, then you should also specify the Constraints=Deferred
qualifier in your parallel load command. When you follow this
recommendation, the records that violate the constraint are
stored in the database. When the load operation completes, you
can remove from the database those records that violate the
constraint.
See Example 6 in Verify for an example of the steps to take if
an RMU Verify command reveals that an RMU Load command has stored
data that violates constraints into your database.
18.1.4.5 – Corresponding
Corresponding
Loads fields into a table from the .unl file by matching the
field names in the .rrd file to the column names in the table.
The Corresponding qualifier makes it more convenient to unload,
restructure, and reload a table.
For example, if the columns in the table appear in the order:
EMPLOYEE_ID, LAST_NAME, FIRST_NAME, but the data in your .unl
file appears in the order: EMPLOYEE_ID, FIRST_NAME, LAST_NAME,
and your .rrd file lists the fields in the order: EMPLOYEE_ID,
FIRST_NAME, LAST_NAME, you can use the Corresponding qualifier
to load the data in your .unl file correctly. (You could also use
the Fields qualifier to accomplish the same task, but this can
get tedious if there are numerous fields.)
The .unl file must contain data for each field in the database
into which it is being loaded; if it does not, you should use the
Fields qualifier.
If the Corresponding qualifier is omitted, the RMU Load command
loads the data into database fields by the ordinal position in
which they appear in the .unl file, not by the column names
described in the .rrd file.
The Corresponding qualifier cannot be used with either the
Fields or the Audit qualifier.
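Using the EMPLOYEE_ID, FIRST_NAME, LAST_NAME scenario described above,
the load might be sketched as follows. This is a hypothetical command;
the database, table, and file names are illustrative only.

```
$ RMU/LOAD/CORRESPONDING/RECORD_DEFINITION=FILE=NAMES.RRD -
_$ MF_PERSONNEL.RDB EMPLOYEES NAMES.UNL
```

Fields are matched to columns by name, so the differing column order in
the table does not matter.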
18.1.4.6 – Defer Index Updates
Defer_Index_Updates
Nodefer_Index_Updates
The Defer_Index_Updates qualifier specifies that non-unique
indexes (other than those that define the placement information
for data in a storage area) will not be rebuilt until commit
time.
Use of this qualifier results in less I/O and fewer lock
conflicts than when index builds are not deferred, but results
in a total failure of a load operation if any lock conflicts
are encountered. In such a case, the entire load operation is
rolled back to the previous commit and you must repeat the load
operation. (Record insertion recommences at the beginning of
the input file.) For this reason, you should only use the Defer_
Index_Updates qualifier when all of the following are true:
o You specify the Noconstraints qualifier (or you have dropped
constraints, or no constraints are defined on the table).
o You have dropped triggers from the table (or triggers are not
defined for the table).
o No other users are accessing the table being loaded.
Also be aware that required virtual memory can be quite large
when you defer index updates. Required virtual memory is directly
proportional to the following:
o The length of the Ikeys in the indexes being deferred
o The number of indexes being deferred
o The value for n specified with the Commit_Every qualifier
You can estimate the amount of virtual memory required for the
deferred indexes using the following formula, where:
o n = the value specified with the Commit_Every qualifier
o I = (length of the Ikey + 50)
n * (I * number_deferred_ikeys)
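As a worked example, the following DCL sketch applies the formula to a
hypothetical load with Commit_Every=1000 and two deferred indexes whose
Ikeys are 30 bytes long (so I = 30 + 50 = 80). All values are
illustrative; substitute your own.

```
$ ! Hypothetical values for the virtual memory estimate
$ n = 1000              ! value of the Commit_Every qualifier
$ i = 80                ! Ikey length (30) + 50
$ est = n * (i * 2)     ! two deferred indexes
$ WRITE SYS$OUTPUT "Estimated virtual memory: ''est' bytes"
```

With these values the estimate is 1000 * 80 * 2 = 160,000 bytes.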
The Nodefer_Index_Updates qualifier is the default. When you
specify the Nodefer_Index_Updates qualifier (or accept the
default), both the indexes that define the placement information
for data in a storage area and any other indexes defined on the
table being loaded are rebuilt at verb time.
This can result in a managed deadlock situation when the Parallel
qualifier is specified. The following describes such a scenario:
o Executor_1 locks index node A in exclusive mode
o Executor_2 locks index node B in exclusive mode
o Executor_1 requests a lock on index node B
o Executor_2 requests a lock on index node A
In such a situation, Oracle Rdb resolves the deadlock by
directing one of the executors to commit the data it has already
stored. This resolves the deadlock situation and the load
operation continues.
18.1.4.7 – Dialect
Dialect
Nodialect
The Dialect qualifier controls whether truncation of string
data during the load operation is reported. This loss of data
might be significant. RMU Load defaults to SQL dialect SQL99,
which implicitly checks for and reports truncations during
INSERT operations.
o /NODIALECT, /DIALECT=SQL89 or /DIALECT=NONE will not report
any truncation errors, which is the "old" behavior of Rdb
(prior to July 2008).
o /DIALECT=SQL99 (the default) will enable reporting of
truncation errors. Note that truncation occurs if non-space
characters are discarded during the insert.
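For example, to suppress truncation reporting and restore the pre-July
2008 behavior, the command might be sketched as follows. The database,
table, and file names are illustrative only.

```
$ RMU/LOAD/NODIALECT MF_PERSONNEL.RDB EMPLOYEES EMPLOYEES.UNL
```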
18.1.4.8 – Execute
Execute
Noexecute
The Execute and Noexecute qualifiers are used with the List_Plan
qualifier to specify whether or not the generated plan file is
to be executed. The Noexecute qualifier specifies that the plan
file should be created but should not be executed. Regardless of
whether you use the Noexecute or Execute qualifier (or accept the
default), Oracle RMU performs a validity check on the RMU Load
command you specify.
The validity check determines such things as whether the
specified table is in the specified database, the .rrd file (if
specified) matches the table, and that the number of columns
specified with the Fields qualifier matches the number of
columns in the .unl file. The validity check does not determine
such things as whether your process and global page quotas are
sufficient.
By default, the plan file is executed when an RMU Load command
with the List_Plan qualifier is issued.
18.1.4.9 – Fields
Fields=(column-name-list)
Specifies the column or columns of the table to be loaded into
the database. If you list multiple columns, separate the column
names with a comma, and enclose the list of column names within
parentheses. Also, this qualifier specifies the order of the
columns to be loaded if that order differs from the order defined
for the table. The number and data type of the columns specified
must agree with the number and data type of the columns in the
.unl file. The default is all columns defined for the table in
the order defined.
If you specify an options file in place of a list of columns, and
the options file is empty, the RMU Load command loads all fields.
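For example, a load of only two columns, in an order that differs from
the table definition, might be sketched as follows. The database,
table, and file names are illustrative only.

```
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID,LAST_NAME) -
_$ /RECORD_DEFINITION=FILE=NAMES.RRD -
_$ MF_PERSONNEL.RDB EMPLOYEES NAMES.UNL
```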
18.1.4.10 – List Plan
List_Plan[=output-file]
Specifies that Oracle RMU should generate a plan file and write
it to the specified output file. A plan file is a text file that
contains all the qualifiers specified on the RMU Load command
line. In addition, it specifies the executor names (if you are
performing a parallel load operation), the directory for the .ruj
files, the exception files, and the file created by the Place_
Only qualifier (if specified).
Oracle RMU validates the Oracle RMU command prior to generating
the plan file to ensure that an invalid plan file is not created.
(This is true regardless of whether or not you specify the
Noexecute qualifier.) For example, the following command is
invalid and returns an error message because it specifies
conflicting qualifiers (Corresponding and Fields):
$ RMU/LOAD/RECORD_DEF=FILE=NAMES.RRD/CORRESPONDING -
_$ /FIELDS=(LAST_NAME, FIRST_NAME)/LIST_PLAN=my_plan.plan -
_$ MF_PERSONNEL.RDB EMPLOYEES NAMES.UNL
%RMU-F-CONFLSWIT, conflicting options CORRESPONDING and FIELDS...
See the description of the Execute qualifier for a description
of what items are included when Oracle RMU validates the RMU
Load command. See the Examples section for a complete example and
description of a plan file.
You can use the generated plan as a starting point for building a
load operation that is tuned for your particular configuration.
The output file can be customized and then used with subsequent
load operations as the parameter to the RMU Load Plan command.
See Load Plan for details.
If you want to create only a load plan file and do not want
to execute the load plan when the RMU Load command is issued,
specify the Noexecute qualifier. When you specify the Noexecute
qualifier, you must specify a valid Oracle RMU command.
One way to prototype a plan file prior to creating a potentially
very large .unl file is to specify the List_Plan qualifier and
the Noexecute qualifier along with a valid record definition
(.rrd) file and an empty .unl file on the RMU Load command
line. The .rrd file contains the information Oracle RMU needs
to perform the validation of the plan file; however, because data
is not loaded when you specify the Noexecute qualifier, Oracle
RMU does not attempt to load the .unl file. Note, however, that
you cannot specify the Fields qualifier when using this strategy.
(When you specify the Fields qualifier, Oracle RMU checks to make
sure the number of columns specified with the Fields qualifier
matches the number of columns specified in the .unl file.)
If you do not specify a file extension, the default file
extension for the plan file is .plan.
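The prototyping strategy described above might be sketched as follows.
This is a hypothetical command; the database, table, and file names are
illustrative, and EMPTY.UNL is an empty file.

```
$ RMU/LOAD/LIST_PLAN=MY_PLAN.PLAN/NOEXECUTE -
_$ /RECORD_DEFINITION=FILE=EMPLOYEES.RRD -
_$ MF_PERSONNEL.RDB EMPLOYEES EMPTY.UNL
```

The plan file MY_PLAN.PLAN is created and validated, but no data is
loaded.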
18.1.4.11 – Log Commits
Log_Commits
Nolog_Commits
Causes a message to be printed after each commit operation. In
the case of a parallel load, a message is printed after each
executor commits.
The default is the Nolog_Commits qualifier, where no message is
printed after individual commit operations. The Nolog_Commits
qualifier does, however, cause a commit operation total to be
printed after the operation completes or generates an error.
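For example, a long-running load that reports each commit might be
sketched as follows. The database, table, and file names are
illustrative only.

```
$ RMU/LOAD/LOG_COMMITS/COMMIT_EVERY=10000 -
_$ MF_PERSONNEL.RDB EMPLOYEES EMPLOYEES.UNL
```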
18.1.4.12 – Match Name
Match_Name=table-name
Nomatch_Name
Specifies the table name to be read. Tables exported by SQL into
an interchange file can be individually loaded into a database.
The default behavior of the RMU Load command is to locate and
load the first set of table data in the unload file. If this is
not the table you want, you can use the Match_Name qualifier to
specify a different table name. If the Match_Name qualifier is
used without a table-name, Oracle RMU assumes the name of the
table being loaded is also the name of the table in the source
data file. The default is the Nomatch_Name qualifier.
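For example, to load the JOBS table from an interchange file that
contains data for several tables, the command might be sketched as
follows. The database, table, and file names are illustrative only.

```
$ RMU/LOAD/MATCH_NAME=JOBS MF_PERSONNEL.RDB JOBS PERS_EXPORT.UNL
```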
18.1.4.13 – Parallel
Parallel[=(options)]
Specifies a parallel load operation. A parallel load operation is
especially effective when you have large partitioned tables that
do not contain segmented strings and for which no constraints or
triggers are defined.
If you specify the Parallel qualifier without any options, your
load operation is assigned one executor and four communications
buffers for that executor. A communications buffer is used for
communications between your process and the executors.
If you want to assign additional executors or communications
buffers, or both, use one or both of the following options:
o Buffer_Count=n
Allows you to specify the number of communications buffers
assigned to each executor in a parallel load operation.
Do not confuse this with the Buffers=n qualifier. The
Buffers=n qualifier specifies the number of database buffers
to use during the load operation.
o Executor_Count=n
Allows you to specify the number of worker processes to
be assigned to the load operation. Ideally, the number of
executors should be equal to the number of table partitions.
You should not assign a greater number of executors than
the number of table partitions. If a table is randomly or
vertically partitioned, Oracle RMU creates only one executor,
regardless of the number you specify.
If the user account's MAXDETACH UAF value is greater than 0,
then executors are created as detached processes. If there
is no MAXDETACH value set, then executors are created as
subprocesses. (A MAXDETACH value = 0 equates to unlimited
detached processes.)
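Combining both options, a parallel load might be sketched as follows.
This is a hypothetical command; the database, table, file names, and
counts are illustrative only.

```
$ RMU/LOAD/PARALLEL=(EXECUTOR_COUNT=4,BUFFER_COUNT=8) -
_$ /RECORD_DEFINITION=FILE=EMPLOYEES.RRD -
_$ MF_PERSONNEL.RDB EMPLOYEES EMPLOYEES.UNL
```

Four executors are created, each with eight communications buffers.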
At the end of each load operation, Oracle RMU displays summary
statistics for each executor in the load operation and the main
process. Look at the "Idle time" listed in the statistics at the
end of the job to detect data skew and look at "Early commits" to
detect locking contention.
If some executors have a large amount of idle time, you likely
have data that is skewed. Ideally, data loaded with the Parallel
qualifier should appear in random order within the .unl file.
Data that is already in partition order when you attempt to
perform a parallel load operation results in high idle time for
each executor and thus defeats the advantages of a parallel load
operation.
The summary statistics also list the number of records read from
the input file, the number of data records stored, and the number
of data records rejected. In most cases, the number of data
records rejected plus the number of data records stored equals
the number of records read from the input file. However, under the
following circumstances this equation does not hold:
o The parallel load operation aborts due to a duplicate record
that is not allowed.
o You did not specify an exception file.
Similarly, if a load operation aborts because a record in the
input file is improperly delimited for a delimited text load, the
records rejected plus the records stored do not equal the number
of records read from the input file.
You cannot use a parallel load operation to load list data
(segmented string) records or security audit records. If you
specify a parallel load operation and attempt to load list data
or security audit records, Oracle RMU returns a warning and
performs a single-process (non-parallel) load operation.
18.1.4.14 – Place
Place
Noplace
Sorts records by target page number before they are stored.
The Place qualifier automatically builds an ordered set of
database keys (dbkeys) when loading data and automatically stores
the records in dbkey order, sequentially, page by page. During
a parallel load operation, each worker executor builds its own
ordered set of dbkeys.
The number of work files used by the RMU Load command is
controlled by the RDMS$BIND_SORT_WORKFILES logical name. The
allowable values are 1 through 10 inclusive, with a default value
of 2. The location of these work files can be specified with
device specifications, using the SORTWORKn logical name (where n
is a number from 0 to 9). See the OpenVMS documentation set for
more information on using SORT/MERGE. See the Oracle Rdb7 Guide
to Database Performance and Tuning for more information on using
these Oracle Rdb logical names.
A significant performance improvement occurs when the records
are stored by means of a hashed index. By using the Commit_
Every qualifier with the Place qualifier, you can specify how
many records to load between COMMIT statements. Performance may
actually decrease when records are stored by means of a sorted
index.
The default is the Noplace qualifier.
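For example, combining the Place and Commit_Every qualifiers for a
table stored through a hashed index might be sketched as follows. The
database, table, file names, and commit interval are illustrative only.

```
$ RMU/LOAD/PLACE/COMMIT_EVERY=5000 -
_$ /RECORD_DEFINITION=FILE=EMPLOYEES.RRD -
_$ MF_PERSONNEL.RDB EMPLOYEES EMPLOYEES.UNL
```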
18.1.4.15 – Record Definition
Record_Definition=(File=name[,options])
Record_Definition=(Path=name[,options])
Specifies the RMS record definition or the data dictionary record
definition to be used when data is loaded into the database. Use
the File=name parameter to specify an RMS record definition file;
use the Path=name parameter to specify that the record definition
be extracted from the data dictionary. (If the record definition
in the data dictionary contains variants, Oracle RMU will not be
able to extract it.)
The default file extension for the File=name parameter is
.rrd. The syntax for the .rrd file is similar to that used by
the Common Dictionary Operator (CDO) interface for the data
dictionary. You must define columns before you can define rows.
You can place only one column on a line. You can create a sample
.rrd file by using the RMU Unload command with the Record_
Definition qualifier. You must ensure that the record definition
in the .rrd file and the actual data are consistent with each
other. Oracle Rdb does not check to see that data types in the
record definition and the data match. See the help entry for
RRD_File_Syntax and the Oracle Rdb Guide to Database Design and
Definition for more information about the format of the .rrd
file.
You must specify either the File=name or Path=name parameter.
The options available are:
o Exception_File=exception-file
Allows you to write unloadable records to a single exception
file for a single-process load operation and into multiple
exception files for a parallel load operation. If you generate
a load plan for a parallel load operation, each executor is
assigned its own exception file. In this case, the exception-
file name you specify is given a different file extension for
each executor.
While Oracle RMU is loading data from an RMS file, if an
exception file is specified, then under certain circumstances
an invalid record in the input file does not cause the
RMU Load command to abort. Instead, Oracle RMU creates the
exception file (or files), writes the unloadable record into
this exception file (or files), and continues loading the
remaining records. This process occurs only if the data is
invalid on the actual insert, due to index, constraint, or
trigger errors. If the record has an invalid format in the RMS
file (for example, a missing delimiter), the exception file is
not used, and the load process aborts.
At the end of the load operation, you can process the
exception file (or files) to correct any problems, and then
reload directly from the exception file or files. The load
operation gives an informational message for each of the
unloadable records and also gives a summary of the number
of records stored and the number of records rejected.
All records that could not be loaded will be written into the
file or files as specified with the argument to the Exception_
File option. The default file extension for the exception
file is .unl for single-process loads; for parallel loads
the default extension is EXC_n, where n corresponds to the
executor number assigned by Oracle RMU. The exception file or
files are created only if there are unloadable records. If the
Exception_File option is not specified, no exception files are
created, and the load operation aborts at the first occurrence
of an exception.
However, note that if the Defer_Index_Updates qualifier is
specified, and a constraint violation or lock conflict occurs,
the load operation aborts when it attempts to commit the
transaction.
If the Defer_Index_Updates qualifier is not specified, records
that cause a constraint violation are written to the exception
file or files and the load operation continues loading the
remaining records.
o Format=Text
If you specify the Format=Text option, Oracle RMU converts all
data to printable text before loading it.
o If you do not specify the Format option, then Oracle RMU
expects to load a fixed-length binary flat file. The data
type of the fields must be specified in the .rrd file.
o Format=(Delimited_Text [,delimiter-options])
If you specify the Format=Delimited_Text option, the .rrd file
contains only text fields and specifies the maximum length of
the columns in the file containing delimited ASCII text. The
column values that are longer than those specified in the .rrd
file are truncated.
Note that DATE VMS types must be specified in the collatable
time format, which is yyyymmddhhmmsscc. For example, March 20,
1993 must be specified as: 1993032000000000.
Unless you specify the Format=Delimited_Text option,
delimiters are regarded as part of the data by Oracle RMU.
Example 13 in the Examples help entry under this command
demonstrates the Format=Delimited_Text option. Delimiter
options (and their default values if you do not specify
delimiter options) are as follows. Note that with the
exception of the Prefix and Suffix delimiter options, the
values specified must be unique. The Prefix and Suffix values
can be the same value as each other, but not the same as other
delimiter options. The Null string must also be unique.
- Prefix=string
Specifies a prefix string that begins any column value in
the ASCII input file. If you omit this option, the column
prefix is assumed to consist of a quotation mark (").
- Separator=string
Specifies a string that separates column values of a row.
If you omit this option, the column separator is assumed to
consist of a single comma (,).
- Suffix=string
Specifies a suffix string that ends any column value in
the ASCII input file. If you omit this option, the column
suffix is assumed to consist of a quotation mark (").
- Terminator=string
Specifies the row terminator that completes all the column
values corresponding to a row. If you omit this option, the
row terminator is assumed to be the end of the line.
- Null=string
Specifies a string, which when found in the input record,
is stored as NULL in the database column. This option is
only valid when the Delimited_Text option is specified
also.
The Null option can be specified on the command line as any
one of the following:
* A quoted string
* An empty set of double quotes ("")
* No string
If provided, the string that represents the null character
must be quoted on the Oracle RMU command line, however, it
must not be quoted in the input file. You cannot specify a
blank space or spaces as the null character.
If the final column or columns of a record are to be set
to NULL, you only have to specify data for the column up to
the last non-null column.
See the Examples section for an example of each of these
methods of storing the NULL value.
NOTE
The values of each of the strings specified in the
delimiter options must be enclosed by quotation
marks. Oracle RMU strips these quotation marks while
interpreting the values. If you want to specify a
quotation mark (") as a delimiter, specify a string
of four quotation marks. Oracle RMU interprets four
quotation marks as your request to use one quotation
mark as a delimiter. For example, Suffix = """".
Oracle RMU reads the quotation marks as follows:
o The first quotation mark is stripped from the
string.
o The second and third quotation marks are
interpreted as your request for one quotation mark
(") as a delimiter.
o The fourth quotation mark is stripped. This
results in one quotation mark being used as a
delimiter.
Furthermore, if you want to specify a quotation mark
as part of the delimiter string, you must use two
quotation marks for each quotation mark that you
want to appear in the string. For example, Suffix =
"**""**" specifies the delimiter string **"**.
A delimiter of blank spaces enclosed in quotes is not
valid.
o Place_Only=sorted-placement-file
Allows you to sort the input file and create an output file
sorted in Placement order.
The input file can first be sorted into Placement order by
using the Place_Only option. The resultant file can then be
loaded with the Commit_Every qualifier to gain the required
efficiency. Do not use this option with a parallel load
operation; parallel load operations perform best when the
input file is not sorted.
The Place_Only option cannot be used with either the Commit_
Every qualifier or the Exception_File option (data is not
being stored in the database). However, the Place_Only option
requires the Place qualifier be specified (to sort the data).
The placement-sorted output file has the default file
extension of .unl.
Unless you specify the Null option (with the Format=Delimited_
Text parameter of the Record_Definition qualifier), any null
values stored in the rows of the tables being loaded are not
preserved. Therefore, use the Null option if you want to preserve
null values stored in tables and you are moving data within the
database or between databases.
See the examples in the Examples help entry under the RMU Unload
command for more information.
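Putting the Record_Definition options together, a delimited text load
that uses a vertical bar as the column separator might be sketched as
follows. The database, table, and file names are illustrative only.

```
$ RMU/LOAD/RECORD_DEFINITION=(FILE=EMPLOYEES.RRD, -
_$ FORMAT=DELIMITED_TEXT,SEPARATOR="|") -
_$ MF_PERSONNEL.RDB EMPLOYEES EMPLOYEES.TXT
```

The Prefix, Suffix, and Terminator options keep their defaults, so each
column value is enclosed in quotation marks and each row ends at the
end of the line.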
18.1.4.16 – Rms Record Def
Rms_Record_Def=(File=name[,options])
Rms_Record_Def=(Path=name[,options])
Synonymous with the Record_Definition qualifier. See the
description of the Record_Definition qualifier.
18.1.4.17 – Restricted Access
Restricted_Access
NoRestricted_Access
Allows a single process to load data and enables some
optimizations available only when restricted access is in use.
The default is Norestricted_Access.
If you are loading a table from an RMU Unload file which contains
LIST OF BYTE VARYING data, the Restricted_Access qualifier
reserves the LIST areas for EXCLUSIVE access. This reduces the
virtual memory used by long transactions during a load operation
and also eliminates I/O to the snapshot files for the LIST
storage areas.
The Restricted_Access and Parallel qualifiers are mutually
exclusive and cannot be specified together on the same RMU Load
command line or within a plan file. While RMU Load is running
with the Restricted_Access qualifier specified, no other user can
attach to the database.
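For example, a single-process load of a table containing LIST OF BYTE
VARYING data might be sketched as follows. The database, table, and
file names are illustrative only.

```
$ RMU/LOAD/RESTRICTED_ACCESS MF_PERSONNEL.RDB RESUMES RESUMES.UNL
```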
18.1.4.18 – Row Count
Row_Count=n
Specifies that Oracle Rdb buffer multiple rows between the Oracle
Rdb server and the RMU Load process. The default for n is 500
rows; however, this value should be adjusted based on working
set size and length of loaded data. Increasing the row count may
reduce the CPU cost of the load operation. For remote databases,
this may significantly reduce network traffic for large volumes
of data because the buffered data can be packaged into larger
network packets.
The minimum value you can specify for n is 1. The default row
count is the value specified for the Commit_Every qualifier or
500, whichever is smaller.
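For example, a load into a remote database that increases the row
buffering to reduce network traffic might be sketched as follows. The
node, database, table, file names, and value are illustrative only.

```
$ RMU/LOAD/ROW_COUNT=2000 REMNODE::MF_PERSONNEL.RDB -
_$ EMPLOYEES EMPLOYEES.UNL
```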
18.1.4.19 – Skip
Skip=n
Noskip
Ignores the first n data records in the input file. Use this
qualifier in conjunction with the Commit_Every qualifier
when restarting an aborted load operation. An aborted load
operation displays a message indicating how many records have
been committed. Use this value for n. If you specify a negative
number, you receive an error message. If you specify a number
greater than the number of records in the file, you receive an
error message stating that no records have been stored. If you
do not specify a value, you receive an error message stating that
there is a missing keyword value.
Using the Skip qualifier to restart an aborted parallel load
operation is rarely useful. Because records are sorted by the
controller for each executor involved in the parallel load, there
are usually multiple sections of loaded and unloaded records in
the input file. Unless you are very familiar with the data you
are loading and how it is sorted by the controller, you risk
loading some records twice and not loading other records at all,
if you use the Skip qualifier when restarting an aborted parallel
load operation.
The default is the Noskip qualifier.
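For example, if an aborted single-process load reported that 25,000
records had been committed, the restart might be sketched as follows.
The database, table, file names, and values are illustrative only.

```
$ RMU/LOAD/SKIP=25000/COMMIT_EVERY=5000 -
_$ /RECORD_DEFINITION=FILE=EMPLOYEES.RRD -
_$ MF_PERSONNEL.RDB EMPLOYEES EMPLOYEES.UNL
```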
18.1.4.20 – Statistics
Statistics=(stat-opts)
Specifies that statistics are to be displayed at regular
intervals or each time a transaction commits, or both, so that
you can evaluate the progress of the load operation.
The stat-opts are the options you can specify with this
qualifier, namely: Interval=n, On_Commit, or both. If the
Statistics qualifier is specified, you must also specify at least
one option.
When the Statistics=(Interval=n) qualifier is specified, Oracle
RMU prints statistics every n seconds. The minimum value for n is
1.
When the Statistics=(On_Commit) qualifier is specified, Oracle
RMU prints statistics each time a transaction is committed.
If you specify both options, Statistics=(Interval=n, On_Commit),
statistics are displayed every n seconds and each time a
transaction commits.
The displayed statistics include:
o Elapsed time
o CPU time
o Buffered I/O
o Direct I/O
o Page faults
o Number of records loaded when the last transaction was
committed
o Number of records loaded so far in the current transaction
o If the Record_Definition=Exception_File option is also
specified, the following statistics are displayed also:
- Number of records rejected when the last transaction was
committed
- Number of records rejected so far in the current
transaction
o If the Parallel qualifier is specified also, the following
statistics are displayed also:
- Number of extra commits performed by executors
Extra commits occur when Oracle RMU directs your process
or the executors to commit a transaction earlier than
usual to avoid a hung load operation. For example, if
one executor is holding, but no longer needs, a lock that
another executor requires, Oracle RMU directs the first
executor to commit its current transaction. Committing a
transaction earlier than usual releases the locks under
contention so that the load operation can proceed.
- The total number of executors
- The number of executors that are initializing, idle,
terminated, sorting, storing, committing, or executing
At any time during the load operation, you can press Ctrl/T to
display the current statistics.
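For example, the following command displays statistics every 10
seconds and at each commit. (This is a sketch; it reuses the mf_
personnel database and the names.rrd and names.unl files shown in
the Examples entry. Substitute your own database and file names.)
$ RMU/LOAD/STATISTICS=(INTERVAL=10, ON_COMMIT) -
_$ /RECORD_DEFINITION=FILE=NAMES.RRD -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL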
18.1.4.21 – Transaction Type
Transaction_Type=share-mode
Specifies the share mode for the load operation. The following
share modes are available:
Batch_Update
Exclusive
Protected
Shared
You must specify a value if you use the Transaction_Type
qualifier. If you do not specify the Transaction_Type qualifier,
the default share mode is Protected.
If you specify a parallel load operation (with the Parallel
qualifier), and constraints are defined on the table you are
loading, Oracle Corporation recommends that you specify the
Shared share mode, or drop the constraints prior to starting a
parallel load operation, or specify the Noconstraints qualifier.
See the Usage Notes for details.
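For example, a parallel load of a table on which constraints are
defined might specify the Shared share mode, as recommended
above. (This is a sketch; the executor count, database, and file
names are illustrative only.)
$ RMU/LOAD/PARALLEL=(EXECUTOR_COUNT=4)/TRANSACTION_TYPE=SHARED -
_$ /RECORD_DEFINITION=FILE=NAMES.RRD -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL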
18.1.4.22 – Trigger Relations
Trigger_Relations[=(table-name-list)]
NoTrigger_Relations
You can use the Trigger_Relations qualifier in three ways:
o Trigger_Relations=(table-name-list)
Specifies the tables to be reserved for update. Using this
qualifier, you can explicitly lock tables that are updated
by triggers in store operations. If you list multiple tables,
separate the table names with a comma, and enclose the list of
table names within parentheses.
o Trigger_Relations
If you omit the list of table names, the tables updated by
triggers are locked automatically as required. This is the
default.
o NoTrigger_Relations
Disables triggers on the target table. This option requires
DROP privilege on the table being loaded. You cannot specify a
list of table names with this option.
If you specify a parallel load operation (with the Parallel
qualifier), and triggers are defined on the table you are
loading, Oracle Corporation recommends that you specify the
Shared share mode or drop the triggers prior to starting a
parallel load operation. See the Usage Notes for details.
The Trigger_Relations qualifier can be used with indirect file
references. See the Indirect-Command-Files help entry for more
information.
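For example, if triggers on the table being loaded update two
other tables, you could reserve those tables for update
explicitly. (This is a sketch; DEPARTMENTS and JOB_HISTORY stand
in for whatever tables your triggers actually update.)
$ RMU/LOAD/TRIGGER_RELATIONS=(DEPARTMENTS, JOB_HISTORY) -
_$ /RECORD_DEFINITION=FILE=NAMES.RRD -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL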
18.1.4.23 – Virtual Fields
Virtual_Fields[=([No]Automatic)]
Novirtual_Fields
The Virtual_Fields qualifier is required to reload any AUTOMATIC
(or IDENTITY) fields with real data.
The Novirtual_Fields qualifier is the default, which is
equivalent to the Virtual_Fields=(Noautomatic) qualifier.
If you specify the Virtual_Fields qualifier without a keyword,
all fields are loaded except COMPUTED BY columns and calculated
VIEW columns.
Use this qualifier when restructuring a table and when you do
not wish the AUTOMATIC INSERT AS or IDENTITY column to recompute
new values. Instead, RMU will reload the saved values from a file
created by RMU/UNLOAD/VIRTUAL_FIELDS=AUTOMATIC.
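For example, an unload and reload pair that carries the saved
IDENTITY values through unchanged might look as follows. (This is
a sketch; ORDERS is a hypothetical table with an IDENTITY
column.)
$ RMU/UNLOAD/VIRTUAL_FIELDS=AUTOMATIC MF_PERSONNEL ORDERS ORDERS.UNL
$ RMU/LOAD/VIRTUAL_FIELDS=AUTOMATIC MF_PERSONNEL ORDERS ORDERS.UNL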
18.1.5 – Usage Notes
o To use the RMU Load command for a database, you must have the
RMU$LOAD privilege in the root file access control list (ACL)
for the database or the OpenVMS SYSPRV or BYPASS privilege.
The appropriate Oracle Rdb privileges for accessing the
database tables involved are also required.
o To use the RMU Load command with the Audit qualifier, you must
have both of the following:
- The RMU$SECURITY privilege in the root file ACL for the
database whose security audit records are being loaded
- The RMU$LOAD privilege in the root file ACL for the
database into which these security audit records are being
loaded
If you do not have both of the privileges described in the
preceding list, you must have the OpenVMS SYSPRV or BYPASS
privilege.
o You can unload a table from a database structured under one
version of Oracle Rdb and load it into the same table of a
database structured under another version of Rdb. For example,
if you unload the EMPLOYEES table from an mf_personnel database
created under Oracle Rdb V6.0, you can load the generated .unl
file into an Oracle Rdb V7.0 database. Likewise, if you unload
the EMPLOYEES table from an mf_personnel database created under
Oracle Rdb V7.0, you can load the generated .unl file into
an Oracle Rdb V6.1 database. This is true even for specially
formatted binary files (created with the RMU Unload command
without the Record_Definition qualifier). The earliest version
into which you can load a .unl file from another version is
Oracle Rdb V6.0.
o The following list provides information on parallel load
operations:
- Specify no more executors (with the Executor_Count option
to the Parallel qualifier) than storage areas defined for
the table you are loading.
- You cannot use a parallel load operation to load list data
(segmented string) records or security audit records. If
you specify a parallel load operation and attempt to load
list data or security audit records, Oracle RMU returns a
warning and performs a single-executor load operation.
- Oracle Corporation recommends that you specify a shared
mode transaction type, or that you specify the
Noconstraints qualifier and drop any triggers, during a
parallel load operation; otherwise, constraints and
triggers defined on the table you are loading can cause
lock conflicts among the parallel load executors.
- If you are using parallel load and hashed indexes, do not
sort the data prior to loading it. Instead, use the Place
qualifier to the RMU Load command to sort the data as it is
loaded. (The Place qualifier is useful for hashed indexes,
not for sorted indexes.)
o The following list provides information on loading security
audit journals:
- Loading security audit journals into a database other than
that which is being audited
When you load the security audit journals recorded for one
database into another database, you specify the database
that is being audited as a parameter to the Audit=Database_
File qualifier, and you specify the database into which
these security audit records should be loaded with the
root-file-spec parameter to the Oracle RMU command.
For instance, the following example loads the security
audit journal records for the mf_personnel database into
the MFP_AUDIT table of the audit_db database. Note that
SECURITY_AUDIT is a logical name that points to the actual
security audit journal file.
$ RMU/LOAD/AUDIT=DATABASE_FILE=MF_PERSONNEL AUDIT_DB -
_$ MFP_AUDIT SECURITY_AUDIT
When you issue the preceding RMU Load command, the audit_
db database must exist. However, the RMU Load command
creates the MFP_AUDIT table in the audit_db database
and appropriately defines the columns for the MFP_AUDIT
table.
In other words, the following SQL statement satisfies the
minimum requirements for the audit_db database to be used
correctly by the preceding RMU Load command:
SQL> CREATE DATABASE FILENAME audit_db.rdb;
Note that there is no field in the audit record loaded by
Oracle RMU to indicate the source database for the records.
Therefore, it is not wise to mix auditing records from
different databases in the same table. Instead, auditing
information for different databases should be loaded into
separate tables.
- Security audit journal file name
The name of the security audit journal file depends on the
version of the operating system software you are running
and on the hardware platform, as follows:
* SYS$MANAGER:SECURITY.AUDIT$JOURNAL for OpenVMS Alpha
V6.1 and later and OpenVMS VAX V6.0 and later
* SYS$MANAGER:SECURITY_AUDIT.AUDIT$JOURNAL for OpenVMS
Alpha prior to V6.1 and OpenVMS VAX V5.5 and earlier.
- Loading security audit journals into the database being
audited
The Oracle Rdb table into which you load the security
audit journal records should be defined with the columns
shown in Columns in a Database Table for Storing Security
Audit Journal Records under the column marked Oracle Rdb
Column Name so that the audit journal records can be loaded
successfully into the table. If the table does not exist,
the RMU Load Audit command creates it with the columns
shown in Columns in a Database Table for Storing Security
Audit Journal Records under the column marked Oracle Rdb
Column Name. You can give the table any valid name.
- Columns in a Database Table for Storing Security Audit
Journal Records lists the column names created by the RMU
Load command with the Audit qualifier.
Table 12 Columns in a Database Table for Storing Security Audit
Journal Records

Oracle Rdb Column Name     SQL Data Type and Length
AUDIT$EVENT                CHAR 16
AUDIT$SYSTEM_NAME          CHAR 15
AUDIT$SYSTEM_ID            CHAR 12
AUDIT$TIME_STAMP           CHAR 48
AUDIT$PROCESS_ID           CHAR 12
AUDIT$USER_NAME            CHAR 12
AUDIT$TSN                  CHAR 25
AUDIT$OBJECT_NAME          CHAR 255
AUDIT$OBJECT_TYPE          CHAR 12
AUDIT$OPERATION            CHAR 32
AUDIT$DESIRED_ACCESS       CHAR 16
AUDIT$SUB_STATUS           CHAR 32
AUDIT$FINAL_STATUS         CHAR 32
AUDIT$RDB_PRIV             CHAR 16
AUDIT$VMS_PRIV             CHAR 16
AUDIT$GRANT_IDENT          CHAR 192
AUDIT$NEW_ACE              CHAR 192
AUDIT$OLD_ACE              CHAR 192
AUDIT$RMU_COMMAND          CHAR 512
o Dates stored in ASCII text format can be converted to the VMS
DATE data type format by the RMU Load command. See Example
7 in the Examples help entry under this command, which
demonstrates this conversion.
o To preserve the NULL indicator in a load or unload operation,
specify the Null option when you use the Record_Definition
qualifier. Using the Record_Definition qualifier without the
Null option causes the RMU Load command to replace all NULL
values with zeros. This can cause unexpected results with
computed-by columns.
o When the RMU Load command is issued for a closed database, the
command executes without other users being able to attach to
the database.
o The RMU Load command recognizes character set information.
When you load a table, the RMU Load command recognizes that
the correct size of a column is based on its character set.
For example, the RMU Load command recognizes that a column
defined as CHAR (10) CHARACTER SET KANJI occupies 20 octets.
o By default, the RMU Load command changes any table or column
names that you specify to uppercase. To preserve lowercase
characters, use delimited identifiers; that is, enclose the
names in quotation marks ("").
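For example, a table created with a lowercase delimited name must
be quoted on the RMU Load command line as well. (This is a
sketch; "Retirees" is a hypothetical delimited table name, and
the file names are illustrative only.)
$ RMU/LOAD/RECORD_DEFINITION=FILE=NAMES.RRD -
_$ MF_PERSONNEL "Retirees" NAMES.UNL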
o If your database uses a character set other than the DEC
Multinational character set (MCS) for table and domain names,
or if you edit a record definition file to use names from such
a character set, the RMU Load command could fail and return
the error shown in the following example:
$ RMU/UNLOAD/RECORD_DEFINITION=FILE=STRINGS MIA -
"TAB_°¡°¢abcd°§ABCD°©°ª" -
STRINGS.UNL
%RMU-I-DATRECUNL, 4 data records unloaded
$ RMU LOAD/RECORD_DEFINITION=FILE=STRINGS MIA -
"TAB_°¡°¢abcd°§ABCD°©°ª" -
STRINGS.UNL
DEFINE FIELD DEC_MCS_CHAR DATATYPE IS TEXT SIZE IS 20.
DEFINE FIELD KANJI_CHAR DATATYPE IS TEXT SIZE IS 10 CHARACTERS -
CHARACTER SET IS KANJI.
DEFINE FIELD HANZI_CHAR DATATYPE IS TEXT SIZE IS 10 CHARACTERS -
CHARACTER SET IS HANZI.
DEFINE FIELD HANYU_CHAR DATATYPE IS TEXT SIZE IS 10 CHARACTERS -
CHARACTER SET IS HANYU.
.
.
.
DEFINE RECORD TAB_°¡°¢abcd°§ABCD°©°ª.
%RMU-F-RECDEFSYN, Syntax error in record definition file
DEFINE RECORD TAB_''°¡°¢ABCD°§ABCD°©°ª.
When this problem occurs, edit the record definition file and
modify the names so that they can be represented with the MCS
character set.
o Oracle RMU does not support the multischema naming convention
and returns an error if you specify one. For example:
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME) -
_$ /RECORD_DEFINITION=(FILE=TEXT_NAMES,EXCEPTION_FILE=FILE.UNL) -
_$ corporate_data ADMINISTRATION.PERSONNEL.EMPLOYEES EMP.UNL
%RDB-E-BAD_DPB_CONTENT, invalid database parameters in the database
parameter block (DPB)
%RMU-I-DATRECSTO, 0 data records stored
%RMU-I-DATRECREJ, 0 data records rejected.
When using a multischema database, you must specify
the SQL stored name for the database object. For
example, to find the stored name that corresponds to the
ADMINISTRATION.PERSONNEL.EMPLOYEES table in the corporate_
data database, issue an SQL SHOW TABLE command.
SQL> SHOW TABLE ADMINISTRATION.PERSONNEL.EMPLOYEES
Information for table ADMINISTRATION.PERSONNEL.EMPLOYEES
Stored name is EMPLOYEES
.
.
.
Then, to load the table, issue the following RMU Load command:
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME) -
_$ /RECORD_DEFINITION=(FILE=TEXT_NAMES,EXCEPTION_FILE=FILE.UNL) -
_$ CORPORATE_DATA EMPLOYEES MY_DATA.UNL
%RMU-I-DATRECSTO, 3 data records stored
%RMU-I-DATRECREJ, 0 data records rejected.
The Fields qualifier can be used with indirect file
references. When you use an indirect file reference in the
field list, the referenced file is written to SYS$OUTPUT if
the DCL SET VERIFY command has been used. See the Indirect-
Command-Files help entry for more information.
o The Transaction_Type=Batch_Update qualifier cannot be used
with multiple executors (Executor_Count greater than 1).
o The RMU Load procedure supports the loading of tables that
reference system domains.
o If you use a synonym to represent a table or a view, the
RMU Load command translates the synonym to the base object
and processes the data as though the base table or view had
been named. This implies that the unload interchange files
(.UNL) or record definition files (.RRD) that contain the
table metadata will name the base table or view and not use
the synonym name. If the metadata is used against a different
database, you may need to use the Match_Name qualifier to
override this name during the RMU load process.
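For example, if EMP_SYN is a synonym for the EMPLOYEES table and
the .rrd file therefore records the base name, a load into a
database where the target table is named differently might
override the recorded name. (This is a sketch; the exact Match_
Name value depends on your metadata, and the database, synonym,
and file names are illustrative only.)
$ RMU/LOAD/MATCH_NAME=EMPLOYEES -
_$ /RECORD_DEFINITION=FILE=EMP.RRD -
_$ OTHER_DB EMP_SYN EMP.UNL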
18.1.6 – Examples
Example 1
This command loads the data from the RMS file, names.unl, into
the newly created RETIREES table of the mf_personnel database.
The record structure of RETIREES is in the file names.rrd. The
names.unl and names.rrd files were created by a previous RMU
Unload command. The unload operation unloaded data from a view
derived from a subset of columns in the EMPLOYEES table.
$ RMU/LOAD/RECORD_DEFINITION=FILE=NAMES.RRD -
_$ MF_PERSONNEL RETIREES NAMES.UNL
Example 2
This command restarts an aborted load operation that was loading
the newly created RETIREES table of the mf_personnel database
from the names.unl file. The columns being loaded are EMPLOYEE_
ID, LAST_NAME, and FIRST_NAME. The original load operation
had committed 25 records. Beginning with the 26th record, the
restarted load operation commits the transaction at every record
until it reaches the original point of failure.
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME) -
_$ /COMMIT_EVERY=1/SKIP=25 MF_PERSONNEL RETIREES NAMES.UNL
Example 3
This example loads a new table, PENSIONS, into the mf_personnel
database by using record definitions located in the data
dictionary.
This example assumes that you have first defined a temporary
view, TEMP_PENSIONS, combining appropriate columns of the
EMPLOYEES and SALARY_HISTORY tables. You must also create a
permanent table, PENSIONS, into which you will load the data.
Unload the TEMP_PENSIONS view by using the RMU Unload command
with the Record_Definition=File=name qualifier to create both
an .rrd file containing the column definitions and a data.unl
file containing the data from the TEMP_PENSIONS view. Load the
new record definitions from the pensions.rrd file into the data
dictionary by using the @ command at the CDO prompt. Then you
can load the data into the PENSIONS table of the mf_personnel
database by using the RMU Load command.
$ RMU/UNLOAD/RECORD_DEFINITION=FILE=PENSIONS.RRD MF_PERSONNEL -
_$ TEMP_PENSIONS DATA.UNL
$ DICTIONARY OPERATOR
Welcome to CDO V7.0
The CDD/Repository V7.0 User Interface
Type HELP for help
CDO> @PENSIONS.RRD
CDO> EXIT
$ RMU/LOAD/RECORD_DEFINITION=PATH=PENSIONS MF_PERSONNEL PENSIONS -
_$ DATA.UNL
Example 4
The following command loads the audit records for the mf_
personnel database from the security audit journal file into
the AUDIT_TABLE table in the mf_personnel database. Note that if
the AUDIT_TABLE table does not exist, the RMU Load command with
the Audit qualifier creates it with the columns shown in Columns
in a Database Table for Storing Security Audit Journal Records.
$ RMU/LOAD/AUDIT MF_PERSONNEL AUDIT_TABLE -
_$ SYS$MANAGER:SECURITY.AUDIT$JOURNAL
%RMU-I-DATRECREAD, 12858 data records read from input file.
%RMU-I-DATRECSTO, 27 data records stored.
Example 5
The following command loads the audit records for the mf_
personnel database from the security audit journal file into
the AUDIT_TABLE table in the audit_db database. Note that the
AUDIT_TABLE table is not created when the database is created.
In this case, the RMU Load command with the Audit=Database_
File qualifier creates it with the columns shown in Columns in
a Database Table for Storing Security Audit Journal Records.
$ RMU/LOAD/AUDIT=DATABASE_FILE=MF_PERSONNEL AUDIT_DB AUDIT_TABLE -
_$ SYS$MANAGER:SECURITY.AUDIT$JOURNAL
Example 6
This example loads a new table, COLLEGES, into the mf_personnel
database by using record definitions located in the data
dictionary. A commit operation occurs after every record is
stored. The Log_Commits qualifier prints a message after each
commit operation.
$ RMU/LOAD/RECORD_DEFINITION=FILE=COLLEGES.RRD /COMMIT_EVERY=1 -
_$ /LOG_COMMIT MF_PERSONNEL COLLEGES RMU.UNL
%RMU-I-DATRECSTO, 1 data records stored
%RMU-I-DATRECSTO, 2 data records stored
%RMU-I-DATRECSTO, 3 data records stored
%RMU-I-DATRECSTO, 4 data records stored
%RMU-I-DATRECSTO, 4 data records stored
$
Example 7
The following example shows how a date stored in the .unl file as
16-character collatable text can be converted to VMS DATE format
when loaded into the database by using the RMU Load command.
(The form of the .unl date is yyyymmddhhmmsscc, whereas the form
of the VMS DATE is dd-mmm-yyyy:hh:mm:ss.cc. In both cases, y is
the year, m is the month, d is the day, h is the hour, m is the
minute, s is the second, and c is hundredths of a second. However
in the .unl format, the month is expressed as an integer, whereas
in the VMS DATE format the month is expressed as a 3-character
string.)
The example assumes that the default SYS$LANGUAGE is ENGLISH.
SQL> --
SQL> -- Show the definition of the TEST table, in which the
SQL> -- COL1 column is the VMS DATE data type:
SQL> --
SQL> SHOW TABLE DATETEST;
Columns for table DATETEST:
Column Name Data Type Domain
----------- --------- ------
COL1 DATE VMS
.
.
.
$ !
$ ! Show the .unl file that will be loaded into the TEST table:
$ !
$ TYPE TEST.UNL
$ !
1991060712351212
$ !
$ ! Note that the .rrd file shows a data type of TEXT of 16
$ ! characters. These 16 characters are the number of characters
$ ! specified for the date in the test.unl file:
$ !
$ TYPE TEST.RRD
DEFINE FIELD COL1 DATATYPE IS text size is 16.
DEFINE RECORD TEST.
COL1 .
END TEST RECORD.
$ !
$ ! Load the data in test.unl into the DATETEST table:
$ !
$ RMU/LOAD/RMS=FILE=TEST.RRD TEST.RDB DATETEST TEST.UNL
%RMU-I-DATRECREAD, 1 data records read from input file.
%RMU-I-DATRECSTO, 1 data records stored.
$ !
$ SQL
SQL> ATTACH 'FILENAME TEST';
SQL> SELECT * FROM DATETEST;
COL1
7-JUN-1991 12:35:12.12
1 row selected
Example 8
The following example shows how a date stored in the .unl file
as 22-character collatable text can be converted to TIMESTAMP
format when loaded into the database by using the RMU Load
command. The correct format for the .unl TIMESTAMP value is yyyy-
mm-dd:hh:mm:ss.cc, where y, m, d, h, m, s, and c represent the same
elements of the date and time format as described in Example 7.
This example also shows the use of an exception file to trap data
that cannot be stored.
$ ! Create a column in the mf_personnel database with a
$ ! TIMESTAMP datatype:
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL.RDB';
SQL> CREATE TABLE NEWTABLE (COL1 TIMESTAMP);
SQL> SHOW TABLE (COLUMN) NEWTABLE;
Information for table NEWTABLE
Columns for table NEWTABLE:
Column Name Data Type Domain
----------- --------- ------
COL1 TIMESTAMP(2)
SQL> COMMIT;
SQL> EXIT
$ !
$ ! Create a .unl file with the data you want to load. Note that
$ ! the second value is a valid TIMESTAMP specification; the
$ ! first value is not.
$ !
$ CREATE TEST.UNL
06-14-1991:12:14:14.14
1991-06-14:12:14:14.14
$ !
$ ! Create an .rrd file that defines the TIMESTAMP field
$ ! as a TEXT field:
$ !
$ CREATE TEST.RRD
DEFINE FIELD COL1 DATATYPE IS TEXT SIZE 22.
DEFINE RECORD NEWTABLE.
COL1.
END NEWTABLE RECORD.
$ !
$ ! Attempt to load the data in the .unl file. Oracle RMU returns an
$ ! error on the first data record because the date was incorrectly
$ ! specified. The first record is written to the exception file,
$ ! BAD.DAT.
$ !
$ RMU/LOAD/RMS=(FILE=TEST.RRD,EXCEPT=BAD.DAT) MF_PERSONNEL.RDB -
_$ NEWTABLE TEST.UNL
%RMU-I-LOADERR, Error loading row 1.
%RDB-E-CONVERT_ERROR, invalid or unsupported data conversion
-COSI-F-IVTIME, invalid date or time
%RMU-I-DATRECREAD, 2 data records read from input file.
%RMU-I-DATRECSTO, 1 data records stored.
%RMU-I-DATRECREJ, 1 data records rejected.
$ !
$ ! Type BAD.DAT to view the incorrect data record
$ !
$ TYPE BAD.DAT
06-14-1991:12:14:14.14
$ !
$ ! Fetch the data record that stored successfully.
$ !
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL.RDB';
SQL> SELECT * FROM NEWTABLE;
COL1
1991-06-14:12:14:14.14
1 row selected
Example 9
Using the RMU Load command, you can load a table in a database by
placing the fields in a different order in the database than they
were in the input file.
The jobs.unl file contains the following:
000001000000000190001Rdb Demonstrator DEMO
The jobs.rrd file contains the following:
DEFINE FIELD J_CODE DATATYPE IS TEXT SIZE IS 4.
DEFINE FIELD WAGE_CL DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD J_TITLE DATATYPE IS TEXT SIZE IS 20.
DEFINE FIELD MIN_SAL DATATYPE IS TEXT SIZE 10.
DEFINE FIELD MAX_SAL DATATYPE IS TEXT SIZE 10.
DEFINE RECORD JOBS.
MIN_SAL.
MAX_SAL.
WAGE_CL.
J_TITLE.
J_CODE.
END JOBS RECORD.
The JOBS table has the following structure:
Columns for table JOBS:
Column Name Data Type Domain
----------- --------- ------
JOB_CODE CHAR(4) JOB_CODE_DOM
WAGE_CLASS CHAR(1) WAGE_CLASS_DOM
JOB_TITLE CHAR(20) JOB_TITLE_DOM
MINIMUM_SALARY INTEGER(2) SALARY_DOM
MAXIMUM_SALARY INTEGER(2) SALARY_DOM
Notice that:
o The ordering of the columns is different for the JOBS table in
the database and in the input RMS file.
o The names in the .rrd file are also different from the names
in the database.
o The data types of the salary fields are different (Oracle Rdb
will do the conversion).
To load the RMS file correctly, you must use the following
command:
$ RMU/LOAD MF_PERSONNEL JOBS JOBS/RMS=FILE=JOBS -
_$ /FIELDS=(MINIMUM_SALARY,MAXIMUM_SALARY,WAGE_CLASS,JOB_TITLE, -
_$ JOB_CODE)
Notice that the Fields qualifier uses the names of the columns in
the JOBS table (not the field names in the .rrd file), but in the
order of the RMS file.
The names in the .rrd file are immaterial. The purpose of the
Fields qualifier is to load the first field in the RMS file into
the MINIMUM_SALARY column of the JOBS table, load the second
field in the RMS file into the MAXIMUM_SALARY column of the JOBS
table, and so forth.
The results:
SQL> SELECT * FROM JOBS WHERE JOB_CODE = 'DEMO';
JOB_CODE WAGE_CLASS JOB_TITLE MINIMUM_SALARY MAXIMUM_SALARY
DEMO 1 Rdb Demonstrator $10,000.00 $19,000.00
Example 10
The following example shows the sequence of steps used to sort
a file into placement order by using the Place qualifier and the
Place_Only option and then to load the file by using the Commit_
Every qualifier:
$ RMU/LOAD/PLACE -
_$ /RECORD_DEFINITION=(FILE=NAMES.RRD,PLACE_ONLY=PLACED_NAMES) -
_$ MF_PERSONNEL EMPLOYEES UNLOADED_NAMES.UNL
$ RMU/LOAD/RECORD_DEFINITION=(FILE=NAMES.RRD) -
_$ /COMMIT_EVERY=30 MF_PERSONNEL -
_$ EMPLOYEES PLACED_NAMES.UNL
%RMU-I-DATRECREAD, 100 data records read from input file.
%RMU-I-DATRECSTO, 100 data records stored.
Example 11
The following example requests that statistics be displayed
at a regular interval of every minute. It loads the data from
the RMS file, names.unl, into the EMPLOYEES table of the mf_
personnel database. The record structure of EMPLOYEES is in the
file names.rrd. The names.rrd file was created by a previous RMU
Unload command that unloaded data from a subset of columns in the
EMPLOYEES table.
$ RMU/LOAD/STATISTICS=(INTERVAL=60) -
_$ /RECORD_DEFINITION=(FILE=NAMES) -
_$ /FIELDS=(EMPLOYEE_ID, LAST_NAME) -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL
Example 12
The following example uses the Exception_File option to the
Record_Definition qualifier to tell Oracle RMU the name of
the file to hold the exception records. Oracle RMU returns
informational messages to alert you to any data records rejected.
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME) -
_$ /RECORD_DEFINITION=(FILE=TEXT_NAMES,EXCEPTION_FILE=FILE.UNL) -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL
%RMU-I-LOADERR, Error loading row 1.
%RDB-E-NO_DUP, index field value already exists; duplicates not
allowed for EMPLOYEES_HASH
%RMU-I-LOADERR, Error loading row 17.
%RDB-E-NO_DUP, index field value already exists; duplicates not
allowed for EMPLOYEES_HASH
%RMU-I-LOADERR, Error loading row 33.
%RDB-E-NO_DUP, index field value already exists; duplicates not
allowed for EMPLOYEES_HASH
%RMU-I-LOADERR, Error loading row 155.
%RDB-E-NO_DUP, index field value already exists; duplicates not
allowed for EMPLOYEES_HASH
%RMU-I-DATRECREAD, 200 data records read from input file.
%RMU-I-DATRECSTO, 196 data records stored.
%RMU-I-DATRECREJ, 4 data records rejected.
Example 13
The following is an example of the format in which you can
provide input data to the RMU Load command when you use the
Format=Delimited_Text option with the Record_Definition
qualifier. This is followed by the RMU Load command you use to
load this data.
"99997","ABUSHAKRA","CAROLINE","S","5 CIRCLE STREET","BOX 506",
"CHELMSFORD", "MA", "02184", "1960061400000000"#
"99996","BRADFORD","LEO","M","4 PLACE STREET","BOX 555",
"NASHUA","NH", "03060", "1949051800000000"#
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, -
_$ MIDDLE_INITIAL, ADDRESS_DATA_1, ADDRESS_DATA_2, -
_$ CITY, STATE, POSTAL_CODE, BIRTHDAY) -
_$ /RECORD_DEFINITION=(FILE= NAMES.RRD, -
_$ FORMAT=DELIMITED_TEXT, -
_$ TERMINATOR="#" ) -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL
%RMU-I-DATRECREAD, 2 data records read from input file.
%RMU-I-DATRECSTO, 2 data records stored.
Example 14
The following is an example of the format in which you must
provide input data to the RMU Load command when you specify the
Format=Text option with the Record_Definition qualifier. This is
followed by the RMU Load command you use to load this data.
09166Watts Leora F
09190Margolis David M
09187McDonald Lois F
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, SEX) -
_$ /RECORD_DEFINITION=(FILE=TEXT_NAMES.RRD, FORMAT=TEXT) -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL
%RMU-I-DATRECREAD, 3 data records read from input file.
%RMU-I-DATRECSTO, 3 data records stored.
18.1.7 – Examples (Cont.)
Example 15
The following example assumes you want to load a data file
into the JOBS table that contains more fields than the table
definition in the mf_personnel database. The example first
attempts to do this by just excluding the extra field from the
list associated with the Fields qualifier. However, this causes
an error to be returned. The example then uses the FILLER keyword
in the .rrd file to tell Oracle RMU not to attempt to load the
additional field. The command executes successfully.
The table definition for the JOBS table is as follows:
Columns for table JOBS:
Column Name Data Type Domain
----------- --------- ------
JOB_CODE CHAR(4) JOB_CODE_DOM
Primary Key constraint JOBS_PRIMARY_JOB_CODE
WAGE_CLASS CHAR(1) WAGE_CLASS_DOM
JOB_TITLE CHAR(20) JOB_TITLE_DOM
MINIMUM_SALARY INTEGER(2) SALARY_DOM
MAXIMUM_SALARY INTEGER(2) SALARY_DOM
The .rrd file for the data you want to load appears as follows
(note that there is no corresponding field to JOB_STATUS in the
mf_personnel database definition for the JOBS table):
DEFINE FIELD JOB_CODE DATATYPE IS TEXT SIZE IS 4.
DEFINE FIELD WAGE_CLASS DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD JOB_TITLE DATATYPE IS TEXT SIZE IS 20.
DEFINE FIELD MINIMUM_SALARY DATATYPE IS TEXT SIZE IS 13.
DEFINE FIELD MAXIMUM_SALARY DATATYPE IS TEXT SIZE IS 13.
DEFINE FIELD JOB_STATUS DATATYPE IS TEXT SIZE IS 4.
DEFINE RECORD JOBS.
JOB_CODE .
WAGE_CLASS .
JOB_TITLE .
MINIMUM_SALARY .
MAXIMUM_SALARY .
JOB_STATUS .
END JOBS RECORD.
The data file you want to load, jobs.unl, appears as follows:
DBAD4Corp Db Administratr55000.00 95000.00 Old
You attempt to load the file in the mf_personnel database
by listing only the fields in the RMU Load command that have
corresponding fields defined in the database:
$ RMU/LOAD MF_PERSONNEL/RMS=(FILE=JOBS.RRD, FORMAT=TEXT) -
_$ /FIELDS=(JOB_CODE, WAGE_CLASS, JOB_TITLE, MINIMUM_SALARY, -
_$ MAXIMUM_SALARY) JOBS JOBS.UNL
%RMU-F-FLDMUSMAT, Specified fields must match in number and datatype
with the unloaded data
%RMU-I-DATRECSTO, 0 data records stored
The workaround for a mismatch between your data and .rrd file
on the one hand and the database definition of the table on the
other is to use the FILLER keyword in your .rrd file, as follows:
DEFINE FIELD JOB_CODE DATATYPE IS TEXT SIZE IS 4.
DEFINE FIELD WAGE_CLASS DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD JOB_TITLE DATATYPE IS TEXT SIZE IS 20.
DEFINE FIELD MINIMUM_SALARY DATATYPE IS TEXT SIZE IS 13.
DEFINE FIELD MAXIMUM_SALARY DATATYPE IS TEXT SIZE IS 13.
DEFINE FIELD JOB_STATUS DATATYPE IS TEXT SIZE IS 4 FILLER. <------
DEFINE RECORD JOBS.
JOB_CODE .
WAGE_CLASS .
JOB_TITLE .
MINIMUM_SALARY .
MAXIMUM_SALARY .
JOB_STATUS .
END JOBS RECORD.
Now that the .rrd file has been modified, attempt to load the
record again:
$ RMU/LOAD MF_PERSONNEL/RMS=(FILE=JOBS.RRD, FORMAT=TEXT) -
_$ /FIELDS=(JOB_CODE, WAGE_CLASS, JOB_TITLE, MINIMUM_SALARY, -
_$ MAXIMUM_SALARY) JOBS JOBS.UNL
%RMU-I-DATRECSTO, 1 data records stored.
Example 16
The following example demonstrates the use of the Null="*" option
of the Record_Definition qualifier to signal to Oracle RMU that
any data that appears as an unquoted asterisk in the .unl file
should have the corresponding column in the database be flagged
as NULL.
The example shows the contents of the .unl file, followed by the
RMU Load command used to load this .unl file, and then the output
from an SQL statement to display the data loaded.
"98888","ABUSHAKRA","CAROLINE",*,"5 CIRCLE STREET","BOX 506",
"CHELMSFORD", "MA", "02184", "1960061400000000"#
"98889","BRADFORD","LEO",*,"4 PLACE STREET","BOX 555", "NASHUA","NH",
"03060", "1949051800000000"#
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, -
_$ MIDDLE_INITIAL, ADDRESS_DATA_1, ADDRESS_DATA_2, -
_$ CITY, STATE, POSTAL_CODE, BIRTHDAY) -
_$ /RECORD_DEFINITION=(FILE= EMPLOYEES.RRD, -
_$ FORMAT=DELIMITED_TEXT, -
_$ TERMINATOR="#", -
_$ NULL="*" ) -
_$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL
%RMU-I-DATRECREAD, 2 data records read from input file.
%RMU-I-DATRECSTO, 2 data records stored.
SQL> ATTACH 'FILENAME MF_PERSONNEL.RDB';
SQL> SELECT * FROM EMPLOYEES WHERE EMPLOYEE_ID > '98000'
cont> AND MIDDLE_INITIAL IS NULL;
EMPLOYEE_ID LAST_NAME FIRST_NAME MIDDLE_INITIAL
ADDRESS_DATA_1 ADDRESS_DATA_2 CITY
STATE POSTAL_CODE SEX BIRTHDAY STATUS_CODE
98888 ABUSHAKRA CAROLINE NULL
5 CIRCLE STREET BOX 506 CHELMSFORD
MA 02184 ? 14-Jun-1960 N
98889 BRADFORD LEO NULL
4 PLACE STREET BOX 555 NASHUA
NH 03060 ? 18-May-1949 N
2 rows selected
Example 17
The following example demonstrates the use of the Null="" option
of the Record_Definition qualifier to signal to Oracle RMU that
any data that is an empty string in the .unl file (as represented
by two commas with no space separating them) should have the
corresponding column in the database be flagged as NULL.
The example shows the contents of the .unl file, followed by the
RMU Load command used to load this .unl file, and then the output
from an SQL statement to display the data loaded.
"90021","ABUSHAKRA","CAROLINE","A","5 CIRCLE STREET",,
"CHELMSFORD", "MA", "02184", "1960061400000000"#
"90015","BRADFORD","LEO","B","4 PLACE STREET",, "NASHUA","NH",
"03030", "1949051800000000"#
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, -
_$ MIDDLE_INITIAL, ADDRESS_DATA_1, ADDRESS_DATA_2, -
_$ CITY, STATE, POSTAL_CODE, BIRTHDAY) -
_$ /RECORD_DEFINITION=(FILE= EMPLOYEES.RRD, -
_$ FORMAT=DELIMITED_TEXT, -
_$ TERMINATOR="#", -
_$ NULL="") -
_$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL
%RMU-I-DATRECREAD, 2 data records read from input file.
%RMU-I-DATRECSTO, 2 data records stored.
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL.RDB';
SQL> SELECT * FROM EMPLOYEES WHERE ADDRESS_DATA_2 IS NULL;
EMPLOYEE_ID LAST_NAME FIRST_NAME MIDDLE_INITIAL
ADDRESS_DATA_1 ADDRESS_DATA_2 CITY
STATE POSTAL_CODE SEX BIRTHDAY STATUS_CODE
90021 ABUSHAKRA CAROLINE A
5 CIRCLE STREET NULL CHELMSFORD
MA 02184 ? 14-Jun-1960 N
90015 BRADFORD LEO B
4 PLACE STREET NULL NASHUA
NH 03030 ? 18-May-1949 N
2 rows selected
Example 18
The following example is the same as Example 17 except that it
shows the use of the default value of the Null option of the
Record_Definition qualifier to signal to Oracle RMU that any
field that is an empty string in the .unl file (represented by
two commas with nothing between them) should cause the
corresponding column in the database to be flagged as NULL.
The example shows the contents of the .unl file, followed by the
RMU Load command used to load this .unl file, and then the output
from an SQL statement to display the data loaded.
"90022","ABUSHAKRA","CAROLINE","A","5 CIRCLE STREET",,
"CHELMSFORD", "MA", "02184", "1960061400000000"#
"90014","BRADFORD","LEO","B","4 PLACE STREET",, "NASHUA","NH",
"03030", "1949051800000000"#
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, -
_$ MIDDLE_INITIAL, ADDRESS_DATA_1, ADDRESS_DATA_2, -
_$ CITY, STATE, POSTAL_CODE, BIRTHDAY) -
_$ /RECORD_DEFINITION=(FILE= EMPLOYEES.RRD, -
_$ FORMAT=DELIMITED_TEXT, -
_$ TERMINATOR="#", -
_$ NULL) -
_$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL
%RMU-I-DATRECREAD, 2 data records read from input file.
%RMU-I-DATRECSTO, 2 data records stored.
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL.RDB';
SQL> SELECT * FROM EMPLOYEES WHERE (EMPLOYEE_ID = '90022' OR
cont> EMPLOYEE_ID = '90014') AND ADDRESS_DATA_2 IS NULL;
EMPLOYEE_ID LAST_NAME FIRST_NAME MIDDLE_INITIAL
ADDRESS_DATA_1 ADDRESS_DATA_2 CITY
STATE POSTAL_CODE SEX BIRTHDAY STATUS_CODE
90014 BRADFORD LEO B
4 PLACE STREET NULL NASHUA
NH 03030 ? 18-May-1949 N
90022 ABUSHAKRA CAROLINE A
5 CIRCLE STREET NULL CHELMSFORD
MA 02184 ? 14-Jun-1960 N
2 rows selected
Example 19
The following example demonstrates the use of the Null option of
the Record_Definition qualifier to signal to Oracle RMU that any
field that is an empty string in the .unl file (represented by
two commas with nothing between them) should cause the
corresponding column in the database to be flagged as NULL. In
addition, any record that contains data for only the first
column or columns has the remaining columns set to NULL.
The example shows the contents of the .unl file, followed by the
RMU Load command used to load this .unl file, and then the output
from an SQL statement to display the data loaded.
"90026","ABUSHAKRA","CAROLINE","A","5 CIRCLE STREET","BOX 783",
"CHELMSFORD","MA", "02184", "1960061400000000"
"90011","BRADFORD","LEO",,,, "NASHUA","NH","03030","1949051800000000"
"90010"
"90009",,,,,,,,,"1966061600000000"
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, -
_$ MIDDLE_INITIAL, ADDRESS_DATA_1, ADDRESS_DATA_2, -
_$ CITY, STATE, POSTAL_CODE, BIRTHDAY) -
_$ /RECORD_DEFINITION=(FILE= EMPLOYEES.RRD, -
_$ FORMAT=DELIMITED_TEXT, -
_$ NULL) -
_$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL
%RMU-I-DATRECREAD, 5 data records read from input file.
%RMU-I-DATRECSTO, 5 data records stored.
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL.RDB';
SQL> SELECT * FROM EMPLOYEES WHERE EMPLOYEE_ID ='90026' OR
cont> EMPLOYEE_ID BETWEEN '90009' AND '90011';
EMPLOYEE_ID LAST_NAME FIRST_NAME MIDDLE_INITIAL
ADDRESS_DATA_1 ADDRESS_DATA_2 CITY
STATE POSTAL_CODE SEX BIRTHDAY STATUS_CODE
90009 NULL NULL NULL
NULL NULL NULL
NULL NULL ? 16-Jun-1966 N
90010 NULL NULL NULL
NULL NULL NULL
NULL NULL ? NULL N
90011 BRADFORD LEO NULL
NULL NULL NASHUA
NH 03030 ? 18-May-1949 N
90026 ABUSHAKRA CAROLINE A
5 CIRCLE STREET BOX 783 CHELMSFORD
MA NULL ? 14-Jun-1960 N
4 rows selected
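The Null-handling rules that Examples 17 through 19 demonstrate can
be sketched in Python (a rough illustrative model, not RMU code): an
unquoted field equal to the null token becomes NULL, and trailing
columns missing from the end of a record are also set to NULL.

```python
import csv
import io

def parse_delimited(record, null_token=""):
    # Split one DELIMITED_TEXT record on "," with '"' as the quote
    # character, then map any field equal to the null token (here the
    # empty string between two commas) to None.
    fields = next(csv.reader(io.StringIO(record)))
    return [None if f == null_token else f for f in fields]

def pad_record(fields, column_count):
    # Columns missing from the end of a record are set to NULL.
    return fields + [None] * (column_count - len(fields))

# The second record from Example 19: middle fields are empty and
# trailing columns are absent.
row = pad_record(parse_delimited('"90011","BRADFORD","LEO",,,,"NASHUA"'), 10)
```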
Example 20
The following example demonstrates a parallel load operation.
In this example, three executors are specified because there are
three storage areas in the JOB_HISTORY table of the mf_personnel
database. The Defer_Index_Updates qualifier is used because there
are no constraints or triggers defined on the JOB_HISTORY table,
and it is known that no other database activity will occur when
this command is executed.
In addition, a plan file is generated to capture the
specification of this load operation. See the next example for
a description of the plan file.
Note that the pid shown in the output from this command is the
process identifier of each executor.
$ RMU/LOAD/PARALLEL=(EXEC=3)/DEFER_INDEX_UPDATES mf_personnel.rdb -
_$ /RECORD_DEFINITION=(FILE=JOB_HIST,FORMAT=DELIMITED_TEXT, -
_$ EXCEPTION_FILE=DISK1:[ERRORS]JOB_HIST.EXC) -
_$ /STATISTICS=(INTERVAL=30)/LIST_PLAN=JOB_HISTORY.PLAN -
_$ JOB_HISTORY JOB_HIST.UNL
%RMU-I-EXECUTORMAP, Executor EXECUTOR_1 (pid: 2941941B) will load
storage area EMPIDS_LOW.
%RMU-I-EXECUTORMAP, Executor EXECUTOR_2 (pid: 2941F01D) will load
storage area EMPIDS_MID.
%RMU-I-EXECUTORMAP, Executor EXECUTOR_3 (pid: 2941C81F) will load
storage area EMPIDS_OVER.
--------------------------------------------------------------------------
ELAPSED: 0 00:00:30.05 CPU: 0:00:01.64 BUFIO: 59 DIRIO: 219 FAULTS: 2670
1640 data records read from input file.
1330 records loaded before last commit.
220 records loaded in current transaction.
0 records rejected before last commit.
0 records rejected in current transaction.
26 early commits by executors.
3 executors: 0 Initializing; 0 Idle; 0 Terminated
0 Sorting; 2 Storing; 1 Committing; 0 Executing
--------------------------------------------------------------------------
.
.
.
--------------------------------------------------------------------------
ELAPSED: 0 00:02:30.12 CPU: 0:00:02.94 BUFIO: 103 DIRIO: 227 FAULTS: 2671
8070 data records read from input file.
7800 records loaded before last commit.
210 records loaded in current transaction.
0 records rejected before last commit.
0 records rejected in current transaction.
139 early commits by executors.
3 executors: 0 Initializing; 0 Idle; 0 Terminated
0 Sorting; 1 Storing; 2 Committing; 0 Executing
---------------------------------------------------------------------------
%RMU-I-EXECSTAT0, Statistics for EXECUTOR_1:
%RMU-I-EXECSTAT1, Elapsed time: 00:02:45.84 CPU time: 12.95
%RMU-I-EXECSTAT2, Storing time: 00:00:45.99 Rows stored: 2440
%RMU-I-EXECSTAT3, Commit time: 00:01:33.17 Direct I/O: 6623
%RMU-I-EXECSTAT4, Idle time: 00:00:22.34 Early commits: 47
%RMU-I-EXECSTAT0, Statistics for EXECUTOR_2:
%RMU-I-EXECSTAT1, Elapsed time: 00:02:48.42 CPU time: 18.10
%RMU-I-EXECSTAT2, Storing time: 00:01:24.98 Rows stored: 4319
%RMU-I-EXECSTAT3, Commit time: 00:01:18.13 Direct I/O: 9621
%RMU-I-EXECSTAT4, Idle time: 00:00:01.03 Early commits: 29
%RMU-I-EXECSTAT0, Statistics for EXECUTOR_3:
%RMU-I-EXECSTAT1, Elapsed time: 00:02:46.50 CPU time: 9.78
%RMU-I-EXECSTAT2, Storing time: 00:00:11.12 Rows stored: 2293
%RMU-I-EXECSTAT3, Commit time: 00:02:26.67 Direct I/O: 3101
%RMU-I-EXECSTAT4, Idle time: 00:00:04.14 Early commits: 77
%RMU-I-EXECSTAT5, Main process idle time: 00:02:41.06
%RMU-I-DATRECREAD, 9052 data records read from input file.
%RMU-I-DATRECSTO, 9052 data records stored.
%RMU-I-DATRECREJ, 0 data records rejected.
Example 21
The following command is the same as in the previous example,
except the Noexecute qualifier is specified. Because this
qualifier is specified, the load operation is not performed.
However, the load plan file is created and verified.
$ RMU/LOAD/PARALLEL=(EXEC=3)/DEFER_INDEX_UPDATES/NOEXECUTE -
_$ mf_personnel.rdb -
_$ /RECORD_DEFINITION=(FILE=JOB_HIST,FORMAT=DELIMITED_TEXT, -
_$ EXCEPTION_FILE=DISK1:[ERRORS]JOB_HIST.EXC) -
_$ /STATISTICS=(INTERVAL=30)/LIST_PLAN=JOB_HISTORY.PLAN -
_$ JOB_HISTORY JOB_HIST.UNL
Example 22
The following display shows the contents of the plan file,
JOB_HISTORY.PLAN, created in the preceding example. The following
callouts are keyed to this display:
1 The Plan Parameters include all the parameters specified on
the RMU Load command line and all possible command qualifiers.
2 Command qualifiers that are not specified on the command line
are sometimes represented as comments in the plan file. This
allows you to edit and adjust the plan file for future use.
3 Command qualifiers that are not specified on the command line
and for which there are defaults are sometimes represented
with their default value in the plan file.
4 Command qualifiers that are explicitly specified on the
command line are represented in the plan file as specified.
5 Executor Parameters are listed for each executor involved
in the load operation. Like the command qualifiers, both the
values you specify on the command line and those that are
allowed but were not specified are included in this list of
parameters.
6 Note that the executor number is appended to the exception
file extension. When you specify such files on the command
line, Oracle RMU generates a separate file for each executor.
If desired, you could edit this plan file to place each
exception file on a different disk or directory.
! Plan created on 20-JUL-1995 by RMU/LOAD.
Plan Name = LOAD_PLAN
Plan Type = LOAD
Plan Parameters:1
Database Root File = MF_PERSONNEL.RDB;
Table Name = JOB_HISTORY
Input File = JOB_HIST.UNL
! Fields = <all> 2
Transaction_Type = PROTECTED
! Buffers = <default>
Row_Count = 50 3
! Skip = <none>
NoLog_Commits
NoCorresponding
Defer_Index_Updates
Constraints
Parallel
NoPlace
Statistics = INTERVAL = 30 4
NoTrigger_Relations
Record_Definition_File = JOB_HIST
Format = Delimited_Text
Prefix = """"
Suffix = """"
NoNull
Separator = ","
End Of Line Terminator
End Plan Parameters
Executor Parameters: 5
Executor Name = EXECUTOR_1
! Place_Only = <none>
Exception_File = DISK1:[DATABASE]JOB_HIST.EXC_1; 6
! RUJ Directory = <default>
Communication Buffers = 4
End Executor Parameters
Executor Parameters:
Executor Name = EXECUTOR_2
! Place_Only = <none>
Exception_File = DISK1:[DATABASE]JOB_HIST.EXC_2;
! RUJ Directory = <default>
Communication Buffers = 4
End Executor Parameters
Executor Parameters:
Executor Name = EXECUTOR_3
! Place_Only = <none>
Exception_File = DISK1:[DATABASE]JOB_HIST.EXC_3;
! RUJ Directory = <default>
Communication Buffers = 4
End Executor Parameters
Example 23
The following example demonstrates the structure of the record
definition file (.rrd) for an RMU Load command for several
different data types. The first part of the example displays the
table definition, the second part shows the RMU Unload command
you could use to get an appropriate .rrd file for these data
types, and the last part shows the .rrd file definitions for
these data types:
SQL> attach 'filename data_types.rdb';
SQL> show table many_types;
Information for table MANY_TYPES
Columns for table MANY_TYPES:
Column Name Data Type Domain
----------- --------- ------
F_ID TINYINT
F_CHAR_3 CHAR(3)
F_TINYINT TINYINT
F_SMALLINT SMALLINT
F_INTEGER INTEGER
F_BIGINT BIGINT
F_NTINYINT TINYINT(1)
F_NSMALLINT SMALLINT(2)
F_NINTEGER INTEGER(7)
F_NBIGINT BIGINT(5)
F_REAL REAL
F_DOUBLE_PREC DOUBLE PRECISION
F_DATE_VMS DATE VMS
F_DATE_ANSI DATE ANSI
F_VARCHAR VARCHAR(20)
F_FLOAT REAL
F_DATE DATE VMS
F_TIME TIME
F_TIMESTAMP TIMESTAMP(2)
F_INTERVAL INTERVAL
DAY (2)
$ RMU/UNLOAD DATA_TYPES.RDB/RECORD_DEF=(FILE=MANY_TYPES.RRD) -
_$ MANY_TYPES MANY_TYPES.UNL
$ TYPE MANY_TYPES.RRD
DEFINE FIELD F_ID DATATYPE IS SIGNED BYTE.
DEFINE FIELD F_CHAR_3 DATATYPE IS TEXT SIZE IS 3.
DEFINE FIELD F_TINYINT DATATYPE IS SIGNED BYTE.
DEFINE FIELD F_SMALLINT DATATYPE IS SIGNED WORD.
DEFINE FIELD F_INTEGER DATATYPE IS SIGNED LONGWORD.
DEFINE FIELD F_BIGINT DATATYPE IS SIGNED QUADWORD.
DEFINE FIELD F_NTINYINT DATATYPE IS SIGNED BYTE SCALE -1.
DEFINE FIELD F_NSMALLINT DATATYPE IS SIGNED WORD SCALE -2.
DEFINE FIELD F_NINTEGER DATATYPE IS SIGNED LONGWORD SCALE -7.
DEFINE FIELD F_NBIGINT DATATYPE IS SIGNED QUADWORD SCALE -5.
DEFINE FIELD F_REAL DATATYPE IS F_FLOATING.
DEFINE FIELD F_DOUBLE_PREC DATATYPE IS G_FLOATING.
DEFINE FIELD F_DATE_VMS DATATYPE IS DATE.
DEFINE FIELD F_DATE_ANSI DATATYPE IS DATE ANSI.
DEFINE FIELD F_VARCHAR DATATYPE IS TEXT SIZE IS 20.
DEFINE FIELD F_FLOAT DATATYPE IS F_FLOATING.
DEFINE FIELD F_DATE DATATYPE IS DATE.
DEFINE FIELD F_TIME DATATYPE IS TIME.
DEFINE FIELD F_TIMESTAMP DATATYPE IS TIMESTAMP SCALE -2.
DEFINE FIELD F_INTERVAL DATATYPE IS INTERVAL DAY SIZE IS 2 DIGITS.
DEFINE RECORD MANY_TYPES.
F_ID .
F_CHAR_3 .
. . .
END MANY_TYPES RECORD.
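The SQL-to-.rrd data type correspondence illustrated above can be
summarized as a lookup table. This is a hypothetical summary for
quick reference, not part of RMU; "n" and "p" stand for the
per-column size and precision that RMU Unload writes out, and
scaled numeric types additionally carry a SCALE clause as shown in
MANY_TYPES.RRD.

```python
# Hypothetical summary of the SQL-to-.rrd DATATYPE correspondence
# shown in MANY_TYPES.RRD above; "n" and "p" are placeholders for
# the per-column values RMU Unload fills in.
SQL_TO_RRD = {
    "TINYINT":          "SIGNED BYTE",
    "SMALLINT":         "SIGNED WORD",
    "INTEGER":          "SIGNED LONGWORD",
    "BIGINT":           "SIGNED QUADWORD",
    "CHAR(n)":          "TEXT SIZE IS n",
    "VARCHAR(n)":       "TEXT SIZE IS n",
    "REAL":             "F_FLOATING",
    "DOUBLE PRECISION": "G_FLOATING",
    "DATE VMS":         "DATE",
    "DATE ANSI":        "DATE ANSI",
    "TIME":             "TIME",
    "TIMESTAMP(p)":     "TIMESTAMP SCALE -p",
}
```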
Example 24
The following example shows part of a script for loading a copy
of the PERSONNEL database using the output from SQL EXPORT.
$! Export the database definition and the data
$ sql$ export database filename personnel into pers.rbr;
$
$! Create an empty database (use RMU Load to add data)
$ sql$ import database from pers.rbr filename copy_pers no data;
$
$! Now use load to add the same table
$ rmu/load copy_pers /match_name=employees employees pers.rbr
%RMU-I-DATRECREAD, 100 data records read from input file.
%RMU-I-DATRECSTO, 100 data records stored.
$
$ rmu/load copy_pers /match_name=job_history pers.rbr
%RMU-I-DATRECREAD, 274 data records read from input file.
%RMU-I-DATRECSTO, 274 data records stored.
$
$ rmu/load copy_pers /match_name=salary_history pers.rbr
%RMU-I-DATRECREAD, 729 data records read from input file.
%RMU-I-DATRECSTO, 729 data records stored.
$
.
.
.
$ rmu/load copy_pers /match_name=work_status pers.rbr
%RMU-I-DATRECREAD, 3 data records read from input file.
%RMU-I-DATRECSTO, 3 data records stored.
Example 25
The following example shows that, by default, truncation errors
during a Load are reported.
$ rmu/load abc f2 f1
%RMU-I-LOADERR, Error loading row 1.
%RDB-E-TRUN_STORE, string truncated during assignment to a column
%RMU-I-DATRECREAD, 1 data records read from input file.
%RMU-I-DATRECSTO, 0 data records stored.
%RMU-F-FTL_LOAD, Fatal error for LOAD operation at 13-FEB-2008 15:39:44.40
$
Example 26
The following example shows the use of the /VIRTUAL_FIELDS
qualifier. The values of the INTEGER field A and the AUTOMATIC
field B are first unloaded into the AA.UNL file from the RMU_
LOAD_AUTOMATIC_4_DB database table AA using the /VIRTUAL_
FIELDS qualifier. Then the values of the INTEGER field A and
the AUTOMATIC field B in the AA.UNL file are loaded into the AA
table in the RMU_LOAD_AUTOMATIC_4_DB2 database.
$ SQL
create database
filename RMU_LOAD_AUTOMATIC_4_DB;
-- create a sequence and a table
create sequence S increment by -1;
create table AA
(a integer
,b automatic as s.nextval);
-- load 10 rows
begin
declare :i integer;
for :i in 1 to 10
do
insert into AA (a) values (:i);
end for;
end;
commit;
disconnect all;
$ exit
$ rmu/unload-
/virtual=(automatic)-
/record=(file=rr,format=delim)-
RMU_LOAD_AUTOMATIC_4_DB aa aa.unl
%RMU-I-DATRECUNL, 10 data records unloaded.
$
$
$! Load using /VIRTUAL
$ rmu/load-
/record=(file=rr,format=delim)-
/virtual-
RMU_LOAD_AUTOMATIC_4_DB2 aa aa.unl
%RMU-I-DATRECREAD, 10 data records read from input file.
%RMU-I-DATRECSTO, 10 data records stored.
$
18.2 – Plan
Executes a load plan file previously created with the RMU Load
command (or created manually by the user).
18.2.1 – Description
A load plan file is created when you execute an RMU Load
command with the List_Plan qualifier. See Load Database for
details on creating a plan file, the format of a plan file, and
understanding the informational messages returned by a Parallel
Load operation.
18.2.2 – Format
RMU/Load/Plan plan-file-spec

Command Qualifiers               Defaults

/[No]Execute                     Execute
/List_Plan=output-file           None
18.2.3 – Parameters
18.2.3.1 – plan-file-spec
The file specification for the load plan file. The default file
extension is .plan.
18.2.4 – Command Qualifiers
18.2.4.1 – Execute
Execute
Noexecute
The Execute qualifier specifies that the plan file is to be
executed. The Noexecute qualifier specifies that the plan file
should not be executed, but rather that a validity check be
performed on the contents of the plan file.
The validity check determines such things as whether the
specified table is in the specified database, the .rrd file (if
specified) matches the table, and so on. The validity check does
not determine such things as whether your process and global page
quotas are sufficient.
By default, data is loaded when the RMU Load Plan command is
issued.
18.2.4.2 – List Plan
List_Plan=output-file
Specifies that Oracle RMU should generate a new plan file and
write it to the specified output file. This new plan file is
identical to the plan file you specified on the command line (the
"original" plan file) with the following exceptions:
o Any comments that appear in the original plan file will not
appear in the new plan file.
o If the number of executors specified in the original plan
file exceeds the number of storage areas that the table being
loaded contains, the new plan file will reduce the number of
executors to match the number of storage areas.
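The executor-reduction rule in the second bullet can be stated in
one line; this is an illustrative sketch of the documented behavior,
not RMU internals:

```python
def effective_executor_count(requested, storage_areas):
    # RMU never uses more executors than the table being loaded has
    # storage areas; when the requested count is reduced, the
    # %RMU-W-TOOMANYEXECS warning shown in the examples is issued.
    return min(requested, len(storage_areas))
```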
18.2.5 – Usage Notes
o To use the RMU Load Plan command for a database, you must
have the RMU$LOAD privilege in the root file access control
list (ACL) for the database or the OpenVMS SYSPRV or BYPASS
privilege. Privileges for accessing the database tables
involved are also required.
o When the load plan is executed, executors are created as
detached processes if you have the OpenVMS DETACH privilege.
If you do not have the OpenVMS DETACH privilege, executors are
created as subprocesses of your process.
18.2.6 – Examples
Example 1
The following example demonstrates these steps:
1. The first Oracle RMU command creates a parallel load plan
file. The RMU Load command is not executed because the point
of issuing the command is to create the plan file, not to
load data. Notice that the created load plan has only three
executors, even though four were specified on the command
line. This is because EMPLOYEES has only three storage areas.
2. The load plan file generated by the first Oracle RMU command
is displayed.
3. The load plan file is edited to change some parameters and to
rename the executors with names that describe the storage area
each executor is responsible for loading.
4. The edited version of the load plan file is executed.
$ RMU/LOAD/PARALLEL=(EXECUTOR_COUNT=4, BUFFER_COUNT=4)/NOEXECUTE -
_$ /RECORD_DEFINITION=(FILE=EMPLOYEES.RRD, FORMAT=DELIMITED) -
_$ /LIST_PLAN=EMPLOYEES.PLAN MF_PERSONNEL.RDB EMPLOYEES EMPLOYEES.UNL
%RMU-W-TOOMANYEXECS, 4 executors were requested, but only 3 executors
will be used.
$ !
$ TYPE EMPLOYEES.PLAN
! Plan created on 20-JUL-1995 by RMU/LOAD.
Plan Name = LOAD_PLAN
Plan Type = LOAD
Plan Parameters:
Database Root File = MF_PERSONNEL.RDB
Table Name = EMPLOYEES
Input File = EMPLOYEES.UNL
! Fields = <all>
Transaction_Type = PROTECTED
! Buffers = <default>
Row_Count = 50
! Skip = <none>
NoLog_Commits
NoCorresponding
NoDefer_Index_Updates
Constraints
Parallel
NoPlace
! Statistics = <none>
NoTrigger_Relations
Record_Definition_File = EMPLOYEES.RRD
Format = Delimited_Text
Prefix = """"
Suffix = """"
NoNull
Separator = ","
End Of Line Terminator
End Plan Parameters
Executor Parameters:
Executor Name = EXECUTOR_1
! Place_Only = <none>
! Exception_File = <none>
! RUJ Directory = <default>
Communication Buffers = 4
End Executor Parameters
Executor Parameters:
Executor Name = EXECUTOR_2
! Place_Only = <none>
! Exception_File = <none>
! RUJ Directory = <default>
Communication Buffers = 4
End Executor Parameters
Executor Parameters:
Executor Name = EXECUTOR_3
! Place_Only = <none>
! Exception_File = <none>
! RUJ Directory = <default>
Communication Buffers = 4
End Executor Parameters
The following is an edited version of the plan file presented in
the previous example. The file has been edited as follows:
o Comments have been added to indicate that the file has been
edited.
o The Row_Count value has been changed from 50 to 20.
o Each executor name has been changed to reflect the storage
area the executor is responsible for loading.
This makes it easier to determine the storage area from which
a record was rejected if an error occurs during loading. In
addition, when records are rejected, it makes it easier to
determine which executor was attempting to load them and which
Rdb error corresponds to a particular executor.
o The directory and file name for each exception file has been
changed and the comment character preceding "Exception_File"
has been removed.
o Directories for the .ruj files have been added and the comment
character preceding "RUJ Directory" has been removed.
! Plan created on 20-JUL-1995 by RMU/LOAD.
! Edited on 21-JUL-1995 by John Stuart
Plan Name = LOAD_PLAN
Plan Type = LOAD
Plan Parameters:
Database Root File = MF_PERSONNEL.RDB
Table Name = EMPLOYEES
Input File = EMPLOYEES.UNL
! Fields = <all>
Transaction_Type = PROTECTED
! Buffers = <default>
Row_Count = 20
! Skip = <none>
NoLog_Commits
NoCorresponding
NoDefer_Index_Updates
Constraints
Parallel
NoPlace
! Statistics = <none>
NoTrigger_Relations
Record_Definition_File = EMPLOYEES.RRD
Format = Delimited_Text
Prefix = """"
Suffix = """"
NoNull
Separator = ","
End Of Line Terminator
End Plan Parameters
Executor Parameters:
Executor Name = EMPIDS_LOW_EXEC
! Place_Only = <none>
Exception_File = DISK1:[EXCEPTIONS]EMPIDS_LOW.EXC
RUJ Directory = DISK1:[RUJ]EMPIDS_LOW.RUJ
Communication Buffers = 4
End Executor Parameters
Executor Parameters:
Executor Name = EMPIDS_MID_EXEC
! Place_Only = <none>
Exception_File = DISK2:[EXCEPTIONS]EMPIDS_MID.EXC
RUJ Directory = DISK2:[RUJ]EMPIDS_MID.RUJ
Communication Buffers = 4
End Executor Parameters
Executor Parameters:
Executor Name = EMPIDS_OVER_EXEC
! Place_Only = <none>
Exception_File = DISK3:[EXCEPTIONS]EMPIDS_OVER.EXC
RUJ Directory = DISK3:[RUJ]EMPIDS_OVER.RUJ
Communication Buffers = 4
End Executor Parameters
$ !
$ ! Execute the plan file.
$ ! Each executor is assigned the storage area or areas and
$ ! the pid (process ID) for each executor is displayed.
$ ! Notice that Oracle RMU notifies you if an error occurs when
$ ! an executor attempts to load a row, and then lists the Rdb error
$ ! message. Sometimes you receive two or more Oracle RMU
$ ! messages in a row and then the associated Oracle Rdb message. You
$ ! can match the Oracle RMU message to the Oracle Rdb message by
$ ! matching the executor name prefixes to the messages.
$ !
$ RMU/LOAD/PLAN EMPLOYEES.PLAN
%RMU-I-EXECUTORMAP, Executor EMPIDS_LOW_EXEC (pid: 3140A4CC) will
load storage area EMPIDS_LOW.
%RMU-I-EXECUTORMAP, Executor EMPIDS_MID_EXEC (pid: 314086CD) will
load storage area EMPIDS_MID.
%RMU-I-EXECUTORMAP, Executor EMPIDS_OVER_EXEC (pid: 314098CE) will
load storage area EMPIDS_OVER.
EMPIDS_MID_EXEC: %RMU-I-LOADERR, Error loading row 4.
EMPIDS_LOW_EXEC: %RMU-I-LOADERR, Error loading row 1.
EMPIDS_MID_EXEC: %RDB-E-NO_DUP, index field value already exists;
duplicates not allowed for EMPLOYEES_HASH
EMPIDS_LOW_EXEC: %RDB-E-NO_DUP, index field value already exists;
duplicates not allowed for EMPLOYEES_HASH
%RMU-I-EXECSTAT0, Statistics for EMPIDS_LOW_EXEC:
%RMU-I-EXECSTAT1, Elapsed time: 00:00:51.69 CPU time: 4.51
%RMU-I-EXECSTAT2, Storing time: 00:00:32.33 Rows stored: 161
%RMU-I-EXECSTAT3, Commit time: 00:00:00.66 Direct I/O: 932
%RMU-I-EXECSTAT4, Idle time: 00:01:44.99 Early commits: 1
%RMU-I-EXECSTAT0, Statistics for EMPIDS_MID_EXEC:
%RMU-I-EXECSTAT1, Elapsed time: 00:01:06.47 CPU time: 4.32
%RMU-I-EXECSTAT2, Storing time: 00:00:38.80 Rows stored: 142
%RMU-I-EXECSTAT3, Commit time: 00:00:01.04 Direct I/O: 953
%RMU-I-EXECSTAT4, Idle time: 00:00:18.18 Early commits: 2
%RMU-I-EXECSTAT0, Statistics for EMPIDS_OVER_EXEC:
%RMU-I-EXECSTAT1, Elapsed time: 00:01:04.98 CPU time: 3.22
%RMU-I-EXECSTAT2, Storing time: 00:00:30.89 Rows stored: 100
%RMU-I-EXECSTAT3, Commit time: 00:00:00.90 Direct I/O: 510
%RMU-I-EXECSTAT4, Idle time: 00:00:26.65 Early commits: 1
%RMU-I-EXECSTAT5, Main process idle time: 00:00:58.11
%RMU-I-DATRECREAD, 495 data records read from input file.
%RMU-I-DATRECSTO, 403 data records stored.
%RMU-I-DATRECREJ, 92 data records rejected.
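The message-matching rule described in the comments above (match
each Oracle RMU message to its Oracle Rdb message by executor-name
prefix) can be sketched as a small grouping helper. This is
illustrative Python, not part of RMU:

```python
from collections import defaultdict

def group_by_executor(lines):
    # Collect messages under the executor-name prefix that precedes
    # the "%" facility code; messages for the same executor can then
    # be read in order, pairing each %RMU-I-LOADERR with the %RDB
    # error that follows it.
    groups = defaultdict(list)
    for line in lines:
        prefix, sep, rest = line.partition(": %")
        if sep:
            groups[prefix].append("%" + rest)
    return dict(groups)
```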
19 – Monitor
The Oracle RMU Monitor commands control the Oracle Rdb monitor
process. An Oracle Rdb monitor process must be running on each
system on which you use Oracle Rdb, including each node in a
VAXcluster or VMScluster. An RMU Monitor command controls only
the monitor process running on the system from which the command
is issued.
The Oracle Rdb Monitor Process controls all database access and
initiates the automatic recovery procedure when necessary.
19.1 – Reopen Log
Closes the current Oracle Rdb monitor log file, compresses it,
and opens another one without stopping the monitor.
19.1.1 – Description
The RMU Monitor Reopen_Log command closes the current Oracle
Rdb monitor log file, compresses it, and opens another log file
without stopping the monitor. The new log has the same name as,
but a new version number of, the monitor log file you opened with
the RMU Monitor Start command. Use the RMU Show Users command to
determine the current name and location of the monitor log file
before issuing the RMU Monitor Reopen_Log command. You should use
the RMU Monitor Reopen_Log command if the monitor log file gets
too large. For example, if you are running out of space on your
disk or if database performance slows, you might want to open
another log file.
If the disk that contains the Oracle Rdb monitor log file
becomes full, you must acquire space on the disk. Once there
is sufficient space on this disk, use the RMU Monitor Reopen_Log
command and consider backing up (using the DCL COPY command or
the OpenVMS Backup utility) the old monitor log file.
When the disk that contains the monitor log becomes full, Oracle
Rdb stops writing to the log file, but the Oracle Rdb system
does not stop operating. A message is sent to the cluster system
operator when this occurs.
19.1.2 – Format
RMU/Monitor Reopen_Log
19.1.3 – Usage Notes
o To use the RMU Monitor Reopen_Log command, either you must
have the OpenVMS SETPRV privilege or the OpenVMS WORLD,
CMKRNL, DETACH, PSWAPM, ALTPRI, SYSGBL, SYSNAM, SYSPRV, and
BYPASS privileges.
19.1.4 – Examples
Example 1
The following example closes the existing monitor log file,
compresses it, and creates a new one without stopping the Oracle
Rdb monitor:
$ RMU/MONITOR REOPEN_LOG
See the Oracle Rdb Guide to Database Maintenance for more
examples that show the RMU Monitor commands.
19.2 – Start
Activates the Oracle Rdb monitor process.
19.2.1 – Description
The RMU Monitor Start command activates the Oracle Rdb monitor
process (RDMS_MONITORnn, where nn represents the version of Oracle
Rdb), sets the priority of this process, and specifies a device,
directory, and file name in which to create the monitor log
file. If the monitor process is active already, you receive the
following error message:
%RMU-F-MONMBXOPN, monitor is already running
An Oracle Rdb monitor process must be running on a node for
users logged in to that node to use any Oracle Rdb database.
In a VMScluster environment, a monitor process must be running on
each node in the cluster from which databases are accessed.
The Oracle Rdb monitor process controls all database access and
initiates the automatic database recovery procedure following a
system failure or other abnormal termination of a database user
process.
See the Oracle Rdb Installation and Configuration Guide for
information on support for multiple versions of Oracle Rdb.
19.2.2 – Format
RMU/Monitor Start

Command Qualifiers               Defaults

/Output=file-name                /Output=SYS$SYSTEM:RDMMON.LOG
/Priority=integer                /Priority=15
/[No]Swap                        /Noswap
19.2.3 – Command Qualifiers
19.2.3.1 – Output
Output=file-name
Specifies the device, directory, and file name that receives the
monitor log. You can use this qualifier to redirect the placement
of your monitor log file. The default device and directory is the
SYS$SYSTEM directory. The default log file name is RDMMON.LOG.
The RMU Monitor Start command causes a new version of the log
file to be created for each database session.
19.2.3.2 – Priority
Priority=integer
Specifies the base priority of the monitor process. This priority
should always be higher than the highest database user process
priority.
By default, the monitor runs at the highest interactive priority
possible, 15. You should not normally have to lower the monitor
process priority. If you change this to a lower priority, an
attach operation can cause a deadlock. Deadlock occurs when
multiple processes with higher priority than the monitor attempt
to attach at the same time. In this case, the monitor must
contend for CPU time with multiple higher-priority processes
and is perpetually locked out. As a result, no one can use the
database.
19.2.3.3 – Swap
Swap
Noswap
Enables or disables swapping of the monitor process. The default
is Noswap. The Swap qualifier is not recommended for time-
critical applications, because no one can use the database while
the monitor process is being swapped.
19.2.4 – Usage Notes
o To use the RMU Monitor Start command, you must have either the
OpenVMS SETPRV privilege or the OpenVMS WORLD, CMKRNL, DETACH,
PSWAPM, ALTPRI, PRMMBX, SYSGBL, SYSNAM, SYSPRV, and BYPASS
privileges.
o If the monitor has not been started on the system previously,
use the RMONSTART.COM command file (which, by default, is
located in the SYS$STARTUP directory) instead of the RMU
Monitor Start command.
o Start the monitor from the SYSTEM account, which has the
SETPRV privilege. The process starting the monitor attempts
to give RDMS_MONITOR all privileges. In particular, the
privileges required are ALTPRI, CMKRNL, DETACH, PSWAPM,
PRMMBX, SETPRV, SYSGBL, SYSNAM, and WORLD.
o The monitor process inherits some quotas, such as MAXDETACH,
and the user name of the user who starts it. This can result
in severe restrictions on user access. For example, if the
user who starts the monitor has a MAXDETACH quota of two, then
the monitor can only start two recovery processes at one time.
However, the system defines most of the quotas needed by the
monitor.
o If the LNM$PERMANENT_MAILBOX table is not defined in the
LNM$SYSTEM_TABLE logical name table, either of the following
might occur:
- The RMU Monitor Start command hangs
- You receive the error, "monitor is not running", when you
know it is.
By default, the LNM$PERMANENT_MAILBOX table is defined in the
LNM$SYSTEM_TABLE logical name table. However, sometimes a user
or third-party application redefines the LNM$PERMANENT_MAILBOX
table in another logical name table (such as the LNM$GROUP
table). To recover from this situation, follow these steps:
1. Define the LNM$PERMANENT_MAILBOX table in the
LNM$SYSTEM_TABLE:
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$PERMANENT_MAILBOX -
_$ LNM$SYSTEM
2. Start the database monitor:
$ RMU/MONITOR START
3. Start the application.
Or, change the application that redefines the LNM$PERMANENT_
MAILBOX table so that LNM$PERMANENT_MAILBOX is defined as a
search list that includes the LNM$SYSTEM_TABLE table, as shown
in the following example:
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$PERMANENT_MAILBOX -
_$ LNM$GROUP, LNM$SYSTEM
o Use the RMU Show System command to determine the location of
the monitor log file if it is not in the default location. The
monitor log file may not be in the default location if someone
has issued the RMU Monitor Start command and specified a
location different from the default with the Output qualifier.
o The monitor process should only be started by a user whose
account has adequate quotas. Ideally, the monitor process
should be started from the SYSTEM account.
o To view the contents of monitor log file online (even
when disk-based logging is disabled because of disk space
problems), use the Performance Monitor and select the Monitor
Log screen from the Per-Process menu. See the Oracle Rdb7
Guide to Database Performance and Tuning or the Performance
Monitor Help for information about using the Performance
Monitor.
19.2.5 – Examples
Example 1
The following command activates the Oracle Rdb monitor process:
$ RMU/MONITOR START
See the Oracle Rdb Guide to Database Maintenance for more
examples that show the RMU Monitor commands.
19.3 – Stop
Stops the Oracle Rdb monitor process.
19.3.1 – Description
The RMU Monitor Stop command stops the Oracle Rdb monitor process
(RDMS_MONITORnn, where nn represents the version of Oracle Rdb)
normally, either with a shutdown and rollback of the databases or
an immediate abort. You can use the RMU Monitor Stop command
to shut down all database activity on your node, optionally
aborting user processes by forcing an image exit or deleting
their processes.
The RMU Monitor Stop command also closes the monitor log file.
An Oracle Rdb monitor process must be running on a node for
users logged in to that node to use any Oracle Rdb database.
In a VMScluster environment, a monitor process must be running on
each node in the cluster from which databases are accessed.
The Oracle Rdb monitor process controls all database access and
initiates the automatic database recovery procedure following a
system failure or other abnormal termination of a database user
process. The monitor log file automatically tracks all access to
the database.
19.3.2 – Format
RMU/Monitor Stop

Command Qualifiers                       Defaults

/[No]Abort[={Forcex | Delprc}]           /NOABORT
/[No]Wait                                /NOWAIT
19.3.3 – Command Qualifiers
19.3.3.1 – Abort
Abort=Forcex
Abort=Delprc
Noabort
The Abort=Forcex qualifier stops the monitor immediately without
allowing current Oracle Rdb users to complete active transactions
or detach from their databases. However, the user processes are
not deleted. Active transactions are rolled back. If a process
using a database is waiting for a subprocess to complete, the
transaction is not rolled back until the subprocess completes.
Using the Abort qualifier with no option is equivalent to
specifying the Abort=Forcex qualifier.
The Abort=Delprc qualifier stops the monitor immediately without
allowing current Oracle Rdb users to complete active transactions
or detach from their databases. Each user process that was
attached to an Oracle Rdb database is deleted immediately.
The Noabort qualifier allows current user processes to continue
and complete before stopping. New users on the node are not
allowed to attach to any database, but existing database users
can complete their sessions normally. Once existing database user
processes terminate, the database monitor shuts down.
The Noabort qualifier is the default.
19.3.3.2 – Wait
Wait
Nowait
Specifies whether the Oracle RMU operation completes when the
monitor acknowledges the stop request (Nowait), or whether RMU
waits until the monitor finishes shutting down (Wait).
The default is Nowait.
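For example, in a node shutdown command procedure you might use the
Wait qualifier so that the procedure does not continue until the
monitor has actually finished shutting down (this sketch assumes the
default Noabort behavior):
$ RMU/MONITOR STOP/NOABORT/WAIT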
19.3.4 – Usage Notes
o To use the RMU Monitor Stop command, you must have either the
OpenVMS SETPRV privilege or the OpenVMS WORLD, CMKRNL, DETACH,
PSWAPM, PRMMBX, ALTPRI, SYSGBL, SYSNAM, SYSPRV, and BYPASS
privileges.
NOTE
If Oracle Trace is installed on your system, the RMU
Monitor Stop command stalls the Oracle Rdb monitor
process unless you do one of the following:
- Shut down Oracle Trace, then shut down the Oracle Rdb
monitor (in that order).
- Use the RMU Monitor Stop command with the Abort=Delprc
qualifier to shut down Oracle Rdb and force the
monitor out of the Oracle Trace database.
19.3.5 – Examples
Example 1
The following command causes the Oracle Rdb monitor process to
shut down after existing database users end their access to the
database. New users on this node are unable to attach to any
Oracle Rdb database.
$ RMU/MONITOR STOP
Example 2
The following command causes the Oracle Rdb monitor to stop
immediately without allowing current Oracle Rdb users to
complete active transactions (they are rolled back) or detach
(DISCONNECT) from their databases. However, the user processes
are not deleted. Because the monitor is shut down, all Oracle Rdb
activity on this node is terminated.
$ RMU/MONITOR STOP /ABORT=FORCEX
Example 3
The following command causes the Oracle Rdb monitor to stop
immediately without allowing current Oracle Rdb users to
complete active transactions (they are not rolled back) or
detach (DISCONNECT) from their databases. Each user process that
was attached to an Oracle Rdb database on this node is deleted
immediately.
$ RMU/MONITOR STOP /ABORT=DELPRC
20 – Move Area
Permits you to move one or more storage areas to different disks.
You can also choose to move the database root file to a different
disk.
20.1 – Description
The RMU Move_Area command lets you modify certain area parameters
when the move operation is performed. All the files are processed
simultaneously during the move operation. The performance of
the RMU Move_Area command is similar to that of the RMU Backup
command, and it eliminates the need for intermediate storage
media.
Note that when a snapshot file is moved, Oracle RMU does not
actually move the snapshot file; instead, Oracle RMU re-creates
and initializes the snapshot file in the specified location. See
the description of the Snapshots qualifier for more information
on its proper usage.
NOTE
You must perform a full and complete Oracle RMU backup
operation immediately after the Oracle RMU move area
operation completes to ensure that the database can be
properly restored after a database failure or corruption.
20.2 – Format
RMU/Move_Area root-file-spec storage-area-list

Command Qualifiers                           Defaults

/[No]After_Journal[=file-spec]               See description
/[No]Aij_Options[=journal-opts-file]         See description
/All_Areas                                   See description
/[No]Area                                    See description
/[No]Cdd_Integrate                           Nocdd_Integrate
/[No]Checksum_Verification                   /Checksum_Verification
/Directory=directory-spec                    None
/[No]Log                                     Current DCL verify value
/Nodes_Max=n                                 Keep current value
/[No]Online                                  Noonline
/Option=file-spec                            None
/Page_Buffers=n                              n=3
/Path=cdd-path                               Existing value
/[No]Quiet_Point                             /Quiet_Point
/Root=file-spec                              None
/Threads=n                                   /Threads=10
/Users_Max=n                                 Keep current value

File or Area Qualifiers                      Defaults

/Blocks_Per_Page=n                           None
/Extension={Disable | Enable}                Current value
/File=file-spec                              None
/Read_Only                                   Current value
/Read_Write                                  Current value
/Snapshots=(Allocation=n,File=file-spec)     None
/[No]Spams                                   Leave attribute unchanged
/Thresholds=(n,n,n)                          None
20.3 – Parameters
20.3.1 – root-file-spec
The name of the database root file for the database whose storage
areas you want to move.
20.3.2 – storage-area-list
The name of one or more storage areas that you want to move.
20.4 – Command Qualifiers
20.4.1 – After Journal
After_Journal[=file-spec]
Noafter_Journal
NOTE
This qualifier is maintained for compatibility with versions
of Oracle Rdb prior to Version 6.0. You might find it more
useful to specify the Aij_Options qualifier, unless you are
only interested in creating extensible after-image journal
(.aij) files.
Specifies how Oracle RMU is to handle after-image journaling and
.aij file creation, using the following rules:
o If you specify the After_Journal qualifier and provide a file
specification, Oracle RMU enables after-image journaling and
creates a new extensible after-image journal (.aij) file for
the database.
o If you specify the After_Journal qualifier but do not
provide a file specification, Oracle RMU enables after-image
journaling and creates a new extensible .aij file for the
database with the same name as, but a different version number
from, the .aij file for the database root file being moved.
o If you specify the Noafter_Journal qualifier, Oracle RMU
disables after-image journaling and does not create a new
.aij file.
o If you do not specify an After_Journal, Noafter_Journal,
Aij_Options, or Noaij_Options qualifier, Oracle RMU retains
the original journal setting (enabled or disabled) and the
original .aij file state.
You can specify at most one of the following after-image
journal qualifiers in a single RMU Move_Area command: After_
Journal, Noafter_Journal, Aij_Options, or Noaij_Options.
You cannot use the After_Journal qualifier to create fixed-size
.aij files; use the Aij_Options qualifier.
Creating a new .aij file facilitates recovery, because a single
.aij file cannot be applied across a move area operation that
changes an area page size. The move operation is never recorded
in the .aij file, so an increase in page size is not journaled
either. When you attempt to recover the database, the original
page size is therefore used for recovery. If the .aij file
contains database insert transactions, those updates might
assume more free space than is available on a page of the
original size. The result is an inability to recover the insert
transaction, which in turn results in a bugcheck and a corrupted
database.
This qualifier is valid only when no users are attached to the
database and only when the root file is moved.
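For example, the following command (the device, directory, and .aij
file names are hypothetical) moves the database root and enables
after-image journaling with a new extensible .aij file:
$ RMU/MOVE_AREA/ROOT=DISK2:[DB]/AFTER_JOURNAL=DISK3:[JNL]PERS.AIJ -
_$ MF_PERSONNEL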
20.4.2 – Aij Options
Aij_Options[=journal-opts-file]
Noaij_Options
Specifies how Oracle RMU is to handle after-image journaling and
.aij file creation, using the following rules:
o If you specify the Aij_Options qualifier and provide a
journal-opts-file, Oracle RMU enables journaling and creates
the .aij file or files you specify for the database. If
only one .aij file exists for the database, it will be an
extensible .aij file. If two or more .aij files are created
for the database, they will be fixed-size .aij files (as long
as at least two .aij files are always available).
o If you specify the Aij_Options qualifier but do not provide a
journal-opts-file, Oracle RMU disables journaling and does not
create any new .aij files.
o If you specify the Noaij_Options qualifier, Oracle RMU retains
the original journal setting (enabled or disabled) and retains
the original .aij file.
o If you do not specify an After_Journal, Noafter_Journal,
Aij_Options, or Noaij_Options qualifier, Oracle RMU retains
the original journal setting (enabled or disabled) and the
original .aij file state.
See Show After_Journal for information on the format of a
journal-opts-file.
Note that you cannot use the RMU Move_Area command with the
Aij_Options qualifier to alter the journal configuration.
However, you can use it to define a new after-image journal
configuration. When you use it to define a new after-image
journal configuration, it does not delete the journals in the
original configuration. Those can still be used for recovery.
If you need to alter the after-image journal configuration, you
should use the RMU Set After_Journal command.
The Aij_Options qualifier is valid only when no users are
attached to the database and only when the root file is moved.
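For example, the following command (the options file name is
hypothetical; see Show After_Journal for the format of its contents)
moves the database root and defines a new after-image journal
configuration:
$ RMU/MOVE_AREA/ROOT=DISK2:[DB]/AIJ_OPTIONS=AIJ_CONFIG.OPT -
_$ MF_PERSONNEL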
20.4.3 – All Areas
All_Areas
Specifies that all database storage areas are to be moved. If
you specify the All_Areas qualifier, you do not need to specify a
storage-area-list.
By default, only areas specified in the storage-area-list are
moved.
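For example, the following command moves all the storage areas of
the mf_personnel database to a single directory (the device and
directory names are hypothetical):
$ RMU/MOVE_AREA/ALL_AREAS/DIRECTORY=DISK3:[MOVED] MF_PERSONNEL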
20.4.4 – Area
Area
Noarea
NOTE
Due to the confusing semantics of the Area and Noarea
qualifiers, the Area and Noarea qualifiers are deprecated.
Oracle Corporation recommends that you use one of the
following methods to specify areas to be moved:
o To move all the storage areas in the database, use the
All_Areas qualifier and do not specify a storage-area-
list parameter.
o To move only selected areas in the database, specify the
storage-area-list parameter or use the Options qualifier
and specify an options file.
o To move only the database root file for a multifile
database, or to move an entire single-file database,
specify the Root qualifier and do not specify a storage-
area-list parameter.
Controls whether specific storage areas are moved. If you specify
the Area qualifier, only the storage areas specified in the
option file or the storage-area-list are moved. If you specify
Noarea, all the storage areas in the database are moved.
The default is the Area qualifier.
20.4.5 – Cdd Integrate
Cdd_Integrate
Nocdd_Integrate
Integrates the metadata from the root (.rdb) file of the moved
database into the data dictionary (assuming the data dictionary
is installed on your system).
If you specify the Nocdd_Integrate qualifier, no integration
occurs during the move operation.
You can use the Nocdd_Integrate qualifier even if the DICTIONARY
IS REQUIRED clause was used when the database being moved was
defined.
The Cdd_Integrate qualifier integrates definitions in one
direction only: from the database file to the dictionary. The
Cdd_Integrate qualifier does not integrate definitions from the
dictionary to the database file.
The Nocdd_Integrate qualifier is the default.
20.4.6 – Checksum Verification
Checksum_Verification
Nochecksum_Verification
Requests that the page checksum be verified for each page moved.
The default is to perform this verification.
The Checksum_Verification qualifier uses CPU resources but can
provide an extra measure of confidence in the quality of the data
being moved.
Use of the Checksum_Verification qualifier offers an additional
level of data security when the database employs disk striping
or RAID (redundant arrays of inexpensive disks) technology. These
technologies fragment data over several disk drives, and use
of the Checksum_Verification qualifier permits Oracle RMU to
detect the possibility that the data it is reading from these
disks has been only partially updated. If you use either of these
technologies, you should use the Checksum_Verification qualifier.
Oracle Corporation recommends that you use the Checksum_
Verification qualifier with all move operations where integrity
of the data is essential.
20.4.7 – Directory
Directory=directory-spec
Specifies the destination directory for the moved database files.
Note that if you specify a file name or file extension, all moved
files are given that file name or file extension. There is no
default directory specification for this qualifier.
See the Usage Notes for information on how this qualifier
interacts with the Root, File, and Snapshot qualifiers and for
warnings regarding moving database files into a directory owned
by a resource identifier.
If you do not specify this qualifier, Oracle RMU attempts to move
all the database files (unless they are qualified with the Root,
File, or Snapshot qualifier) to their current location.
20.4.8 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request log output and
the Nolog qualifier to prevent it. If you specify neither, the
default is the current setting of the DCL verify switch. (The DCL
SET VERIFY command controls the DCL verify switch.)
20.4.9 – Nodes Max
Nodes_Max=n
Specifies a new value for the database maximum node count
parameter. The default is to leave the value unchanged.
Use the Nodes_Max qualifier only if you move the database root
file.
20.4.10 – Online
Online
Noonline
Allows the specified storage areas to be moved without taking
the database off line. This qualifier can be used only when you
specify the storage-area-list parameter, or when you specify the
Options=file-spec qualifier. The default is Noonline. You cannot
move a database root file when the database is on line. The Root
qualifier cannot be specified with the Online qualifier in an RMU
Move_Area command.
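For example, the following command moves a single storage area
without taking the database off line (the target directory is
hypothetical):
$ RMU/MOVE_AREA/ONLINE/DIRECTORY=DISK2:[USER2] MF_PERSONNEL EMPIDS_LOW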
20.4.11 – Option
Option=file-spec
Specifies an options file containing storage area names, followed
by the storage area qualifiers that you want applied to that
storage area. Do not separate the storage area names with commas.
Instead, put each storage area name on a separate line in the
file. The storage area qualifiers that you can include in the
options file are:
Blocks_Per_Page
File
Snapshot
Thresholds
If you specify the Snapshot qualifier, you must also move the
corresponding data files at the same time. To move a snapshot
file independently of its corresponding data file, use the RMU
Repair command with the Initialize=Snapshots=Confirm qualifier.
You can use the DCL line continuation character, a hyphen (-),
or the comment character (!) in the options file.
There is no default for this qualifier. Example 3 in the Examples
help entry under this command shows the use of an options file.
If the Option qualifier is specified, the storage-area-list
parameter is ignored.
20.4.12 – Page Buffers
Page_Buffers=n
Specifies the number of buffers to be allocated for each file
to be moved. The number of buffers used is twice the number
specified; half are used for reading the file and half for
writing the moved files. Values specified with the Page_Buffers
qualifier can range from 1 to 5. The default value is 3. Larger
values might improve performance, but they increase memory usage.
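For example, the following command uses the maximum of 5 page
buffers per file to favor speed over memory usage (the target
directory is hypothetical):
$ RMU/MOVE_AREA/PAGE_BUFFERS=5/DIRECTORY=DISK2:[USER2] -
_$ MF_PERSONNEL EMPIDS_LOW,EMPIDS_MID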
20.4.13 – Path
Path=cdd-path
Specifies a data dictionary path into which the definitions of
the moved database will be integrated. If you do not specify
the Path qualifier, Oracle RMU uses the CDD$DEFAULT logical name
value of the user who enters the RMU Move_Area command.
If you specify a relative path name, Oracle Rdb appends the
relative path name you enter to the CDD$DEFAULT value. If the
cdd-path parameter contains nonalphanumeric characters, you must
enclose it within quotation marks ("").
Oracle Rdb ignores the Path qualifier if you use the Nocdd_
Integrate qualifier or if the data dictionary is not installed
on your system.
20.4.14 – Quiet Point
Quiet_Point
Noquiet_Point
Allows you to specify that a database move operation is to occur
either immediately or when a quiet point for database activity
occurs. A quiet point is defined as a point where no active
update transactions are in progress in the database.
When you specify the Noquiet_Point qualifier, Oracle RMU proceeds
with the move operation as soon as the RMU Move_Area command is
issued, regardless of any update transaction activity in progress
in the database. Because Oracle RMU must acquire exclusive locks
on the physical and logical areas for the areas being moved,
the move operation fails if there are any active transactions
with exclusive locks on storage areas that are being moved.
However, once Oracle RMU has successfully acquired all the needed
concurrent-read storage area locks, it should not encounter any
further lock conflicts. If a transaction is started that causes
Oracle Rdb to request exclusive locks on the areas that are in
the process of being moved, that transaction either waits or
gets a lock conflict error, but the move area operation continues
unaffected.
If you intend to use the Noquiet_Point qualifier with a move
procedure that previously specified the Quiet_Point qualifier
(or did not specify either the Quiet_Point or the Noquiet_Point
qualifier), you should examine any applications that execute
concurrently with the move operation. You might need to modify
your applications or your move procedure to handle the lock
conflicts that can occur when you specify the Noquiet_Point
qualifier.
When you specify the Quiet_Point qualifier, the move operation
begins when a quiet point is reached.
The default is Quiet_Point.
20.4.15 – Root
Root=file-spec
Requests that the database root file be moved to the specified
location. If not specified, the database root file is not moved.
You must specify the Root qualifier when you use the RMU Move_
Area command on a single-file database. If you omit the Root
qualifier, you receive an error message. When you specify the
Root qualifier, specify the location where you want the root file
moved. For example:
$ RMU/MOVE_AREA/ROOT=DISK1:[DATABASE.TEST] MF_PERSONNEL
See the Usage Notes for information on how this qualifier
interacts with the Directory, File, and Snapshot qualifiers.
20.4.16 – Threads=number
Threads=number
Specifies the number of reader threads to be used by the move
process.
RMU creates internal "threads" of execution, each of which
reads data from one specific storage area. Threads run quasi-
parallel within the process executing the RMU image. Each
thread generates its own I/O load and consumes resources such
as virtual address space and process quotas (for example, FILLM
and BYTLM). The more threads there are, the more I/Os can be
generated at one point in time and the more resources are
needed to accomplish the same task.
Performance increases with more threads because the parallel
activity keeps disk drives busier. Beyond a certain number of
threads, however, performance suffers: the disk I/O subsystem
becomes saturated, I/O queues build up for the disk drives, and
the extra CPU time for thread scheduling overhead reduces
overall performance. Typically, 2 to 5 threads per input disk
drive are sufficient to drive the disk I/O subsystem at its
optimum. However, some controllers may be able to handle the
I/O load of more threads, for example, disk controllers with
RAID sets and extra cache memory.
In a move operation, one thread moves the data of one storage
area at a time. If there are more storage areas to be moved than
there are threads, then the next idle thread takes on the next
storage area. Storage areas are moved in order of the area size
- largest areas first. This optimizes the overall elapsed time
by allowing other threads to move smaller areas while an earlier
thread is still working on a large area. If the Threads qualifier
is not specified, 10 threads are created by default. The minimum
is 1 thread and the maximum is the number of storage areas to be
moved. If the user specifies a value larger than the number of
storage areas, then RMU silently limits the number of threads to
the number of storage areas.
For a move operation, you can specify a Threads value as low as
1. A value of 1 generates the smallest system load in terms of
working set usage and disk I/O. Most disk I/O subsystems can
handle higher loads, however, so a value slightly larger than 1
typically results in faster execution time.
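For example, if the storage areas being moved reside on two input
disks, a value of 4 (2 per disk, within the per-disk range suggested
above) is a reasonable starting point; tune it against your own I/O
subsystem (the target directory is hypothetical):
$ RMU/MOVE_AREA/THREADS=4/DIRECTORY=DISK2:[USER2] -
_$ MF_PERSONNEL EMPIDS_LOW,EMPIDS_MID,EMPIDS_OVER,DEPARTMENTS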
20.4.17 – Users Max
Users_Max=n
Specifies a new value for the database maximum user count
parameter.
The default is to leave the value unchanged.
Use the Users_Max qualifier only if you move the database root
file.
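For example, the following command moves the database root and
raises the maximum user count at the same time (the target location
and the value shown are hypothetical):
$ RMU/MOVE_AREA/ROOT=DISK1:[DATABASE]/USERS_MAX=100 MF_PERSONNEL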
20.4.18 – Blocks Per Page
Blocks_Per_Page=n
Specifies a new page size for the storage area to which it is
applied. You cannot decrease the page size of a storage area.
If you attempt to change the page size during an online Move_
Area operation, you might receive a PAGESIZETOOBIG error message.
Changing the page size sometimes requires that Oracle Rdb change
the buffer size for the database also (because buffers must be
large enough to hold at least one page from each area). However,
the buffer size cannot change if other users are accessing the
database.
You might want to increase the page size in storage areas
containing hash indexes that are close to full. By increasing
the page size in such a situation, you prevent the storage area
from extending.
The Blocks_Per_Page qualifier is a positional qualifier.
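For example, because Blocks_Per_Page is positional, the following
command increases the page size of only the EMPIDS_OVER area;
EMPIDS_LOW is moved with its page size unchanged (the target
directory and page size are hypothetical):
$ RMU/MOVE_AREA/DIRECTORY=DISK2:[USER2] MF_PERSONNEL -
_$ EMPIDS_LOW, EMPIDS_OVER/BLOCKS_PER_PAGE=4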
20.4.19 – Extension
Extension=Disable
Extension=Enable
Allows you to change the automatic file extension attribute when
you move a storage area.
Use the Extension=Disable qualifier to disable automatic file
extensions for one or more storage areas.
Use the Extension=Enable qualifier to enable automatic file
extensions for one or more storage areas.
If you do not specify the Extension=Disable or the
Extension=Enable qualifier, the storage areas are moved with the
automatic file extension attributes that are currently in effect.
The Extension qualifier is a positional qualifier.
20.4.20 – File
File=file-spec
Requests that the storage area to which this qualifier is applied
be moved to the specified location.
The File qualifier is a positional qualifier. This qualifier is
not valid for single-file databases.
See the Usage Notes for information on how this qualifier
interacts with the Root, Snapshot, and Directory qualifiers.
20.4.21 – Read Only
Use the Read_Only qualifier to change a read/write storage area
or a write-once storage area to a read-only storage area.
If you do not specify the Read_Only or the Read_Write qualifier,
the storage areas are moved with the read/write attributes that
are currently in effect for the database.
This is a positional qualifier.
20.4.22 – Read Write
Use the Read_Write qualifier to change a read-only storage area
or a write-once storage area to a read/write storage area.
If you do not specify the Read_Only or the Read_Write qualifier,
the storage areas are moved with the read/write attributes that
are currently in effect for the database.
This is a positional qualifier.
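For example, the following command moves two areas and, because
Read_Only is positional, changes only the DEPARTMENTS area to
read-only; JOBS keeps its current attribute (the target directory is
hypothetical):
$ RMU/MOVE_AREA/DIRECTORY=DISK2:[USER2] MF_PERSONNEL -
_$ DEPARTMENTS/READ_ONLY, JOBS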
20.4.23 – Snapshots
Snapshots=(Allocation=n,File=file-spec)
Allows you to specify a new snapshot file allocation size, a new
snapshot file location, or both, for the storage area to which
the qualifier is applied.
Use the Allocation=n option to specify the snapshot file
allocation size in n pages; use the File=file-spec option to
specify a new file location for the snapshot file associated with
the area being moved.
Note that when you specify a new file location for the snapshot
file, the snapshot file is not actually moved; instead, Oracle
RMU creates and initializes a new snapshot file in the specified
directory. However, if a snapshot file is accidentally deleted or
becomes corrupt, using this qualifier is not the recommended or
supported method for re-creating the snapshot file. Use the RMU
Repair command instead. See the Repair help entry for information
on using the RMU Repair command to re-create and initialize a
deleted or corrupted snapshot file.
If the keyword Allocation is omitted, the original allocation is
used, not the storage area's current allocation size.
You cannot specify a snapshot file name for a single-file
database. When you create a snapshot file, Oracle Rdb does not
store the file specification of the snapshot file. Instead, it
uses the file specification of the root file (.rdb) to determine
the file specification of the snapshot file.
See the Usage Notes for information on placing a snapshot file on
a different device or directory when your database is a single-
file database and for information on how this qualifier interacts
with the Root, File, and Directory qualifiers.
The Snapshot qualifier is a positional qualifier.
20.4.24 – Spams
Spams
Nospams
Specifies whether to enable the creation of space area management
(SPAM) pages or to disable the creation of SPAM pages (Nospams)
for specified storage areas when converting read/write storage
areas to write-once storage areas or vice versa. This qualifier
is not permitted with a storage area that has a uniform page
format.
When SPAM pages are disabled in a read/write storage area, the
SPAM pages are initialized, but they are not updated.
The Spams qualifier is a positional qualifier.
20.4.25 – Thresholds
Thresholds=(n,n,n)
Specifies new SPAM thresholds for the storage area to which it is
applied (for a mixed page format storage area). The thresholds of
a storage area with a uniform page format cannot be changed.
See the Oracle Rdb7 Guide to Database Performance and Tuning for
information on setting SPAM thresholds.
The Thresholds qualifier is a positional qualifier.
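For example, the following command sets new SPAM thresholds for a
mixed page format storage area as it is moved (the threshold
percentages and target directory are illustrative):
$ RMU/MOVE_AREA/DIRECTORY=DISK2:[USER2] MF_PERSONNEL -
_$ EMPIDS_LOW/THRESHOLDS=(65,75,85)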
20.5 – Usage Notes
o To use the RMU Move_Area command for a database, you must have
the RMU$MOVE privilege in the root file access control list
(ACL) for the database or have the OpenVMS SYSPRV or BYPASS
privilege.
o You cannot disable extensions of snapshot (.snp) files.
o The parameter (file and area) qualifiers for the RMU Move_Area
command have positional semantics. See the Command_Qualifiers
help entry for more information on parameter qualifiers.
o The RMU Move_Area command provides four qualifiers, Directory,
Root, File, and Snapshots, that allow you to specify the
target for the moved files. The target can be just a
directory, just a file name, or a directory and file name.
If you use all or some of these four qualifiers, apply them as
follows:
- If you want to move the database root, use the Root
qualifier to indicate the target for the moved database
root file.
- Use local application of the File qualifier to specify the
target for the moved storage area or areas.
- Use local application of the Snapshots qualifier to specify
the target for the moved snapshot file or files.
- Use the Directory qualifier to specify a default target
directory. The default target directory is the directory to
which all storage area and snapshot files not qualified
with the File or Snapshot qualifier are moved. It is
also the default directory for files qualified with the
Root, File, or Snapshot qualifier if the target for these
qualifiers does not include a directory specification.
Note the following when using these qualifiers:
- Global application of the File qualifier when the target
specification includes a file name causes Oracle RMU
to move all of the specified storage areas to different
versions of the same file name. This creates a database
that is difficult to manage.
- Global application of the Snapshot qualifier when the
target specification includes a file name causes Oracle RMU
to move all of the specified snapshot files to different
versions of the same file name. This creates a database
that is difficult to manage.
- Specifying a file name or extension with the Directory
qualifier is permitted, but causes Oracle RMU to move all
of the specified files (except those specified with the
File or Root qualifier) to different versions of the same
file name. Again, this creates a database that is difficult
to manage.
See Example 6.
o You must specify the Root qualifier when you use the RMU Move_
Area command on a single-file database. If you omit the Root
qualifier, you receive an error message. If you want to place
the snapshot file for a single-file database on a different
device or directory from the root file, Oracle Corporation
recommends that you create a multifile database. However,
you can work around this restriction by defining a search
list for a concealed logical name. (Do not use a
nonconcealed rooted logical name to define database files; a
database created with a nonconcealed rooted logical name can
be backed up, but may not restore correctly when you attempt
to restore the files to a new directory.)
To create a single-file database with a snapshot file on a
different device or directory from the root file, define a
search list by using a concealed logical name. Specify the
location of the root file as the first item in the search
list. When you create the database, use the logical name for
the directory specification. Then, copy the snapshot file
to the second device. The following example demonstrates the
workaround:
$ ! Define a concealed logical name.
$ DEFINE /TRANS=CONCEALED/SYSTEM TESTDB USER$DISK1:[DATABASE], -
_$ USER$DISK2:[SNAPSHOT]
$
$ SQL
SQL> ! Create the database.
SQL> !
SQL> CREATE DATABASE FILENAME TESTDB:TEST;
SQL> EXIT
$ !
$ ! Copy the snapshot file to the second disk.
$ COPY USER$DISK1:[DATABASE]TEST.SNP USER$DISK2:[SNAPSHOT]TEST.SNP
$ !
$ ! Delete the snapshot file from the original disk.
$ DELETE USER$DISK1:[DATABASE]TEST.SNP;
o There are no restrictions on the use of the Nospams qualifier
option with mixed page format storage areas, but the use of
the Nospams qualifier typically causes severe performance
degradation. The Nospams qualifier is useful only where
updates are rare and batched, and access is primarily by
database key (dbkey).
20.6 – Examples
Example 1
If a storage area is on a disk that is logging error messages,
you can move the storage area to another disk by using the RMU
Move_Area command. The following command moves the DEPARTMENTS
storage area (departments.rda) and the DEPARTMENTS snapshot
file (departments.snp) of the mf_personnel database to the
DDV21:[RICK.SQL] directory:
$ RMU/MOVE_AREA MF_PERSONNEL DEPARTMENTS /DIRECTORY=DDV21:[RICK.SQL]
Example 2
The following command moves the EMPIDS_LOW, EMPIDS_MID, and
EMPIDS_OVER storage areas for the mf_personnel database to the
DISK2:[USER2] directory. The Extension=Disable qualifier disables
automatic file extensions for the EMPIDS_LOW, EMPIDS_MID, and
EMPIDS_OVER storage area (.rda) files when they are moved to the
DISK2:[USER2] directory:
$ RMU/MOVE_AREA/EXTENSION=DISABLE/DIRECTORY=DISK2:[USER2] -
_$ mf_personnel EMPIDS_LOW,EMPIDS_MID,EMPIDS_OVER
Example 3
The following RMU Move_Area command uses an options file to
specify that the storage area files and snapshot files be moved
to different disks. Note that storage area snapshot (.snp)
files are located on different disks from one another and from
their associated storage area (.rda) files; this is recommended
for optimal performance. (This example assumes that the disks
specified for each storage area file in options_file.opt are
different from those where the storage area files currently
reside.)
$ RMU/MOVE_AREA/OPTIONS=OPTIONS_FILE.OPT MF_PERSONNEL
The following command displays the contents of the options file:
$ TYPE options_file.opt
EMPIDS_LOW /FILE=DISK1:[CORPORATE.PERSONNEL]EMPIDS_LOW.RDA -
/SNAPSHOT=(FILE=DISK2:[CORPORATE.PERSONNEL]EMPIDS_LOW.SNP)
EMPIDS_MID /FILE=DISK3:[CORPORATE.PERSONNEL]EMPIDS_MID.RDA -
/SNAPSHOT=(FILE=DISK4:[CORPORATE.PERSONNEL]EMPIDS_MID.SNP)
EMPIDS_OVER /FILE=DISK5:[CORPORATE.PERSONNEL]EMPIDS_OVER.RDA -
/SNAPSHOT=(FILE=DISK6:[CORPORATE.PERSONNEL]EMPIDS_OVER.SNP)
DEPARTMENTS /FILE=DISK7:[CORPORATE.PERSONNEL]DEPARTMENTS.RDA -
/SNAPSHOT=(FILE=DISK8:[CORPORATE.PERSONNEL]DEPARTMENTS.SNP)
SALARY_HISTORY /FILE=DISK9:[CORPORATE.PERSONNEL]SALARY_HISTORY.RDA -
/SNAPSHOT=(FILE=DISK10:[CORPORATE.PERSONNEL]SALARY_HISTORY.SNP)
JOBS /FILE=DISK7:[CORPORATE.PERSONNEL]JOBS.RDA -
/SNAPSHOT=(FILE=DISK8:[CORPORATE.PERSONNEL]JOBS.SNP)
EMP_INFO /FILE=DISK9:[CORPORATE.PERSONNEL]EMP_INFO.RDA -
/SNAPSHOT=(FILE=DISK10:[CORPORATE.PERSONNEL]EMP_INFO.SNP)
RESUME_LISTS /FILE=DISK11:[CORPORATE.PERSONNEL]RESUME_LISTS.RDA -
/SNAPSHOT=(FILE=DISK12:[CORPORATE.PERSONNEL]RESUME_LISTS.SNP)
RESUMES /FILE=DISK9:[CORPORATE.PERSONNEL]RESUMES.RDA -
/SNAPSHOT=(FILE=DISK10:[CORPORATE.PERSONNEL]RESUMES.SNP)
Example 4
The following RMU Move_Area command moves the database root for
the mf_personnel database and defines a new after-image journal
configuration, using the Aij_Options qualifier:
$ RMU/MOVE_AREA/ROOT=DISK1:[DATABASE.PERSONNEL]MF_PERSONNEL -
_$ /AIJ_OPTIONS=aij_config.opt MF_PERSONNEL/NOONLINE
The aij_config.opt file contains the following clauses:
JOURNAL IS ENABLED -
RESERVE 2 -
ALLOCATION IS 512 -
EXTENT IS 512 -
OVERWRITE IS DISABLED -
SHUTDOWN_TIMEOUT IS 120 -
NOTIFY IS DISABLED -
BACKUPS ARE MANUAL -
CACHE IS DISABLED
ADD AIJ1 -
FILE DISK2:[MFPERS_AIJ1]AIJ_ONE
ADD AIJ2 -
FILE DISK3:[MFPERS_AIJ2]AIJ_TWO
Example 5
The following example moves all the mf_personnel database storage
areas to the DISK3:[db] directory:
$ RMU/MOVE_AREA MF_PERSONNEL.RDB /ALL_AREAS/DIR=DISK3:[DB]
Example 6
The following example demonstrates the use of the Directory,
File, and Root qualifiers. In this example:
o The default directory is specified as DISK2:[DIR].
o The target directory and file name for the database root file
is specified with the Root qualifier. The target directory
specified with the Root qualifier overrides the default
directory specified with the Directory qualifier. Thus, Oracle
RMU moves the database root to DISK3:[ROOT] and names it
MOVEDRDB.RDB.
o The target directory for the EMPIDS_MID storage area is
DISK4:[FILE]. Oracle RMU moves EMPIDS_MID to DISK4:[FILE].
o The target file name for the EMPIDS_LOW storage area is
EMPIDS. Thus, Oracle RMU moves the EMPIDS_LOW storage area
to the DISK2 default directory (specified with the Directory
qualifier), and names the file EMPIDS.RDA.
o The target for the EMPIDS_LOW snapshot file is
DISK5:[SNAP]EMPIDS.SNP. Thus, Oracle RMU moves the EMPIDS_LOW
snapshot file to DISK5:[SNAP]EMPIDS.SNP.
o All the other storage area files and snapshot files in the
mf_personnel database are moved to DISK2:[DIR]; the file names
for these storage areas remain unchanged.
$ RMU/MOVE_AREA DISK1:[DB]MF_PERSONNEL.RDB /ALL -
_$ /DIRECTORY=DISK2:[DIR] -
_$ /ROOT=DISK3:[ROOT]MOVEDRDB.RDB -
_$ EMPIDS_MID/FILE=DISK4:[FILE], -
_$ EMPIDS_LOW/FILE=EMPIDS -
_$ /SNAPSHOT=(FILE=DISK5:[SNAP]EMPIDS.SNP)
21 – Open
Opens a database root file and maps its global section into
OpenVMS virtual address space. You can use the RMU
Open command in conjunction with the SQL ALTER DATABASE statement
to control access to the database. See the description of the
OPEN IS {AUTOMATIC | MANUAL} clause of the SQL ALTER DATABASE
statement in the Oracle Rdb SQL Reference Manual for details.
21.1 – Description
Once you use the RMU Open command to open a database, the
database remains open and mapped until you close it explicitly
with an RMU Close command and all users have exited the database
with the SQL DISCONNECT or EXIT statements. If you do not issue
the RMU Open command, the first user to attach to the database
incurs the cost of implicitly opening it and the last user to
detach from the database incurs the cost of implicitly closing
it.
The effect of the RMU Open command depends on whether you have
specified the OPEN IS AUTOMATIC or OPEN IS MANUAL clause to the
SQL ALTER DATABASE statement, as follows:
o OPEN IS AUTOMATIC
If you have specified automatic opening for your database,
users can invoke the database at any time without first
issuing an RMU Open command. (However, as noted above, it is
more efficient to open the database explicitly with an RMU Open
command and close it with an RMU Close command.)
o OPEN IS MANUAL
If you have specified manual opening for your database, the
RMU Open command must be issued before users can invoke the
database.
If you modify the database attribute from OPEN IS AUTOMATIC
to OPEN IS MANUAL, the modification takes effect only after
all users have detached from the database. (You can issue the
RMU/CLOSE/ABORT=FORCEX command to force all users to detach.)
Then, you must issue the RMU Open command before users can invoke
the database.
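For example, the switch to manual opening might be performed as
follows. This is a sketch only; the mf_personnel database is used
for illustration, and you should confirm the exact ALTER DATABASE
syntax in the Oracle Rdb SQL Reference Manual:
$ ! Force any remaining users to detach, then alter the attribute.
$ RMU/CLOSE/ABORT=FORCEX MF_PERSONNEL
$ SQL
SQL> ALTER DATABASE FILENAME MF_PERSONNEL OPEN IS MANUAL;
SQL> EXIT
$ ! The database must now be opened explicitly before use.
$ RMU/OPEN MF_PERSONNEL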
If you modify the database attribute from OPEN IS MANUAL to OPEN
IS AUTOMATIC, users can invoke the database at their discretion.
You do not have to issue the RMU Open command. However, if a
user has already opened the database manually when you make this
change to the database attribute, the modification takes effect
only after you manually close the database by issuing the RMU
Close command.
See the Oracle Rdb Guide to Database Maintenance for information
to help you decide whether to set your database attribute to
automatic or manual opening.
When you create a database, you have a choice of how to set up
buffers for database pages. You can choose either local or global
buffering. Global buffers can provide better system performance.
See the Oracle Rdb7 Guide to Database Performance and Tuning for
more information on setting the number of global buffers for your
system.
21.2 – Format
RMU/Open root-file-spec[,...]

Command Qualifiers                          Default

/Access=[Un]Restricted                      See description
/Global_Buffers[=(Total=i,User_Limit=j)]    See description
/Path                                       None
/Row_Cache=Disable                          See description
/[No]Statistics=Import                      /Nostatistics
/[No]Wait                                   /Nowait
21.3 – Parameters
21.3.1 – root-file-spec
root-file-spec[,...]
Specifies the database to open. If the database root file is
open, you receive an informational message. The default file
extension is .rdb.
21.4 – Command Qualifiers
21.4.1 – Access
Access=Restricted
Access=Unrestricted
Permits the database administrator to open the database
and restrict access to it in order to perform maintenance
operations or to restructure the database without interference
from users who want to gain access. If access is restricted
(Access=Restricted), the DBADM privilege is required for SQL
access to the database. If the Access=Unrestricted qualifier is
specified, users without the DBADM privilege can attach to the
database.
NOTE
Do not confuse the Oracle RMU Access=Restricted qualifier
with the SQL restricted access clause (available for use
with the following SQL statements: ATTACH, CREATE, DECLARE
ALIAS, and IMPORT). When you specify the restricted access
clause in SQL, only one user can attach to the database;
when you specify the Access=Restricted qualifier using
Oracle RMU, any number of users with the DBADM privilege
can access the database.
Furthermore, note that an SQL SHOW DATABASE command
displays the phrase "No Restricted Access" or the phrase
"Restricted Access" if access has been restricted using the
SQL restricted access clause. However, SHOW DATABASE tells
you nothing about whether Oracle RMU has opened a database
with access restricted. Use the RMU Dump command to view the
Oracle RMU access setting.
Refer to the Oracle Rdb SQL Reference Manual for more
information on the SQL restricted access clause.
If you specify the RMU Open command without the Access qualifier,
Oracle RMU opens the database in the same access mode as the last
RMU Open command performed. If the database was last opened as
restricted, issuing the RMU Dump command results in the following
message being displayed:
Access restricted to privileged users
Use this form of the RMU Open command to open the database on
other nodes without changing the access mode.
The access mode is clusterwide and the last mode set with the RMU
Open command is used for the entire cluster.
For example, if you open the mf_personnel database on node A with
the Access=Unrestricted qualifier, and open the same database
on node B with the Access=Restricted qualifier, the database
has restricted access on both node A and node B. However, the
commands do not terminate any user processes that may have gained
access while the database was unrestricted.
The access mode is stored in the database. Consequently, if
the system fails while access is restricted, access remains
restricted unless the unrestricted mode is explicitly requested.
The RMU Backup, RMU Restore, and RMU Copy_Database commands also
preserve the access mode.
The RMU Close command does not alter the access mode. You can
change the mode by using the RMU Open command only. You can use
the RMU Open command to restrict access to any database, whether
it was opened as AUTOMATIC or MANUAL.
The Access qualifier is a positional qualifier.
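For example, a maintenance session might restrict access while
the work is performed. The following is a sketch, using the
mf_personnel database for illustration:
$ ! Open with access restricted; only users with the DBADM
$ ! privilege can attach while maintenance is performed.
$ RMU/OPEN MF_PERSONNEL/ACCESS=RESTRICTED
   .
   .
   .
$ ! Close and reopen with unrestricted access when finished.
$ RMU/CLOSE MF_PERSONNEL
$ RMU/OPEN MF_PERSONNEL/ACCESS=UNRESTRICTED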
21.4.2 – Global Buffers
Global_Buffers[=(Total=i,User_Limit=j)]
Allows you to set the basic global buffer parameters on each
RMU Open command. If you specify the Global_Buffers qualifier,
you can optionally specify values for the Total and User_Limit
parameters:
o Total is the number of global buffers per node to allocate for
this opened instance of the database (minimum = 5).
o User_Limit is the maximum number of global buffers to be
allotted to any given user (minimum = 5, maximum = Total).
If you do not specify a value for the Total or User_Limit
option explicitly on the RMU Open command, the default is the
value that was established when the database was created.
If a database does not have global buffers enabled, the Global_
Buffers qualifier is ignored. Use the RMU Dump command to see
if global buffering is enabled or disabled. The RMU Dump command
also shows the global buffer count and the maximum global buffer
count per user. For example:
$ RMU/DUMP MF_PERSONNEL
*------------------------------------------------------------------
* Oracle Rdb V7.0-00 22-SEP-1995 10:11:51.14
*
* Dump of Database header
* Database: DISK1:[DATABASE]MF_PERSONNEL.RDB;1
*
*-------------------------------------------------------------------
Database Parameters:
Root filename is "DISK1:[DATABASE]MF_PERSONNEL.RDB;1"
Created at 7-APR-1994 16:50:09.01
Oracle Rdb structure level is 70.0
Maximum user count is 50
Maximum node count is 16
Database open mode is AUTOMATIC
Database close mode is AUTOMATIC
Database is available for READ WRITE access
Snapshot mode is NON-DEFERRED
Statistics are enabled
Storage Areas...
- Active storage area count is 10
- Reserved storage area count is 0
Buffers...
- Default user buffer count is 20
- Default recovery buffer count is 20
- Global buffers are enabled <--------
- Global buffer count is 250 <--------
- Maximum global buffer count per user is 5 <--------
- Buffer size is 6 blocks
.
.
.
Derived Data...
- Global section size
With global buffers disabled is 70962 bytes
With global buffers enabled is 975992 bytes
.
.
.
The Global_Buffers qualifier is a positional qualifier.
21.4.3 – Path
Path
Specifies the full or relative data dictionary path name in which
the definitions reside for the database you want to open.
The Path qualifier is a positional qualifier. The path name
cannot include wildcard characters.
21.4.4 – Row Cache=Disable
Disables row caching. This qualifier is provided for use with hot
standby databases. Row caching cannot be enabled on a hot standby
database while replication is active. If it is enabled, the hot
standby feature will not start.
21.4.5 – Statistics=Import
Statistics=Import
Nostatistics
Specifies that statistic information previously saved by using
the Statistics=Export qualifier on the RMU Close command is to be
loaded when the database is opened. The default is Nostatistics,
which indicates that statistic information is not loaded when the
database is opened.
After the database is opened using the Statistics=Import
qualifier, the saved statistics file is closed. The statistics
file is not automatically deleted. It can be deleted if it is no
longer needed.
When you use the Statistics=Import qualifier, statistics
information is automatically preserved in the event of abnormal
database closure. To ensure that the on-disk statistic information
files are accurate in the case of a node or monitor failure,
the statistic information files are checkpointed by the database
monitor every half-hour. The RMU Show Users command identifies
when the checkpoint for each database occurs.
The statistic files are not loaded if the physical schema of the
database has changed since the statistic file was created. This
means that the addition or deletion of storage areas, logical
areas, and record caches invalidate the statistic files. This
restriction prevents incorrect statistic information from being
loaded when intervening physical changes occur to the database.
Closing the database updates the statistic files and makes
them valid. Use the RMU Show Users command to verify that the
statistic information file was imported.
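For example, the following sketch preserves statistics across a
close and a subsequent reopen (the mf_personnel database is used
for illustration):
$ ! Save statistics when closing the database.
$ RMU/CLOSE MF_PERSONNEL/STATISTICS=EXPORT
$ ! Load the saved statistics when reopening it.
$ RMU/OPEN MF_PERSONNEL/STATISTICS=IMPORT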
21.4.6 – Wait
Wait
Nowait
Specifies whether the system prompt should be returned before
the database is completely open and available. Specify the
Wait qualifier if you want the system prompt returned when the
database is completely open and available. Specify Nowait if you
want the system prompt returned immediately, regardless of the
state of the open operation.
The Nowait qualifier is the default.
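For example, in a command procedure you might use the Wait
qualifier to ensure that the database is fully open before
starting jobs that attach to it. The following is a sketch; the
procedure name NIGHTLY_REPORTS.COM is illustrative:
$ ! Do not return until the database is completely open.
$ RMU/OPEN MF_PERSONNEL/WAIT
$ ! Jobs submitted now can attach without incurring the cost
$ ! of implicitly opening the database.
$ SUBMIT NIGHTLY_REPORTS.COM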
21.5 – Usage Notes
o To use the RMU Open command for a database, you must have the
RMU$OPEN privilege in the root file access control list (ACL)
for the database or the OpenVMS WORLD privilege.
21.6 – Examples
Example 1
The following command opens the mf_personnel database:
$ RMU/OPEN MF_PERSONNEL
Example 2
The following command opens the mf_personnel database in the
WORK directory, all the databases in the .TEST directory, and the
databases specified by the path names CDD$TOP.FINANCE and SAMPLE_
DB:
$ RMU/OPEN DISK1:[WORK]MF_PERSONNEL, CDD$TOP.FINANCE/PATH, -
_$ DISK1:[TEST]*, SAMPLE_DB/PATH
Example 3
This command opens the mf_personnel database, sets the total
global buffers for this opened instance of the database, and sets
the maximum number of global buffers that can be given to any
user. This example limits the number of users who can access this
database at any given time to 2 (Total divided by User_Limit).
You may want to increase the values of Total and User_Limit.
$ RMU/OPEN MF_PERSONNEL/GLOBAL_BUFFERS=(TOTAL=10,USER_LIMIT=5)
If you define a user limit value that is greater than the value
you specify for Total, you receive an error message:
$ RMU/OPEN MF_PERSONNEL/GLOBAL=(TOTAL=5,USER_LIMIT=10)
%RMU-F-VALGTRMAX, value (10) is greater than maximum allowed
value (5) for GLOBAL_BUFFERS.USER_LIMIT
Example 4
This command disables row caching.
$ RMU/OPEN MF_PERSONNEL.RDB /ROW_CACHE=DISABLE
22 – Optimize
Optimize After_Journal
Optimizes a backed up after-image journal (.aij) file for
database recovery (rollforward) operations by eliminating
unneeded and duplicate journal records, and by ordering
journal records. An optimized .aij (.oaij) file created by the
RMU Optimize After_Journal command provides better recovery
performance for your database than an unoptimized .aij file. A
this improved recovery performance is that the database is made
available to users sooner.
The RMU Optimize After_Journal command is used to read a backed
up .aij file on disk and write the .oaij file to tape or disk.
22.1 – Description
The RMU Optimize After_Journal command performs the following
optimizations to backed up .aij files:
o The .aij records from transactions that rolled back are
eliminated.
Because transactions that are rolled back in an .aij file are
not needed in a recovery operation, they are not part of an
optimized .aij file.
o Duplicate .aij records are eliminated.
Duplicate .aij records are .aij records that update the same
database record. During the rollforward of an .aij file,
duplicate .aij records cause a database record to be updated
multiple times. Each update supersedes the previous update,
meaning only the last update is relevant. Therefore, all but
the last update to a database record can be eliminated from an
.aij file.
o The .aij records are ordered by physical database key (dbkey).
Ordering .aij records by physical dbkey improves I/O
performance at recovery time.
See the Oracle Rdb Guide to Database Maintenance for further
description of optimizing .aij files.
The RMU Optimize After_Journal command has the following
restrictions:
o You can only optimize quiet-point .aij backup files.
o You cannot optimize a current .aij file.
o You cannot optimize an .oaij file.
NOTE
Because an .oaij file is not functionally equivalent to
the original .aij file, the original .aij file should not
be discarded after it has been optimized.
o You cannot use .oaij files with the following types of
recovery operations:
- By-area recovery operations (recovery operations that use
the RMU Recover command with the Areas qualifier).
- By-page recovery operations (recovery operations that use
the RMU Recover command with the Just_Corrupt qualifier).
- RMU Recover commands with the Until qualifier. The .oaij
file does not retain enough of the information from the
original .aij file for such an operation.
- Recovery operation where the database or any storage areas
(or both) are inconsistent with the .oaij file. A database
or storage area will be inconsistent with the .oaij file if
the transaction sequence number (TSN) of the last committed
transaction of the database or storage area is not equal
to the TSN of the last committed transaction in the open
record of the .aij file. The last committed TSN in the
.oaij file represents the last transaction committed to the
database at the time the original .aij file was created.
As a workaround for these restrictions against using .oaij
files in these recovery operations, use the original,
unoptimized .aij files in these recovery operations instead.
o Any .aij file that possibly contains incomplete transactions
cannot be optimized. Incomplete transactions can occur in an
.aij file under the following circumstances:
- The .aij file is backed up with a no-quiet-point backup
operation (because transactions can span .aij files)
Note that transactions in a fixed-size journal
configuration may span .aij files. Thus, if each journal
in a fixed-size journal configuration has been backed up on
a per-journal basis, the resulting files are equivalent to
a no-quiet-point .aij backup operation. These .aij backup
files cannot be optimized unless you perform a manual
quiet-point backup operation first. A quiet-point backup
operation forces a switch-over to another available .aij
file which ensures that no transaction spans two journal
files.
- The previous .aij file was backed up with a no-quiet-point
backup operation
- The .aij file has unresolved distributed transactions
There are no workarounds to these restrictions against
optimizing .aij files with incomplete transactions.
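For example, the following sketch produces an .aij backup file
that is eligible for optimization (the database and file names
are illustrative):
$ ! A quiet-point backup ensures no transaction spans journals.
$ RMU/BACKUP/AFTER_JOURNAL/QUIET_POINT MF_PERSONNEL MFP_AIJ.AIJ
$ ! The backed up file can then be optimized.
$ RMU/OPTIMIZE/AFTER_JOURNAL MFP_AIJ.AIJ MFP_AIJ.OAIJ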
22.2 – Format
RMU/Optimize/After_Journal aij-file optimized-aij-file

Command Qualifiers                         Defaults

/[No]Accept_Label                          /Noaccept_Label
/Active_IO=max-writes                      /Active_IO=3
/Block_Size=integer                        See description
/Crc[=Autodin_II]                          See description
/Crc=Checksum                              See description
/Nocrc                                     See description
/Density=density-value[,[No]Compaction]    See description
/Encrypt=({Value=|Name=}[,Algorithm=])     See description
/Format={Old_File|New_Tape}                /Format=Old_File
/[No]Group_Size=interval                   See description
/Label=(label-name-list)                   See description
/Librarian[=options]                       None
/[No]Log                                   Current DCL verify value
/[No]Media_Loader                          See description
/Owner=user-id                             See description
/Protection[=openvms-file-protection]      See description
22.3 – Parameters
22.3.1 – aij-file
The name of the .aij file that you want to optimize. It cannot be
a current .aij file.
The default file extension is .aij.
22.3.2 – optimized-aij-file
The name of the optimized .oaij file to be produced by the RMU
Optimize After_Journal command.
The default file extension is .oaij.
22.4 – Command Qualifiers
22.4.1 – Accept Label
Accept_Label
Specifies that Oracle RMU should keep the current tape label it
finds on a tape during an optimize-to-tape operation even if that
label does not match the default label or that specified with
the Label qualifier. Operator notification does not occur unless
the tape's protection, owner, or expiration date prohibit writing
to the tape. However, a message is logged (assuming logging is
enabled) to indicate that a label is being preserved and which
drive currently holds that tape.
This qualifier is particularly useful when your optimize-to-tape
operation employs numerous previously used (and thus labeled)
tapes and you want to preserve the labels currently on the tapes.
If you do not specify this qualifier, the default behavior
of Oracle RMU is to notify the operator each time it finds a
mismatch between the current label on the tape and the default
label (or the label you specify with the Label qualifier).
See the description of the Labels qualifier under this command
for information on default labels. See How Tapes are Relabeled
During a Backup Operation in the Usage_Notes help entry under
the Backup Database help entry for a summary of which labels are
applied under a variety of circumstances.
22.4.2 – Active IO
Active_IO=max-writes
Specifies the maximum number of write operations to the .oaij
file device that the RMU Optimize After_Journal command will
attempt simultaneously. This is not the maximum number of write
operations in progress; that value is the product of active
system I/O operations and the number of devices being written
to simultaneously.
The value of the Active_IO qualifier can range from 1 to 5; the
default is 3. Values larger than 3 might improve performance with
some tape drives.
22.4.3 – Block Size
Block_Size=integer
Specifies the maximum record size for the optimized .oaij file.
The size can vary between 2048 and 65,024 bytes. The default
value is device dependent. The appropriate block size is a
compromise between tape capacity and error rate. The block size
you specify must be larger than the largest page length in the
database.
22.4.4 – Crc[=Autodin II]
Crc[=Autodin_II]
Uses the AUTODIN-II polynomial for the 32-bit cyclic redundancy
check (CRC) calculation and provides the most reliable end-
to-end error detection. This is the default for NRZ/PE
(800/1600 bits/inch) tape drives.
Typing the Crc qualifier is sufficient to select the Crc=Autodin_
II option. It is not necessary to type the entire qualifier.
22.4.5 – Crc=Checksum
Crc=Checksum
Uses one's complement addition, which is the same computation
used to do a checksum of the AIJ data on disk. This is the
default for GCR (6250 bits/inch) tape drives and for TA78, TA79,
and TA81 drives.
The Crc=Checksum qualifier allows detection of data errors.
22.4.6 – Nocrc
Nocrc
Disables end-to-end error detection. This is the default for TA90
(IBM 3480 class) drives.
NOTE
The overall effect of the Crc=Autodin_II, Crc=Checksum, and
Nocrc defaults is to make tape reliability equal to that
of a disk. If you retain your tapes longer than 1 year,
the Nocrc default might not be adequate. For tapes retained
longer than 1 year, use the Crc=Checksum qualifier.
If you retain your tapes longer than 3 years, you should
always use the Crc=Autodin_II qualifier.
Tapes retained longer than 5 years could be deteriorating
and should be copied to fresh media.
See the Oracle Rdb Guide to Database Maintenance for details
on using the Crc qualifiers to avoid underrun errors.
22.4.7 – Density
Density=density-value[,[No]Compaction]
Specifies the density at which the output volume is to be
written. The default value is the format of the first volume (the
first tape you mount). You do not need to specify this qualifier
unless your tape drives support data compression or more than one
recording density.
The Density qualifier is applicable only to tape drives. Oracle
RMU returns an error message if this qualifier is used and the
target device is not a tape drive.
If your systems are running OpenVMS versions prior to 7.2-1,
specify the Density qualifier as follows:
o For TA90E, TA91, and TA92 tape drives, specify the number in
bits per inch as follows:
- Density = 70000 to initialize and write tapes in the
compacted format
- Density = 39872 or Density = 40000 for the noncompacted
format
o For SCSI (Small Computer System Interface) tape drives,
specify Density = 1 to initialize and write tapes, using the
drive's hardware data compression scheme.
o For other types of tape drives, you can specify a supported
Density value between 800 and 160,000 bits per inch.
o For all tape drives, specify Density = 0 to initialize and
write tapes at the drive's standard density.
Do not use the Compaction or NoCompaction keyword for systems
running OpenVMS versions prior to 7.2-1. On these systems,
compression is determined by the density value and cannot be
specified.
Oracle RMU supports the OpenVMS tape density and compression
values introduced in OpenVMS Version 7.2-1. The following table
lists the added density values supported by Oracle RMU.
DEFAULT 800 833 1600
6250 3480 3490E TK50
TK70 TK85 TK86 TK87
TK88 TK89 QIC 8200
8500 8900 DLT8000
SDLT SDLT320 SDLT600
DDS1 DDS2 DDS3 DDS4
AIT1 AIT2 AIT3 AIT4
LTO2 LTO3 COMPACTION NOCOMPACTION
If the OpenVMS Version 7.2-1 density values and the previous
density values are the same (for example, 800, 833, 1600, 6250),
the specified value is interpreted as an OpenVMS Version 7.2-1
value if the tape device driver accepts it, and as a previous
value if the tape device driver accepts only previous values.
For the OpenVMS Version 7.2-1 values that accept tape compression,
you can use the following syntax:
/DENSITY = (new_density_value,[No]Compaction)
In order to use the Compaction or NoCompaction keyword, you must
use one of the following density values that accepts compression:
DEFAULT 3480 3490E 8200
8500 8900 TK87 TK88
TK89 DLT8000 SDLT SDLT320
AIT1 AIT2 AIT3 AIT4
DDS1 DDS2 DDS3 DDS4
SDLT600 LTO2 LTO3
Refer to the OpenVMS documentation for more information about
density values.
22.4.8 – Encrypt
Encrypt=({Value=|Name=}[,Algorithm=])
The Encrypt qualifier encrypts the optimized after-image journal
backup file.
Specify a key value as a string or the name of a predefined
key. If no algorithm name is specified, the default is DESCBC.
For details on the Value, Name, and Algorithm parameters, see
HELP ENCRYPT.
This feature requires the OpenVMS Encrypt product to be installed
and licensed on this system.
This feature works only with a newer format backup file created
using the Format=New_Tape qualifier. Therefore, you must also
specify the Format=New_Tape qualifier with this command when you
use the Encrypt qualifier.
22.4.9 – Format
Format=Old_File
Format=New_Tape
The Format qualifier allows you to specify the format of the
files written by the RMU Optimize After_Journal command.
If you specify the default, Format=Old_File, the RMU Optimize
After_Journal command writes files in RMS format. This format is
provided for compatibility with prior versions of Oracle Rdb. If
you specify Format=Old_File, you must mount the media by using
the DCL MOUNT command before you issue the RMU Optimize After_
Journal command. Because the RMU Optimize After_Journal command
will use RMS to write to the tape, the tape must be mounted as
an OpenVMS volume (that is, do not specify the /FOREIGN qualifier
with the MOUNT command).
If you specify FOREIGN access although your backup file was
created using the Format=Old_File qualifier, you will not receive
an error message. The tape will be considered unlabeled, and
thus the operation will process whatever data is at the current
position of the tape (labels, data, or something else). A
failure will occur, but what will fail and how it will fail is
unpredictable because the type of information that will be read
is unknown. The result is an unlabeled tape that can be difficult
to use for recovery operations.
If you specify Format=New_Tape, the RMU Optimize After_Journal
command writes .aij files in a format similar to that used by
an RMU Backup command. If you specify Format=New_Tape, you must
mount the media by using the DCL MOUNT command before you issue
the RMU Optimize After_Journal command. The tape must be mounted
as a FOREIGN volume.
The following tape qualifiers have meaning only when used in
conjunction with the Format=New_Tape qualifier:
Active_IO
Block_Size
Crc
Group_Size
Density
Label
Owner_Uic
Protection
Rewind
Tape_Expiration
Follow these steps when you optimize an .aij file to tape:
1. Use the RMU Backup After_Journal command with the Format=Old_
File qualifier to back up the .aij file to disk.
2. Use the RMU Optimize After_Journal command with the
Format=New_Tape qualifier to optimize the backed up .aij file
to tape.
3. Use the DCL BACKUP command to create a copy of the backed up
.aij file as insurance.
If you enter the RMU Optimize After_Journal command with no
Format qualifier, the default is Format=Old_File.
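These steps might be performed as follows. This is a sketch;
the device and file names are illustrative:
$ ! Step 1: Back up the current .aij file to disk in RMS format.
$ RMU/BACKUP/AFTER_JOURNAL/QUIET_POINT/FORMAT=OLD_FILE -
_$ MF_PERSONNEL DISK1:[AIJBCK]MFP_AIJ.AIJ
$ ! Step 2: Mount the tape as a FOREIGN volume, then optimize
$ ! the backed up .aij file to tape.
$ MOUNT/FOREIGN MUA0:
$ RMU/OPTIMIZE/AFTER_JOURNAL/FORMAT=NEW_TAPE -
_$ DISK1:[AIJBCK]MFP_AIJ.AIJ MUA0:MFP_AIJ.OAIJ
$ ! Step 3: Keep a copy of the backed up .aij file as insurance.
$ BACKUP DISK1:[AIJBCK]MFP_AIJ.AIJ DISK2:[AIJSAVE]MFP_AIJ.AIJ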
22.4.10 – Group Size
Group_Size=interval
Nogroup_Size
Specifies the frequency at which XOR recovery blocks are written
to tape. The group size can vary from 0 to 100. Specifying a
group size of zero or specifying the Nogroup_Size qualifier
results in no XOR recovery blocks being written. The Group_Size
qualifier is applicable only to tape, and its default value is
device dependent. Oracle RMU returns an error message if this
qualifier is used and the target device is not a tape device.
22.4.11 – Label
Label=(label-name-list)
Specifies the 1- to 6-character string with which the volumes
of the .oaij file are to be labeled. The Label qualifier is
applicable only to tape volumes. You must specify one or more
label names when you use the Label qualifier.
You can specify a list of tape labels for multiple tapes. If you
list multiple tape label names, separate the names with commas,
and enclose the list of names within parentheses.
Use the label that you specify for the RMU Optimize After_Journal
command when you issue the RMU Recover command.
The Label qualifier can be used with indirect file references.
See the Indirect-Command-Files help entry for more information.
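For example, the following sketch labels the volumes of a
two-tape optimize operation (the label names, file names, and
device are illustrative):
$ RMU/OPTIMIZE/AFTER_JOURNAL/FORMAT=NEW_TAPE/LABEL=(OPT001,OPT002) -
_$ MFP_AIJ.AIJ MUA0:MFP_AIJ.OAIJ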
22.4.12 – Librarian
Librarian[=options]
Use the Librarian qualifier to back up files to data archiving
software applications that support the Oracle Media Management
interface. The backup file name specified on the command line
identifies the stream of data to be stored in the Librarian
utility. If you supply a device specification or a version number,
it is ignored.
The Librarian qualifier accepts the following options:
o Trace_file=file-specification
The Librarian utility writes trace data to the specified file.
o Level_Trace=n
Use this option as a debugging tool to specify the level of
trace data written by the Librarian utility. You can use a
pre-determined value of 0, 1, or 2, or a higher value defined
by the Librarian utility. The pre-determined values are:
- Level 0 traces all error conditions. This is the default.
- Level 1 traces the entry and exit from each Librarian
function.
- Level 2 traces the entry and exit from each Librarian
function, the value of all function parameters, and the
first 32 bytes of each read/write buffer, in hexadecimal.
o Logical_Names=(logical_name=equivalence-value,...)
You can use this option to specify a list of process logical
names that the Librarian utility can use to specify catalogs
or archives where Oracle Rdb backup files are stored,
Librarian debug logical names, and so on. See the specific
Librarian documentation for the definition of logical names.
The list of process logical names is defined by Oracle RMU
prior to the start of any Oracle RMU command that accesses the
Librarian utility.
The following OpenVMS logical names must be defined for use with
a Librarian utility before you execute an Oracle RMU backup or
restore operation. Do not use the Logical_Names option provided
with the Librarian qualifier to define these logical names.
o RMU$LIBRARIAN_PATH
This logical name must be defined so that the shareable
Librarian image can be loaded and called by Oracle RMU backup
and restore operations. The translation must include the file
type (for example, .exe), and must not include a version
number. The shareable Librarian image must be an installed
(known) image. See the Librarian utility documentation for
the name and location of this image and how it should be
installed.
o RMU$DEBUG_SBT
This logical name is not required. If it is defined, Oracle
RMU will display debug tracing information messages from
modules that make calls to the Librarian shareable image.
You cannot use device-specific qualifiers such as Rewind,
Density, or Label with the Librarian qualifier because the
Librarian utility, not Oracle RMU, handles the storage media.
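For example, assuming a hypothetical Librarian shareable image name
and location (see your Librarian utility documentation for the
actual image) and illustrative file names, a session might look
like this:
$ DEFINE RMU$LIBRARIAN_PATH SYS$SHARE:LIBRARIAN_SHR.EXE
$ RMU/OPTIMIZE/AFTER_JOURNAL -
_$ /LIBRARIAN=(TRACE_FILE=LIB_TRACE.LOG,LEVEL_TRACE=1) -
_$ MF_PERSONNEL.AIJ MFP_AIJ_STREAM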
22.4.13 – Log
Log
Nolog
Specifies that the optimization of the .aij file be reported to
SYS$OUTPUT. When optimization activity is logged, the output from
the Log qualifier provides the number of transactions committed
and rolled back. You can specify the Trace qualifier with the
Log qualifier. The default is the setting of the DCL VERIFY flag,
which is controlled by the DCL SET VERIFY command.
22.4.14 – Media Loader
Media_Loader
Nomedia_Loader
Use the Media_Loader qualifier to specify that the tape device
receiving the backup file has a loader or stacker. Use the
Nomedia_Loader qualifier to specify that the tape device does
not have a loader or stacker.
By default, if a tape device has a loader or stacker, Oracle
RMU should recognize this fact. However, occasionally Oracle RMU
does not recognize that a tape device has a loader or stacker.
Therefore, when the first backup tape fills, Oracle RMU issues a
request to the operator for the next tape, instead of requesting
the next tape from the loader or stacker. Similarly, sometimes
Oracle RMU behaves as though a tape device has a loader or
stacker when actually it does not.
If you find that Oracle RMU is not recognizing that your tape
device has a loader or stacker, specify the Media_Loader
qualifier. If you find that Oracle RMU expects a loader or
stacker when it should not, specify the Nomedia_Loader qualifier.
The Owner_Uic qualifier is synonymous with the Owner qualifier.
See the description of the Owner qualifier.
22.4.15 – Owner
Owner=user-id
Specifies the owner of the tape volume set. The owner is the user
who will be permitted to recover (roll forward) the database. The
user-id parameter must be one of the following types of OpenVMS
identifier:
o A user identification code (UIC) in [group-name,member-name]
alphanumeric format
o A UIC in [group-number,member-number] numeric format
o A general identifier, such as SECRETARIES
o A system-defined identifier, such as DIALUP
When used with tapes, the Owner qualifier applies to all
continuation volumes. The Owner qualifier applies to the first
volume only if the Rewind qualifier is also specified. If the
Rewind qualifier is not specified, the optimization operation
appends the file to a previously labeled tape, so the first
volume can have a different protection than the continuation
volumes.
22.4.16 – Protection
Protection[=openvms-file-protection]
Specifies the system file protection for the .oaij file produced
by the RMU Optimize After_Journal command.
The default file protection varies, depending on whether you
write the .oaij file to disk or tape. This is because tapes
do not allow delete or execute access and the SYSTEM account
always has both read and write access to tapes. In addition, a
more restrictive class accumulates the access rights of the less
restrictive classes.
If you do not specify the Protection qualifier, the default
protection is as follows:
o S:RWED,O:RE,G,W if the .oaij file is written to disk
o S:RW,O:R,G,W if the .oaij file is written to tape
If you specify the Protection qualifier explicitly, the
differences between tape and disk protection noted in the
preceding paragraph still apply. Thus, if you specify
Protection=(S,O,G:W,W:R), that protection on tape becomes
(S:RW,O:RW,G:RW,W:R).
22.4.17 – Recovery Method
Recovery_Method=Sequential
Recovery_Method=Scatter
Specifies how .aij records are to be ordered. You can specify one
of two possible order types:
o Sequential - .aij records are ordered by physical dbkey in an
  area:page:line sequence. This order type is the default.
o Scatter - .aij records are ordered by a sort key of
  page:area:line (page number, area number, and line number).
This order can allow the RMU Recover command to perform more
effective I/O prefetching and writing to multiple storage
areas simultaneously, typically where storage areas of the
database are distributed among multiple disk devices.
Scatter ordering allows more disk devices to be active during
the recovery process. This helps reduce idle CPU time and allows
the recovery to complete in less time. However, because database
configurations vary widely, Oracle recommends that you perform
tests with both Scatter and Sequential ordering of the optimized
after-image journals to determine which method produces the best
results for your system.
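For example, to test Scatter ordering for a database whose storage
areas are spread across several disks (illustrative file names):
$ RMU/OPTIMIZE/AFTER_JOURNAL/RECOVERY_METHOD=SCATTER -
_$ MF_PERSONNEL.AIJ MF_PERSONNEL.OAIJ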
22.4.18 – Rewind
Rewind
Norewind
Specifies that the tape that will contain the .oaij file be
rewound before processing begins. The tape will be initialized
according to the Label qualifier. The Norewind qualifier is
the default and causes the optimized .oaij file to be written
starting at the current logical end-of-tape (EOT).
The Norewind and Rewind qualifiers are applicable only to tape
devices. Oracle RMU returns an error message if these qualifiers
are used and the target device is not a tape device.
22.4.19 – Tape Expiration
Tape_Expiration=date-time
Specifies the expiration date of the .oaij file on tape. Note
that when Oracle RMU reads a tape, it looks at the expiration
date in the file header of the first file on the tape and assumes
the date it finds in that file header is the expiration date
for the entire tape. Therefore, if you are writing an .oaij
file to tape, specifying the Tape_Expiration qualifier only has
meaning if the .oaij file is the first file on the tape. You can
guarantee that the .oaij file will be the first file on the tape
by specifying the Rewind qualifier and overwriting any existing
files on the tape.
When the first file on the tape contains an expiration date
in the file header, you cannot overwrite the tape before the
expiration date unless you have the OpenVMS SYSPRV or BYPASS
privilege.
Similarly, when you attempt to perform a recover operation with
an .oaij file on tape, you cannot perform the recover operation
after the expiration date recorded in the first file on the tape
unless you have the OpenVMS SYSPRV or BYPASS privilege.
By default, no expiration date is written to the .oaij file
header. In this case, if the .oaij file is the first file on
the tape, the tape can be overwritten immediately. If the .oaij
file is not the first file on the tape, the ability to overwrite
the tape is determined by the expiration date in the file header
of the first file on the tape.
You cannot explicitly set a tape expiration date for an entire
volume. The volume expiration date is always determined by
the expiration date of the first file on the tape. The Tape_
Expiration qualifier cannot be used with a backup operation to
disk.
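For example, the following command (with an illustrative date,
label, and device) writes the .oaij file as the first file on the
tape and records an expiration date in its file header:
$ RMU/OPTIMIZE/AFTER_JOURNAL/FORMAT=NEW_TAPE/REWIND/LABEL=(MFPA01) -
_$ /TAPE_EXPIRATION=31-DEC-2026 MF_PERSONNEL.AIJ MUA0:MF_PERSONNEL.OAIJ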
22.4.20 – Trace
Trace
Notrace
Specifies that the optimization of the .aij file be traced. The
default is the Notrace qualifier, where optimization is not
traced. When optimization is traced, the output from the Trace
qualifier identifies transactions in the .aij file by transaction
sequence numbers (TSNs) and describes what Oracle RMU did with
each transaction during the optimization process. You can specify
the Log qualifier with the Trace qualifier.
22.5 – Usage Notes
o To use the RMU Optimize After_Journal command for a database,
you must have the RMU$BACKUP or RMU$RESTORE privilege in the
root file access control list (ACL) for the database or the
OpenVMS SYSPRV or BYPASS privilege.
o You cannot optimize an .aij file in the process of backing it
up. You must first back up the .aij file, using the RMU Backup
After_Journal command with the Format=Old_File qualifier, and
then optimize it.
o As part of the optimization process, Oracle RMU sorts journal
records by physical dbkey, which improves I/O performance of
the recovery. Because AIJ file optimization uses the OpenVMS
Sort/Merge utility (SORT/MERGE) to sort journal records, you
can improve the efficiency of the sort operation by changing
the number and location of the work files used by SORT/MERGE.
The number of work files is controlled by the RDMS$BIND_SORT_
WORKFILES logical name. The allowable values are 1 through 10
inclusive, with a default value of 2. The location of these
work files can be specified with device specifications, using
the SORTWORKn logical name (where n is a number from 0 to
9). See the OpenVMS documentation set for more information
on using SORT/MERGE. See the Oracle Rdb7 Guide to Database
Performance and Tuning for more information on using these
Oracle Rdb logical names.
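For example, the following commands (with illustrative device and
directory names) request four work files and place them on
separate disks:
$ DEFINE RDMS$BIND_SORT_WORKFILES 4
$ DEFINE SORTWORK0 DISK1:[SORTWORK]
$ DEFINE SORTWORK1 DISK2:[SORTWORK]
$ DEFINE SORTWORK2 DISK3:[SORTWORK]
$ DEFINE SORTWORK3 DISK4:[SORTWORK]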
o Do not use the OpenVMS Alpha High Performance Sort/Merge
utility (selected by defining the logical name SORTSHR to
SYS$SHARE:HYPERSORT) when using the RMU Optimize After_Journal
command. HYPERSORT does not support several of the interfaces
the command uses. In addition, HYPERSORT does not report
errors or warnings when it is used with the RMU Optimize
After_Journal command.
Make sure that the SORTSHR logical name is not defined to
reference HYPERSORT.EXE.
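For example, you can check the current definition and remove a
process-level definition if it references HYPERSORT:
$ SHOW LOGICAL SORTSHR
$ DEASSIGN SORTSHR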
o You can redirect the AIJ rollforward temporary work files
and the database recovery (DBR) redo temporary work files
to a different disk and directory location than the default
(SYS$DISK) by assigning a different directory to the RDM$BIND_
AIJ_WORK_FILE logical in the LNM$FILE_DEV name table and a
different directory to the RDM$BIND_DBR_WORK_FILE logical in
the LNM$SYSTEM_TABLE, respectively.
This can be helpful in alleviating I/O bottlenecks that might
be occurring in the default location.
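For example, the following commands (with illustrative directory
names) redirect the two kinds of work files:
$ DEFINE RDM$BIND_AIJ_WORK_FILE DISK2:[AIJ_WORK]
$ DEFINE/SYSTEM RDM$BIND_DBR_WORK_FILE DISK3:[DBR_WORK]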
o You can optimize an inactive .aij file that results, for
example, from backing up and renaming an extensible .aij file.
Backing up and renaming an extensible .aij file creates a new
active, primary .aij file and makes the previous .aij file
inactive. After optimizing the inactive .aij file, you can
use the OpenVMS BACKUP command to back up the .oaij file. Note
that you cannot optimize an active, primary .aij file.
o The RMU Optimize After_Journal command can read an .aij file
on disk or a backed up .aij file on disk or on tape that is in
the Old_File format, and it can write the .oaij file to disk
or to tape in either Old_File or New_Tape format.
o If an RMU Optimize After_Journal command is issued from a
batch job, tape requests and problems are reported to the
tape operator. This occurs because tape requests and problems
often require manual intervention, and if the RMU Optimize
After_Journal command was issued from a batch job, the only
available person might be the operator.
o When the RMU Optimize After_Journal command is issued
interactively and a tape request or problem arises, Oracle
RMU notifies the person who issued the command through the I/O
channel assigned to the logical name SYS$COMMAND. After being
notified of the problem, the user who issued the command can
either fix the problem (if the user has access to the tape
drive) or contact the tape operator to ask the tape operator
to fix the problem. The REQUEST command can be used to notify
the tape operator, as follows:
$ REQUEST/REPLY/TO=TAPES -
_$ "Please Write Enable tape ATOZBG on drive $255$MUA6:"
o You should use the density values added in OpenVMS Version
7.2-1 for OpenVMS tape device drivers that accept them because
previously supported values may not work as expected. If
previously supported values are specified for drivers that
support the OpenVMS Version 7.2-1 density values, the older
values are translated to the Version 7.2-1 density values if
possible. If the value cannot be translated, a warning message
is generated, and the specified value is used.
If you use density values added in OpenVMS Version 7.2-1 for
tape device drivers that do not support them, the values are
translated to acceptable values if possible. If the value
cannot be translated, a warning message is generated and the
density value is translated to the existing default internal
density value (MT$K_DEFAULT).
One of the following density-related errors is generated if
there is a mismatch between the specified density value and
the values that the tape device driver accepts:
%DBO-E-DENSITY, TAPE_DEVICE:[000000]DATABASE.BCK; does not support
specified density
%DBO-E-POSITERR, error positioning TAPE_DEVICE:
%DBO-E-BADDENSITY, The specified tape density is invalid for
this device
o If you want to use an unsupported density value, use the VMS
INITIALIZE and MOUNT commands to set the tape density. Do not
use the Density qualifier.
o Because data stream names representing the database are
generated based on the backup file name specified for the
Oracle RMU backup command, you must either use a different
backup file name to store the next backup of the database
to the Librarian utility or first delete the existing data
streams generated from the backup file name before the same
backup file name can be reused.
To delete the existing data streams stored in the Librarian
utility, you can use a Librarian management utility or the
Oracle RMU Librarian/Remove command.
22.6 – Examples
Example 1
The following command creates an .oaij file named mf_
personnel.oaij from the .aij file named mf_personnel.aij:
$ RMU/OPTIMIZE/AFTER_JOURNAL MF_PERSONNEL.AIJ MF_PERSONNEL.OAIJ
Example 2
The following example uses a density value with compression:
$ RMU/OPTIMIZE/AFTER_JOURNAL/DENSITY=(TK89,COMPACTION)/REWIND -
_$ /LABEL=(LABEL1,LABEL2) MF_PERSONNEL.AIJ TAPE1:MF_PERSONNEL.OAIJ, TAPE2:
23 – Populate Cache
Reads one or more tables and indexes from the database and stores
the data rows or index nodes in caches if they exist.
23.1 – Description
The RMU Populate_Cache command allows one or more tables and
indexes to be read from the database and stored in caches if they
exist.
Sorted indexes are read top-down, one index level at a time.
Hashed indexes are read by sequentially scanning the storage
areas containing the hashed indexes and fetching all nodes and
the system record from each database page. Data table rows are
read by sequentially scanning the storage areas containing the
table and fetching all rows of the relation.
23.2 – Format
RMU/Populate_Cache root-file-spec

Command Qualifiers                      Defaults

/Index=index-list                       None
/[No]Log                                Current DCL verify switch
/[No]Only_Cached                        /Only_Cached
/Statistics_Interval=n                  /Statistics_Interval=10
/Table=table-list                       None
/Transaction_Type=transaction-mode      /Transaction_Type=Automatic
23.3 – Parameters
23.3.1 – root-file-spec
Specifies the database root (.rdb) file. The default file type is
.rdb.
23.4 – Command Qualifiers
23.4.1 – Index
Index=Index-list
Specifies one or more indexes to fetch. All nodes are fetched
from each index. If you list multiple indexes, separate the index
names with a comma and enclose the list within parentheses.
Wildcard characters asterisk (*) and percent sign (%) are
allowed.
23.4.2 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request that information
about the operation be displayed, and the Nolog qualifier to
prevent it. If you specify neither qualifier, the default is the
current setting of the DCL verify switch. (The DCL SET VERIFY
command controls the DCL verify switch.)
23.4.3 – Only Cached
Only_Cached
Noonly_Cached
Specifies whether table or index content is to be read only if
the table or index has an associated row cache. The default is
to read data only from objects that have a cache. If the Noonly_
Cached qualifier is specified, then all data from the specified
tables or indexes is read.
23.4.4 – Statistics Interval
Statistics_Interval=n
Specifies whether statistics information is to be displayed
periodically during the populate operation. The default for this
qualifier is an interval of 10 seconds. If you do not use this
qualifier, no statistics are displayed.
23.4.5 – Table
Table=table-list
Specifies one or more tables to be processed. All rows are
fetched from each table. If you list multiple tables, separate
the table names with a comma, and enclose the list within
parentheses. Wildcard characters asterisk (*) and percent sign
(%) are allowed.
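For example, the following command (with table and index names
assumed from the sample mf_personnel database) populates the
caches, if any, for the named objects and logs its progress:
$ RMU/POPULATE_CACHE/LOG/TABLE=EMPLOYEES/INDEX=EMP* MF_PERSONNEL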
23.4.6 – Transaction Type
Transaction_Type=option
Allows you to specify the transaction mode for the transactions
used to perform the populate operation. Valid options are:
o Automatic
o Read_Only
o Noread_Only
You must specify an option if you use this qualifier.
If you do not specify any form of this qualifier, the
Transaction_Type=Automatic qualifier is the default. This
qualifier specifies that Oracle RMU is to determine the
transaction mode.
The Transaction_Type=Read_Only qualifier specifies that the
transactions used to perform the populate operation be set to
read-only mode. When you explicitly set the transaction type to
read-only, snapshots need not be enabled for all storage areas
in the database, but must be enabled for those storage areas
that are read. Otherwise, you receive an error and the populate
operation fails.
You might select this option if not all storage areas have
snapshots enabled and you are populating caches from objects that
are stored only in storage areas with snapshots enabled. In this
case, using the Transaction_Type=Read_Only qualifier allows you
to perform the populate operation and impose minimal locking on
other users of the database.
The Transaction_Type=Noread_Only qualifier specifies that
the transactions used for the populate operation be set to
read/write mode. You might select this option if you want to
avoid the growth of snapshot files that occurs during a
read-only transaction and are willing to incur the cost of
increased locking that occurs during a read/write transaction.
24 – Reclaim
Allows you to rapidly reclaim deleted dbkeys and locked space
from database pages.
24.1 – Description
Applications that specify the database attach attribute DBKEY
SCOPE IS ATTACH can accumulate locked space and locked dbkeys
within the database. If one user is connected to the database in
DBKEY SCOPE IS ATTACH mode, all users are forced to operate in
this mode, even if they are explicitly connected in TRANSACTION
mode. No dbkeys are reused until the ATTACH session disconnects.
The RMU Reclaim command allows database keys of deleted rows to
be rapidly reset in one or more storage areas. The RMU Reclaim
command reads and updates all pages in a storage area, and, where
possible, releases locked lines and locked free space so that
they are available for later allocation.
24.2 – Format
RMU/Reclaim root-file-spec

Command Qualifiers                      Defaults

/Area[=storage-area-list]               All storage areas
/[No]Log                                /Nolog
24.3 – Parameters
24.3.1 – root-file-spec
Specifies the database that contains locked areas or keys to be
reclaimed. The default file extension is .rdb.
24.4 – Command Qualifiers
24.4.1 – Area
Area=storage-area-list
Lists the storage areas to be reclaimed. The default is all
storage areas.
24.4.2 – Log
Log
NoLog
Displays a log message as each storage area is reclaimed. The
default is Nolog.
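For example, the following command (with illustrative storage area
names) reclaims two storage areas of the mf_personnel database and
logs each area as it is processed:
$ RMU/RECLAIM/AREA=(EMPIDS_LOW,EMPIDS_MID)/LOG MF_PERSONNEL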
24.5 – Usage Notes
o The RMU Reclaim command runs online and does not require
exclusive access. However, if there are any users connected
to the database in DBKEY SCOPE IS ATTACH mode, the RMU/RECLAIM
operation has greatly reduced effect. In order to release all
possible locked space, there should be no users attached to
the database in DBKEY SCOPE IS ATTACH mode.
o To allow database page locked space to be reclaimed, the
database session that controls the locked space must be
detached from the database. This can be accomplished by
having each attached session disconnect and reconnect to the
database.
25 – Recover
Completes a database reconstruction by processing past
transactions from the after-image journal (.aij) file or
optimized after-image journal (.oaij) file against a database
restored from a backup file.
25.1 – Description
You can use the RMU Recover command to apply the contents of an
.aij file to a restored copy of your database. Oracle RMU rolls
forward the transactions in the .aij file into the restored copy
of the database.
The RMU Recover command accepts a list of .aij or .oaij file
names. Unless you specify the Noautomatic qualifier, the RMU
Recover command attempts to automatically complete the recovery
operation by applying the journals currently associated with
the database in the current journal configuration if they are in
the recovery sequence. For example, if you specify the following
RMU Recover command, Oracle RMU not only recovers AIJ1, but also
AIJ2, AIJ3, and so on, for all journals in the recovery sequence:
$ RMU/RECOVER AIJ1
However, note that this automatic recovery feature means that
if you want to specify a termination condition, you must specify
the Until qualifier. Example 1 demonstrates how to specify a
termination condition with the Until qualifier.
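As a minimal sketch (the journal name, date, and time are
illustrative), the following command applies AIJ1 and the journals
that follow it in the recovery sequence, stopping at the specified
time:
$ RMU/RECOVER/UNTIL="15-JAN-2026 12:00:00.00" AIJ1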
If you are using extensible journals, you can also use the RMU
Backup After_Journal command to copy your database's .aij file to
tape, and truncate the original .aij file without shutting down
your database.
If you have backed up your .aij files (using the RMU Backup
After_Journal command), these .aij files are no longer part of
the current journal configuration and automatic recovery does
not take place because Oracle RMU does not know where to find
the .aij files. (There is one exception to this rule: if the only
.aij file that has been backed up is the first .aij file in the
recovery sequence, then automatic recovery occurs. You specify
the backed up .aij file on the Oracle RMU command line and Oracle
RMU can determine where the remaining on-disk .aij files reside.)
When automatic recovery does not, or cannot, occur, you must
specify the complete list of .aij files on the RMU Recover
command line to return your database to the desired state.
If your backup files were created using the Noquiet_Point
qualifier, you must provide the names of all the .aij files
in just one command. In addition, you must be careful to apply
these .aij files to the database in the order in which they
were created. Oracle RMU checks the validity of the journal
file entries against your database and applies only appropriate
transactions. If none of the transactions apply, you will receive
a warning message.
You can access your database for retrieval of data between
recovery steps, but you must not perform additional updates if
you want to perform more recovery steps.
If a system failure causes a recovery step to abort, you can
simply issue the RMU Recover command again. Oracle RMU scans
the .aij file until it finds the first transaction that has not
yet been applied to your restored database. Oracle RMU begins
recovery at that point.
25.2 – Format
RMU/Recover aij-file-name-list

Command Qualifiers                             Defaults

/Active_IO=max-reads                           /Active_IO=3
/Aij_Buffers=integer                           /Aij_Buffers=20
/Areas[=storage-area[,...]]                    All storage areas
/[No]Automatic                                 /Automatic
/[No]Confirm[=options]                         See description
/Encrypt=({Value=|Name=}[,Algorithm=])         See description
/Format={Old_File|New_Tape}                    /Format=Old_File
/Just_Corrupt                                  See description
/Label=(label-name-list)                       See description
/Librarian[=options]                           None
/[No]Log                                       Current DCL verify value
/[No]Media_Loader                              See description
/[No]Online                                    /Noonline
/Order_Aij_Files                               See description
/Output=file-name                              See description
/Prompt={Automatic|Operator|Client}            See description
/Resolve                                       See description
/[No]Rewind                                    /Norewind
/Root=root-file-name                           See description
/[No]Trace                                     See description
/Until=date-time                               Current time
25.3 – Parameters
25.3.1 – aij-file-name-list
A list of after-image journal (.aij) files to be applied to the
database. You can supply this list using one of the following
methods:
o List the .aij files on the command line in the order in which
they were created. In other words, the oldest .aij file must
be the first in the list.
o Use an asterisk (*) or percent sign (%) to represent the .aij
files. The .aij files are processed in the order that they are
presented by OpenVMS.
o Append all your .aij files into one file and supply a single
.aij file name. You must be certain when you append the files
that you append them in the order in which they were created.
o Use an indirect command file. Include an .aij file name on
each line of the command file. If the number of .aij files
needed for recovery is large, listing each one on the command
line can exceed the maximum allowed command-line length. You
can avoid this problem by using an indirect command file. See
the Indirect-Command-Files help entry for more information on
indirect command files.
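For example, assuming illustrative journal file names, you can
build an indirect command file listing the journals in creation
order and reference it on the command line (see the
Indirect-Command-Files help entry for the exact reference syntax):
$ CREATE AIJ_LIST.TXT
MFP_AIJ_1.AIJ
MFP_AIJ_2.AIJ
MFP_AIJ_3.AIJ
$ RMU/RECOVER "@AIJ_LIST.TXT"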
25.4 – Command Qualifiers
25.4.1 – Active IO
Active_IO=max-reads
Specifies the maximum number of read operations from a backup
device that the RMU Recover command attempts simultaneously. This
is not the maximum number of read operations in progress; that
value is a function of active system I/O operations.
The value of the Active_IO qualifier can range from 1 to 5. The
default value is 3. Values larger than 3 can improve performance
with some tape drives.
25.4.2 – Aij Buffers
Aij_Buffers=integer
Specifies the number of buffers to be used by the recovery
process. The default
is 20 buffers. The valid range is 2 to 1,048,576 buffers. If the
database root file is available, you can use the RMU Dump After_
Journal command with the Option=Statistics qualifier to find a
recommended value for this qualifier. See Dump After_journal for
details.
25.4.3 – Areas
Areas[=storage-area[,...]]
Specifies the areas you want to recover. You can specify each
storage area by name or by the area's ID number.
You should use the Areas qualifier only if you have inconsistent
storage areas to recover. The default for the Areas qualifier
is all storage areas that must be recovered to make the database
consistent.
If the Areas qualifier is specified, a recovery operation by area
recovers until the storage areas being rolled forward are current
with the other storage areas, then recovery stops, regardless of
the time specified by the Until qualifier.
When the Areas qualifier is not specified or the Areas=*
qualifier is specified, Oracle RMU recovers all the storage areas
in the database to the time specified by the Until qualifier
or to the time of the last committed transaction in the .aij
file that can be applied. When the Areas qualifier is specified
without a value, Oracle RMU recovers to the earliest consistent
state only those storage areas that are not current with the
database root (.rdb) file of the database.
The Areas qualifier works in the following manner:
o If the Areas qualifier is specified without a value, Oracle
RMU automatically determines what areas are inconsistent
and recovers those areas. If an inconsistent area cannot
be recovered because it is at a higher transaction sequence
number (TSN) value than the database root file, the entire
database is recovered even if the Areas qualifier was
specified.
See the Oracle Rdb Guide to Database Maintenance for
information on TSNs.
o If the Areas qualifier is omitted or the Areas qualifier is
specified as Areas=*, the entire database (all storage areas)
is recovered.
o If the Areas qualifier is specified as Areas=(A1,A2,A3), only
areas A1, A2, and A3 are recovered until they are consistent.
If one of these areas is already consistent, or if an area is
at a higher TSN value than the database root file, the entire
database is recovered.
o If the Online qualifier is specified with the Areas qualifier
(as in the first three list items) and the end result is that
the entire database must be recovered, an error message is
generated because you can recover only individual areas by
using the Online qualifier, not the entire database.
You cannot use the Areas qualifier with the Just_Corrupt
qualifier because the Areas qualifier implies recovery for all
named areas and pages in those areas. (That is, use of the Just_
Corrupt qualifier with the Areas qualifier is redundant.)
The Areas qualifier can be used with indirect file references.
See the Indirect-Command-Files help entry for more information.
25.4.4 – Automatic
Automatic
Noautomatic
Specifies whether or not Oracle RMU should attempt automatic
recovery of .aij files. If you specify the Noautomatic qualifier,
only the .aij file or files you list on the Oracle RMU command
line are applied. If you specify the Automatic qualifier, Oracle
RMU attempts to recover all the .aij files currently associated
with the database.
The Automatic qualifier is the default; Oracle RMU attempts to
recover all the .aij files currently associated with the database
unless the .aij files have been backed up.
See the description section for more information on how automatic
recovery works.
25.4.5 – Confirm
Confirm[=options]
Noconfirm
Specifies whether the RMU Recover command queries the operator
when an incorrect sequence of AIJ files is detected.
The default for interactive recoveries is Confirm, which prompts
the user to confirm whether to continue. The default for
RMU/RECOVER/NOCONFIRM and for RMU/RECOVER executed in batch jobs
is to terminate the recovery at the point where the out-of-sequence
AIJ file is detected (equivalent to RMU/RECOVER/CONFIRM=ABORT).
To override the default behavior, you can continue the roll-forward
and ignore the missing AIJ file either by specifying
RMU/RECOVER/CONFIRM to be prompted on whether to continue rolling
forward when there is an AIJ sequence gap, or by specifying
RMU/RECOVER/CONFIRM=CONTINUE if you do not want the prompt or are
executing the RMU Recover command in a batch job.
NOTE
Oracle recommends that, in general, an incorrect journal
sequence not be applied, because a corrupt database may result.
The Order_Aij_Files qualifier can be used to help ensure that
the specified journals are applied in the correct order.
The Confirm qualifier accepts the following options:
o CONFIRM=CONTINUE
Do not prompt the user if a sequence gap is detected on the
next AIJ file to be rolled forward; instead, ignore the
missing AIJ file and continue rolling forward.
o CONFIRM=ABORT
Do not prompt the user if a sequence gap is detected on the
next AIJ file to be rolled forward; instead, end the database
recovery at this point. This is the same as the default
behavior for RMU/RECOVER/NOCONFIRM and RMU/RECOVER in batch.
25.4.6 – Encrypt
Encrypt=({Value=|Name=}[,Algorithm=])
The Encrypt qualifier is used to recover the database from an
encrypted after image journal backup file.
Specify a key value as a string or the name of a predefined
key. If no algorithm name is specified, the default is DESCBC.
For details on the Value, Name and Algorithm parameters see HELP
ENCRYPT.
This feature requires the OpenVMS Encrypt product to be installed
and licensed on this system.
This feature works only with a newer-format backup file that
was created by using the Format=New_Tape qualifier. Therefore,
you must specify the Format=New_Tape qualifier with this command
if you also use the Encrypt qualifier. See also the descriptions
of the Format=Old_File and Format=New_Tape qualifiers.
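As an illustration, a recovery from an encrypted .aij backup might look like the following; MY_AIJ_KEY is a hypothetical predefined key name and the backup file name is a placeholder. Because DESCBC is the default, no Algorithm parameter is given:

```
$ ! Recover from an encrypted, tape-format .aij backup.
$ RMU/RECOVER/FORMAT=NEW_TAPE/ENCRYPT=(NAME=MY_AIJ_KEY) -
_$ DISK2:[BACKUP]MF_PERS_AIJBCK
```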
25.4.7 – Format
Format=Old_File
Format=New_Tape
Specifies whether the backed up or optimized .aij file was
written in the old (disk-optimized) or the new (tape-optimized)
format. The Format=Old_File qualifier is the default. You must
specify the same Format qualifier that was used with the RMU
Backup After_Journal command or the RMU Optimize After_Journal
command. If your .aij file resides on disk, you should use the
Format=Old_File qualifier.
If you specified the Format=Old_File qualifier when you optimized
or backed up the .aij file to tape, you must mount the backup
media by using the DCL MOUNT command before you issue the RMU
Recover command. Because the RMU Recover command will use RMS
to read the tape, the tape must be mounted as an OpenVMS volume
(that is, do not specify the /FOREIGN qualifier with the MOUNT
command).
If you specify the Format=New_Tape qualifier, you must mount the
backup media by using the DCL MOUNT /FOREIGN command before you
issue the RMU Recover command.
Similarly, if you mount the tape for OpenVMS access (that is,
you do not specify the /FOREIGN qualifier on the DCL MOUNT
command) although your .aij backup was created by using the
Format=New_Tape qualifier, you will receive an RMU-F-MOUNTFOR
error.
The following tape qualifiers have meaning only when used in
conjunction with the Format=New_Tape qualifier:
Active_IO
Label
Rewind
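For example, a recovery from a tape-format .aij backup might be set up as follows; the tape device MUA0:, the label, and the backup file name are placeholders:

```
$ ! New_Tape format requires the tape to be mounted /FOREIGN.
$ MOUNT/FOREIGN MUA0:
$ RMU/RECOVER/FORMAT=NEW_TAPE/LABEL=AIJBK1/REWIND MUA0:MF_PERS_AIJBCK
```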
25.4.8 – Just Corrupt
Just_Corrupt
Specifies that only inconsistent pages in the corrupt page table
(CPT) and areas marked as inconsistent should be recovered. You
can use this qualifier while users are attached to the database.
You can use the Just_Corrupt qualifier with the Until qualifier
to limit the recovery period to a particular point in time.
You cannot use the Areas qualifier with the Just_Corrupt
qualifier because the Areas qualifier implies recovery for all
named areas and pages in those areas. (That is, use of the Just_
Corrupt qualifier with the Areas qualifier is redundant.)
If you do not specify the Just_Corrupt qualifier, all pages are
recovered.
25.4.9 – Just Pages
Just_Pages
This qualifier is replaced with the Just_Corrupt qualifier
beginning in Oracle Rdb V7.0. See the description of the Just_
Corrupt qualifier.
25.4.10 – Label
Label=(label-name-list)
Specifies the 1- to 6-character string with which the volumes
of the backup file have been labeled. The Label qualifier is
applicable only to tape volumes. You must specify one or more
label names when you use the Label qualifier.
You can specify a list of tape labels for multiple tapes. If you
list multiple tape label names, separate the names with commas,
and enclose the list of names within parentheses.
In a normal recovery operation, the Label qualifier you specify
with the RMU Recover command should be the same Label qualifier
you specified with the RMU Backup After_Journal command to back
up your .aij files.
The Label qualifier can be used with indirect file references.
See the Indirect-Command-Files help entry for more information.
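For example, to recover from an .aij backup that spans two labeled tape volumes, you might enter a command such as the following (the label names, tape device, and file name are placeholders):

```
$ ! Search the labeled volumes AIJV01 and AIJV02 for the backup.
$ RMU/RECOVER/FORMAT=NEW_TAPE/LABEL=(AIJV01,AIJV02) MUA0:MF_PERS_AIJBCK
```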
25.4.11 – Librarian
Librarian=options
Use the Librarian qualifier to restore files from data archiving
software applications that support the Oracle Media Management
interface. The file name specified on the command line identifies
the stream of data to be retrieved from the Librarian utility. If
you supply a device specification or a version number it will be
ignored.
Oracle RMU supports retrieval using the Librarian qualifier only
for data that has been previously stored by Oracle RMU using the
Librarian qualifier.
The Librarian qualifier accepts the following options:
o Trace_file=file-specification
The Librarian utility writes trace data to the specified file.
o Level_Trace=n
Use this option as a debugging tool to specify the level of
trace data written by the Librarian utility. You can use a
pre-determined value of 0, 1, or 2, or a higher value defined
by the Librarian utility. The pre-determined values are:
- Level 0 traces all error conditions. This is the default.
- Level 1 traces the entry and exit from each Librarian
function.
- Level 2 traces the entry and exit from each Librarian
function, the value of all function parameters, and the
first 32 bytes of each read/write buffer, in hexadecimal.
o Logical_Names=(logical_name=equivalence-value,...)
You can use this option to specify a list of process logical
names that the Librarian utility can use to specify catalogs
or archives where Oracle Rdb backup files are stored,
Librarian debug logical names, and so on. See the specific
Librarian documentation for the definition of logical names.
The list of process logical names is defined by Oracle RMU
prior to the start of any Oracle RMU command that accesses the
Librarian application.
The following OpenVMS logical names must be defined for use with
a Librarian utility before you execute an Oracle RMU backup or
restore operation. Do not use the Logical_Names option provided
with the Librarian qualifier to define these logical names.
o RMU$LIBRARIAN_PATH
This logical name must be defined so that the shareable
Librarian image can be loaded and called by Oracle RMU backup
and restore operations. The translation must include the file
type (for example, .exe), and must not include a version
number. The shareable Librarian image must be an installed
(known) image. See the Librarian utility documentation for
the name and location of this image and how it should be
installed.
o RMU$DEBUG_SBT
This logical name is not required. If it is defined, Oracle
RMU will display debug tracing information messages from
modules that make calls to the Librarian shareable image.
You cannot use device-specific qualifiers such as Rewind,
Density, or Label with the Librarian qualifier because the
Librarian utility, not Oracle RMU, handles the storage media.
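For example, a recovery through a Librarian application might be set up as follows; the shareable image name and the stream name are placeholders that depend on your Librarian product:

```
$ ! The Librarian shareable image must be defined and installed;
$ ! see your Librarian documentation for its actual name.
$ DEFINE RMU$LIBRARIAN_PATH SYS$SHARE:LIBRARIAN_SHR.EXE
$ RMU/RECOVER/LIBRARIAN=(TRACE_FILE=LIB_TRACE.LOG,LEVEL_TRACE=1) -
_$ MF_PERS_AIJBCK
```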
25.4.12 – Log
Log
Nolog
Specifies that the recovery activity be logged. The default is
the setting of the DCL VERIFY flag, which is controlled by the
DCL SET VERIFY command. When recovery activity is logged, the
output from the Log qualifier provides the number of transactions
committed, rolled back, and ignored during the recovery process.
You can specify the Trace qualifier with the Log qualifier.
25.4.13 – Media Loader
Media_Loader
Nomedia_Loader
Use the Media_Loader qualifier to specify that the tape device
from which the .aij file is being read has a loader or stacker.
Use the Nomedia_Loader qualifier to specify that the tape device
does not have a loader or stacker.
By default, Oracle RMU should recognize whether a tape device
has a loader or stacker. However, occasionally Oracle RMU does
not recognize that a tape device has a loader or stacker; in
that case, when the first tape has been read, Oracle RMU issues
a request to the operator for the next tape instead of requesting
the next tape from the loader or stacker. Similarly, Oracle RMU
sometimes behaves as though a tape device has a loader or stacker
when it actually does not.
If you find that Oracle RMU is not recognizing that your tape
device has a loader or stacker, specify the Media_Loader
qualifier. If you find that Oracle RMU expects a loader or
stacker when it should not, specify the Nomedia_Loader qualifier.
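For example, if Oracle RMU does not detect the loader on your tape device, you might enter a command such as the following (the tape device and file name are placeholders):

```
$ ! Force Oracle RMU to request tapes from the loader.
$ RMU/RECOVER/FORMAT=NEW_TAPE/MEDIA_LOADER MUA0:MF_PERS_AIJBCK
```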
25.4.14 – Online
Online
Noonline
Specifies that the recover operation be performed while other
users are attached to the database. The Online qualifier can only
be used with the Areas or Just_Corrupt qualifier. The areas or
pages to be recovered are locked for exclusive access, so the
operation is not compatible with other uses of the data in the
areas or on the pages specified.
The default is the Noonline qualifier.
25.4.15 – Order Aij Files
Order_Aij_Files
Specifies that the input after-image journal files are to
be processed in ascending order by sequence number. The .aij
files are each opened, the first block is read to determine the
sequence number, and the files are closed prior to the sequence
number sorting operation. The Order_Aij_Files qualifier can be
especially useful if you use wildcards to specify .aij files.
The Order_Aij_Files qualifier can also eliminate some .aij files
from processing if they are known to be prior to the database
recovery sequence starting point.
Note that because .aij backup files might contain more than one
journal sequence, it is not always possible for RMU to eliminate
every journal file that might otherwise appear to be unneeded.
However, RMU does skip any journal that the database recovery
restart information shows with certainty will not be needed.
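For example, using a wildcard together with the Order_Aij_Files qualifier lets Oracle RMU determine the correct application order for you (the device and directory are placeholders):

```
$ ! Sort all matching .aij backup files by sequence number
$ ! before rolling forward, skipping any known to be unneeded.
$ RMU/RECOVER/LOG/ORDER_AIJ_FILES DISK2:[BACKUP]*.AIJ
```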
25.4.16 – Output
Output=file-name
Redirects the log and trace output (selected with the Log and
Trace qualifiers) to the named file. If this qualifier is not
specified, the output generated by the Log and Trace qualifiers,
which can be voluminous, is displayed on your terminal.
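For example, to capture the potentially voluminous Log and Trace output in a file rather than on the terminal (the output file and database names are placeholders):

```
$ ! Write the recovery log and trace output to a file.
$ RMU/RECOVER/LOG/TRACE/OUTPUT=RECOVER_TRACE.LOG MF_PERSONNEL.AIJ
```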
25.4.17 – Prompt
Prompt=Automatic
Prompt=Operator
Prompt=Client
Specifies where server prompts are to be sent. When you specify
Prompt=Automatic, prompts are sent to the standard input device,
and when you specify Prompt=Operator, prompts are sent to the
server console. When you specify Prompt=Client, prompts are sent
to the client system.
25.4.18 – Resolve
Resolve
Recovers a corrupted database and resolves an unresolved
transaction by completing the transaction.
See the help entry for the RMU Recover Resolve command for a
description of the options available with the Resolve qualifier.
25.4.19 – Rewind
Rewind
Norewind
Specifies that the tape that contains the backup file be rewound
before processing begins. The tape is searched for the backup
file starting at the beginning-of-tape (BOT). The Norewind
qualifier is the default and causes the backup file to be
searched starting at the current tape position.
The Rewind and Norewind qualifiers are applicable only to tape
devices. Oracle RMU returns an error message if these qualifiers
are used and the target device is not a tape device.
25.4.20 – Root
Root=root-file-name
Specifies the name of the database to which the journal should
be applied. The Root qualifier allows you to specify a copy of a
database instead of the original whose file specification is in
the .aij file. Use the Root qualifier to specify the new location
of your restored database root (.rdb) file.
Specifying this qualifier lets you roll forward a database copy
(possibly residing on a different disk) by following these steps:
1. Use the RMU Backup command to make a backup copy of the
database:
$ RMU/BACKUP MF_PERSONNEL.RDB MF_PERS_FULL_BU.RBF
This command writes a backup file of the database mf_personnel
to the file mf_pers_full_bu.rbf.
2. Use the RMU Restore command with the Root and Directory
qualifiers, stating the file specifications of the database
root and storage area files in the database copy.
$ RMU/RESTORE/ROOT=DB3:[USER]MF_PERSONNEL/DIRECTORY=DB3:[USER] -
_$ MF_PERS_FULL_BU
This command restores the database on disk DB3: in the
directory [USER]. Default file names and file extensions are
used.
3. If the database uses after-image journaling, you can use the
RMU Recover command to roll forward the copy.
$ RMU/RECOVER DBJNL.AIJ/ROOT=DB3:[USER]MF_PERSONNEL.RDB
Thus, transactions processed and journaled since the backup
operation are recovered on the copy on the DB3: disk.
Correct operation of this procedure requires that there are no
write transactions for the restored copy between the restore and
recover steps.
If you do not specify the Root qualifier, Oracle RMU examines
the .aij file to determine the exact name of the database root
(.rdb) file to which the journaled transactions will be applied.
This name, which was stored in the .aij file, is the full file
specification that your .rdb file had when after-image journaling
was enabled.
The journal file for a single-file database does not include the
file name for the database; to recover a single-file database,
you must specify the location of the database to be recovered by
using the Root qualifier.
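For example, to recover a single-file database you must name its location explicitly; the device, directory, and file names below are placeholders:

```
$ ! Single-file databases always require the Root qualifier.
$ RMU/RECOVER/ROOT=DISK1:[DB]SINGLE_DB.RDB SINGLE_DB.AIJ
```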
25.4.21 – Trace
Trace
Notrace
Specifies that the recovery activity be logged. The default is
the setting of the DCL VERIFY flag, which is controlled by the
DCL SET VERIFY command. When recovery activity is logged, the
output from the Trace qualifier identifies transactions in the
.aij file by TSN and describes what Oracle RMU did with each
transaction during the recovery process. You can specify the Log
qualifier with the Trace qualifier.
25.4.22 – Until
Until=date-time
Use the Until qualifier to limit the recovery to those
transactions in the journal file bearing a starting timestamp no
later than the specified time. For example, suppose your database
fails today, but you have reason to believe that something
started to go wrong at noon yesterday. You might decide that you
only want to restore the database to the state it was in as of
noon yesterday. You could use the Until qualifier to specify that
you only want to recover those transactions that have a timestamp
of noon yesterday or earlier.
If you do not specify the Until qualifier, all committed
transactions in the .aij file will be applied to your database.
If you specify the Until qualifier, but do not specify a date-
time, the current time is the default.
If the Until qualifier is specified with a recover-by-area
operation, the operation terminates when either the specified
time is reached in the transaction sequence or the specified
storage areas become consistent with the other storage areas,
whichever condition occurs first.
25.5 – Usage Notes
o To use the RMU Recover command for a database, you must have
the RMU$RESTORE privilege in the root file access control
list (ACL) for the database or the OpenVMS SYSPRV or BYPASS
privilege.
o You can use the RMU Backup After_Journal command to copy an
extensible .aij file to tape and truncate the original .aij
file without shutting down your database.
o Transactions are applied to the restored copy of your database
in the order indicated by their commit sequence number and the
commit record in the .aij file; timestamps are not used for
this purpose. Therefore, you need not be concerned over time
changes made to the system (for example, resetting the time
for United States daylight saving time) or inconsistencies
in the system time on different nodes in a cluster. The only
occasion when timestamps are considered in the application of
.aij files is when you specify the Until qualifier. In this
case, the timestamp is used only as the point at which to stop
the recovery, not as a means to serialize the order in which
transactions are applied. See the description of the Until
qualifier for more information.
o You can redirect the AIJ rollforward temporary work files
and the database recovery (DBR) redo temporary work files
to a different disk and directory location than the default
(SYS$DISK) by assigning a different directory to the RDM$BIND_
AIJ_WORK_FILE logical in the LNM$FILE_DEV name table and a
different directory to the RDM$BIND_DBR_WORK_FILE logical in
the LNM$SYSTEM_TABLE, respectively.
This can be helpful in alleviating I/O bottlenecks that might
be occurring in the default location.
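For example, the work files could be redirected to a faster disk as follows; the disk and directory names are placeholders. Note that RDM$BIND_AIJ_WORK_FILE is defined in the process (LNM$FILE_DEV) context, while RDM$BIND_DBR_WORK_FILE must be defined in the system table:

```
$ ! Redirect AIJ rollforward work files (process context).
$ DEFINE RDM$BIND_AIJ_WORK_FILE FAST_DISK:[RMU_WORK]
$ ! Redirect DBR redo work files (system logical name table).
$ DEFINE/SYSTEM RDM$BIND_DBR_WORK_FILE FAST_DISK:[RMU_WORK]
```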
o In a normal recovery operation, the Format and Label
qualifiers you specify with the RMU Recover command should
be the same Format and Label qualifiers you specified with the
RMU Backup After_Journal command to back up your .aij files or
with the RMU Optimize After_Journal command to optimize your
.aij files.
For more information on the type of access to specify when
mounting tapes, see the description of the Format=Old_File and
Format=New_Tape qualifiers in the Format section.
o The following restrictions apply to using optimized .aij files
with recovery operations:
- Optimized .aij files cannot be used as part of by-area
recovery operations (recovery operations that use the RMU
Recover command with the Area qualifier).
- Optimized .aij files cannot be used as part of by-page
recovery operations (recovery operations that use the RMU
Recover command with the Just_Corrupt qualifier).
- Optimized .aij files cannot be specified for an RMU Recover
command that includes the Until qualifier. The optimized
.aij file does not retain enough of the information from
the original .aij file for such an operation.
- Optimized .aij files cannot be used with a recovery
operation if the database has been modified since the .aij
file was optimized.
The workaround for these restrictions against using optimized
.aij files in recovery operations is to use the original,
unoptimized .aij file in the recovery operation instead.
o You can read your database between recovery steps, but you
must not perform additional updates if you want to perform
more recovery steps.
o If a system failure causes a recovery step to abort, you can
simply issue the RMU Recover command again. Oracle RMU scans
the .aij file until it finds the first transaction that has
not yet been applied to your restored database. Oracle RMU
begins recovery at that point.
o You can use the RMU Recover command to apply the contents of
an .aij file to a restored copy of your database. Oracle RMU
will roll forward the transactions in the .aij file into the
restored copy of the database. You can use this feature to
maintain an up-to-date copy of your database for fast recovery
after a failure. To do this, use the RMU Recover command to
periodically apply your .aij files to a separate copy of the
database.
When you employ this procedure for fast recovery, you must
be absolutely certain that no one will execute an update
transaction on the database copy. Should someone execute an
update transaction, it might result in the inability to apply
the .aij files correctly.
o See the Oracle Rdb Guide to Database Maintenance for
information on the steps Oracle RMU follows in tape label
checking.
o When you use an optimized after-image journal for recovery,
the optimal number of buffers specified with the Aij_Buffers
qualifier depends on the number of active storage areas
being recovered. For those journals optimized with Recover_
Method=Sequential, a buffer count of 250 to 500 is usually
sufficient.
When you use journals optimized with Recover_Method=Scatter,
reasonable performance can usually be attained with a buffer
count of about five times the number of active storage areas
being recovered (with a minimum of about 250 to 500 buffers).
o The number of asynchronous prefetch (APF) buffers is also a
performance factor during recovery. For recovery operations
of optimized after-image journals, the RMU Recover command
sets the number of APF buffers (also known as the APF depth)
based on the values of the ASTLM and DIOLM process quotas and
the specified AIJ_Buffers value. The APF depth is set to the
maximum of:
- 50% of the ASTLM process quota
- 50% of the DIOLM process quota
- 25% of the specified AIJ_Buffers value
The accounts and processes that perform RMU Recover operations
should be reviewed to ensure that the various quotas are set
for high levels of I/O performance. The following table
lists suggested quota values for recovery performance.
Quota Setting
DIOLM Equal to or greater than half of the count of
database buffers specified by the AIJ_Buffers
qualifier. Minimum of 250.
BIOLM Equal to or greater than the setting of DIOLM.
ASTLM Equal to or greater than 50 more than the setting of
DIOLM.
BYTLM Equal to or greater than 512 times the database
buffer size times one half the value of database
buffers specified by the AIJ_Buffers qualifier.
Based on a 12-block buffer size and the desire
to have 100 asynchronous I/O requests outstanding
(either reading or writing), the minimum suggested
value is 614,400 for a buffer count of 200.
WSQUOTA Large enough to avoid excessive page faulting.
WSEXTENT
FILLM 50 more than the count of database storage areas and
snapshot storage areas.
25.6 – Examples
Example 1
In the following example, the RMU Recover command requests
recovery from the .aij file personnel.aij located on PR$DISK in
the SMITH directory. It specifies that recovery should continue
until 1:30 P.M. on May 7, 1996. Because the Trace qualifier is
specified, the RMU Recover command displays detailed information
about the recovery operation to SYS$OUTPUT.
$ RMU/RECOVER/UNTIL="07-MAY-1996 13:30"/TRACE PR$DISK:[SMITH]PERSONNEL
%RMU-I-LOGRECDB, recovering database file DISK1:[DB.70]MF_PERSONNEL.RDB;1
%RMU-I-LOGRECSTAT, transaction with TSN 0:256 committed
%RMU-I-AIJONEDONE, AIJ file sequence 0 roll-forward operations completed
%RMU-I-AIJAUTOREC, starting automatic after-image journal recovery
%RMU-I-AIJONEDONE, AIJ file sequence 1 roll-forward operations completed
%RMU-W-NOTRANAPP, no transactions in this journal were applied
%RMU-I-AIJALLDONE, after-image journal roll-forward operations completed
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJFNLSEQ, to start another AIJ file recovery, the sequence number
needed will be 1
Example 2
The following example shows how to use .aij files to recover a
database:
SQL> CREATE DATABASE FILENAME DISK1:[SAMPLE]TEST_DB
cont> RESERVE 5 JOURNALS;
SQL> --
SQL> -- Use the DISCONNECT ALL statement to detach from the database,
SQL> -- then issue the ALTER DATABASE statement that automatically
SQL> -- invokes the specified database.
SQL> --
SQL> DISCONNECT ALL;
SQL> --
SQL> -- Create after-image journaling. The .aij files are given the
SQL> -- names aij_one.aij and aij_two.aij (and are placed on a disk
SQL> -- other than the disk holding the .rdb and .snp files):
SQL> --
SQL> ALTER DATABASE FILENAME DISK1:[SAMPLE]TEST_DB
cont> JOURNAL IS ENABLED
cont> ADD JOURNAL AIJ_ONE
cont> FILENAME 'USER$DISK:[CORP]AIJ_ONE'
cont> BACKUP FILENAME 'USER$DISK2:[CORP]AIJ_ONE'
cont> ADD JOURNAL AIJ_TWO
cont> FILENAME 'USER$DISK3:[CORP]AIJ_TWO'
cont> BACKUP FILENAME 'USER$DISK4:[CORP]AIJ_TWO';
SQL> EXIT
$ !
$ ! Using the RMU Backup command, make a backup copy of the database.
$ ! This command ensures that you have a copy of the
$ ! database at a known time, in a known state.
$ !
$ RMU/BACKUP DISK1:[SAMPLE]TEST_DB USER2:[BACKUPS]TEST_BACKUP.RBF
$ !
$ ! Now you can use SQL with after-image journaling enabled.
$ !
$ SQL
SQL> --
SQL> -- Attach to the database and perform some data definition
SQL> -- and storage.
SQL> --
SQL> ATTACH 'FILENAME DISK1:[SAMPLE]TEST_DB';
SQL> CREATE TABLE TABLE1 (NEW_COLUMN CHAR(10));
SQL> INSERT INTO TABLE1 (NEW_COLUMN) VALUES ('data');
SQL> COMMIT;
SQL> EXIT
$ !
$ ! Imagine that a disk failure occurred here. In such a situation,
$ ! the current database is inaccessible. You need a prior copy
$ ! of the database to roll forward all the transactions in the
$ ! .aij file.
$ !
$ !
$ ! You know that the backup file of the database is
$ ! uncorrupted. Use the RMU Restore command to restore and recover
$ ! the database. You do not have to issue the RMU Recover command
$ ! because the RMU Restore command will automatically recover the
$ ! database.
$ !
$ RMU/RESTORE/NOCDD_INTEGRATE/DIR=DDV21:[TEST] -
_$ USER2:[BACKUPS]TEST_BACKUP.RBF
%RMU-I-AIJRSTAVL, 2 after-image journals available for use
%RMU-I-AIJRSTMOD, 1 after-image journal marked as "modified"
%RMU-I-AIJISON, after-image journaling has been enabled
%RMU-W-DOFULLBCK, full database backup should be done to ensure
future recovery
%RMU-I-LOGRECDB, recovering database file DDV21:[TEST]TEST_DB.RDB;1
%RMU-I-AIJAUTOREC, starting automatic after-image journal recovery
%RMU-I-AIJONEDONE, AIJ file sequence 0 roll-forward operations completed
%RMU-I-AIJONEDONE, AIJ file sequence 1 roll-forward operations completed
%RMU-W-NOTRANAPP, no transactions in this journal were applied
%RMU-I-AIJALLDONE, after-image journal roll-forward operations completed
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJFNLSEQ, to start another AIJ file recovery, the sequence
number needed will be 1
Example 3
The following example demonstrates how the recovery operation
works when there are .aij backup files to be applied. First you
must restore the database by using the RMU Restore command with
the Norecovery qualifier, then apply the backed up .aij file
by using the RMU Recover command. Oracle RMU will complete the
recovery with the .aij files that were current when the restore
operation was invoked. This example assumes that three .aij files
have been added to the mf_personnel database prior to the first
shown backup operation and that journaling is enabled.
$ ! Create a backup file of the complete and full database.
$ !
$ RMU/BACKUP MF_PERSONNEL DISK1:[BACKUPS]MF_PERSONNEL_BCK.RBF
$ !
$ ! Updates are made to the SALARY_HISTORY and DEPARTMENTS tables.
$ !
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> UPDATE SALARY_HISTORY
cont> SET SALARY_END='20-JUL-1993 00:00:00.00'
cont> WHERE SALARY_START='14-JAN-1983 00:00:00'
cont> AND EMPLOYEE_ID='00164';
SQL> INSERT INTO DEPARTMENTS
cont> (DEPARTMENT_CODE, DEPARTMENT_NAME,
cont> MANAGER_ID,BUDGET_PROJECTED, BUDGET_ACTUAL)
cont> VALUES ('WLNS', 'WELLNESS CENTER', '00188',0,0);
SQL> COMMIT;
SQL> DISCONNECT DEFAULT;
SQL> EXIT
$ !
$ ! Create a backup file of the .aij files.
$ !
$ RMU/BACKUP/AFTER_JOURNAL MF_PERSONNEL DISK2:[BACKUP]MF_PERS_AIJBCK
$ !
$ ! An additional update is made to the DEPARTMENTS table.
$ !
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> INSERT INTO DEPARTMENTS
cont> (DEPARTMENT_CODE, DEPARTMENT_NAME, MANAGER_ID,BUDGET_PROJECTED,
cont> BUDGET_ACTUAL)
cont> VALUES ('facl', 'FACILITIES', '00190',0,0);
SQL> COMMIT;
SQL> DISCONNECT DEFAULT;
SQL> EXIT;
$
$ ! Assume the disk holding the SALARY_HISTORY and DEPARTMENTS
$ ! storage areas is lost. Restore only those areas. Specify
$ ! the Norecovery qualifier since you will need to apply the
$ ! .aij backup file.
$
$ RMU/RESTORE/AREA DISK1:[BACKUPS]MF_PERSONNEL_BCK.RBF -
_$ SALARY_HISTORY, DEPARTMENTS/NORECOVER
$ !
$ ! Now recover the database. Although you only specify the .aij
$ ! backup file, Oracle RMU will automatically continue the
$ ! recovery with the current journals in the recovery sequence after
$ ! the backed up .aij files have been applied.
$ !
$ RMU/RECOVER/LOG DISK2:[BACKUP]MF_PERS_AIJBCK
%RMU-I-AIJBADAREA, inconsistent storage area DISK3:[STO_AREA]
DEPARTMENTS.RDA;1 needs AIJ sequence number 0
%RMU-I-AIJBADAREA, inconsistent storage area
DISK3:[STO_AREA]SALARY_HISTORY.RDA;1 needs AIJ sequence number 0
%RMU-I-LOGRECDB, recovering database file
DISK3:[DATABASE]MF_PERSONNEL.RDB;1
%RMU-I-LOGOPNAIJ, opened journal file
DISK2:[BACKUP]MF_PERS_AIJBCK.AIJ;1
%RMU-I-AIJONEDONE, AIJ file sequence 0 roll-forward operations
completed
%RMU-I-LOGRECOVR, 3 transactions committed
%RMU-I-LOGRECOVR, 0 transactions rolled back
%RMU-I-LOGRECOVR, 0 transactions ignored
%RMU-I-AIJNOACTIVE, there are no active transactions
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJNXTSEQ, to continue this AIJ file recovery, the sequence
number needed will be 1
%RMU-I-AIJAUTOREC, starting automatic after-image journal recovery
%RMU-I-LOGOPNAIJ, opened journal file DISK4:[CORP]AIJ_TWO.AIJ;1
%RMU-I-AIJONEDONE, AIJ file sequence 1 roll-forward operations
completed
%RMU-I-LOGRECOVR, 2 transactions committed
%RMU-I-LOGRECOVR, 0 transactions rolled back
%RMU-I-LOGRECOVR, 0 transactions ignored
%RMU-I-AIJNOACTIVE, there are no active transactions
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJNXTSEQ, to continue this AIJ file recovery, the sequence
number needed will be 2
%RMU-I-AIJALLDONE, after-image journal roll-forward operations
completed
%RMU-I-LOGSUMMARY, total 5 transactions committed
%RMU-I-LOGSUMMARY, total 0 transactions rolled back
%RMU-I-LOGSUMMARY, total 0 transactions ignored
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJGOODAREA, storage area DISK3:[STO_AREA]DEPARTMENTS.RDA;1
is now consistent
%RMU-I-AIJGOODAREA, storage area DISK3:[STO_AREA]SALARY_HISTORY.RDA;1
is now consistent
%RMU-I-AIJFNLSEQ, to start another AIJ file recovery, the sequence
number needed will be 2
$ !
$ ! Database is restored and recovered and ready to use.
$ !
Example 4
The following example demonstrates how to recover all the known
inconsistent pages in a database. Assume the RMU Show Corrupt_
Pages command reveals that page 60 in the EMPIDS_LOW storage
area is inconsistent and that pages 11 and 123 in the EMPIDS_OVER
storage area are inconsistent. The RMU Recover command is issued
to recover online all pages logged as inconsistent in the corrupt
page table (CPT). After the recovery operation, the CPT will be
empty.
$ RMU/RECOVER/JUST_CORRUPT/ONLINE/LOG MF_PERSONNEL.AIJ
%RMU-I-AIJBADPAGE, inconsistent page 11 from storage area
DISK1:[TEST5]EMPIDS_OVER.RDA;1 needs AIJ sequence number 0
%RMU-I-AIJBADPAGE, inconsistent page 60 from storage area
DISK1:[TEST5]EMPIDS_LOW.RDA;1 needs AIJ sequence number 0
%RMU-I-AIJBADPAGE, inconsistent page 123 from storage area
DISK1:[TEST5]EMPIDS_OVER.RDA;1 needs AIJ sequence number 0
%RMU-I-LOGRECDB, recovering database file
DISK2:[TEST5]MF_PERSONNEL.RDB;1
%RMU-I-LOGOPNAIJ, opened journal file DISK3:[TEST5]MF_PERSONNEL.AIJ;1
%RMU-I-AIJONEDONE, AIJ file sequence 0 roll-forward operations
completed
%RMU-I-LOGRECOVR, 1 transaction committed
%RMU-I-LOGRECOVR, 0 transactions rolled back
%RMU-I-LOGRECOVR, 0 transactions ignored
%RMU-I-AIJNOACTIVE, there are no active transactions
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJALLDONE, after-image journal roll-forward operations
completed
%RMU-I-LOGSUMMARY, total 1 transaction committed
%RMU-I-LOGSUMMARY, total 0 transactions rolled back
%RMU-I-LOGSUMMARY, total 0 transactions ignored
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJGOODPAGE, page 11 from storage area
DISK1:[TEST5]EMPIDS_OVER.RDA;1 is now consistent
%RMU-I-AIJGOODPAGE, page 60 from storage area
DISK1:[TEST5]EMPIDS_LOW.RDA;1 is now consistent
%RMU-I-AIJGOODPAGE, page 123 from storage area
DISK1:[TEST5]EMPIDS_OVER.RDA;1 is now consistent
%RMU-I-AIJFNLSEQ, to start another AIJ file recovery, the sequence
number needed will be 0
Example 5
In the following example, note that the backed up AIJ files are
specified in the order B1, B3, B2, B4 representing sequence
numbers 1, 3, 2, 4. The /ORDER_AIJ_FILES qualifier sorts the journals to
be applied into ascending sequence order and then is able to
remove B1 from processing because the database recovery starts
with AIJ file sequence 2 as shown in the RMU/RESTORE output.
$ RMU/RESTORE/NEW/NOCDD/NOAFTER FOO
%RMU-I-RESTXT_00, Restored root file DUA0:[DB]FOO.RDB;16
.
.
.
%RMU-I-AIJRECFUL, Recovery of the entire database starts with
AIJ file sequence 2
%RMU-I-COMPLETED, RESTORE operation completed at 24-MAY-2007 12:23:32.99
$!
$ RMU/RECOVER/LOG/ORDER_AIJ_FILES B1,B3,B2,B4
.
.
.
%RMU-I-LOGOPNAIJ, opened journal file DUA0:[DB]B2.AIJ;24
%RMU-I-LOGRECSTAT, transaction with TSN 0:256 ignored
%RMU-I-LOGRECSTAT, transaction with TSN 0:257 ignored
%RMU-I-RESTART, restarted recovery after ignoring 2 committed transactions
%RMU-I-AIJONEDONE, AIJ file sequence 2 roll-forward operations completed
%RMU-I-LOGRECOVR, 0 transactions committed
%RMU-I-LOGRECOVR, 0 transactions rolled back
%RMU-I-LOGRECOVR, 2 transactions ignored
%RMU-I-AIJNOACTIVE, there are no active transactions
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJNXTSEQ, to continue this AIJ file recovery, the
sequence number needed will be 3
.
.
.
Example 6
The following example shows the "/CONFIRM=ABORT" syntax, which
prevents RMU/RECOVER from continuing to roll forward if a
sequence gap is detected. This is the default behavior if
/NOCONFIRM is specified or if RMU runs in a batch job. Note that
the exit status of RMU will be "%RMU-E-AIJRECESQ" if the recovery
is aborted because of a sequence gap. It is always good policy to
check the exit status of RMU, especially when executing RMU in
batch jobs.
RMU/RECOVER/CONFIRM=ABORT/LOG/ROOT=user$test:foo faijbck1,faijbck2,faijbck4
%RMU-I-LOGRECDB, recovering database file DEVICE:[DIRECTORY]FOO.RDB;1
%RMU-I-LOGOPNAIJ, opened journal file DEVICE:[DIRECTORY]FAIJBCK4.AIJ;1
at 25-FEB-2009 17:27:42.29
%RMU-W-AIJSEQAFT, incorrect AIJ file sequence 8 when 7 was expected
%RMU-E-AIJRECESQ, AIJ roll-forward operations terminated due to sequence error
%RMU-I-AIJALLDONE, after-image journal roll-forward operations completed
%RMU-I-LOGSUMMARY, total 2 transactions committed
%RMU-I-LOGSUMMARY, total 0 transactions rolled back
%RMU-I-LOGSUMMARY, total 0 transactions ignored
%RMU-I-AIJFNLSEQ, to start another AIJ file recovery, the sequence number
needed will be 7
%RMU-I-AIJNOENABLED, after-image journaling has not yet been enabled
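To check the exit status as recommended above, a batch procedure
can save and test the status immediately after the RMU command.
The following DCL fragment is only a sketch; the symbol name and
the message text are illustrative, not part of the RMU output:
$ RMU/RECOVER/CONFIRM=ABORT/LOG/ROOT=USER$TEST:FOO FAIJBCK1,FAIJBCK2,FAIJBCK4
$ RMU_STATUS = $STATUS          ! capture the status before any other command
$ IF .NOT. RMU_STATUS
$ THEN
$   WRITE SYS$OUTPUT "Recovery aborted; check for an AIJ sequence gap"
$   EXIT 'RMU_STATUS'           ! propagate the failure to the batch job
$ ENDIF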
Example 7
The following example shows the "/CONFIRM=CONTINUE" syntax, which
allows RMU/RECOVER to continue rolling forward if a sequence gap
is detected.
RMU/RECOVER/CONFIRM=CONTINUE/LOG/ROOT=user$test:foo faijbck1,faijbck2,faijbck4
%RMU-I-LOGRECDB, recovering database file DEVICE:[DIRECTORY]FOO.RDB;1
%RMU-I-LOGOPNAIJ, opened journal file DEVICE:[DIRECTORY]FAIJBCK4.AIJ;1
at 25-FEB-2009 17:26:04.00
%RMU-W-AIJSEQAFT, incorrect AIJ file sequence 8 when 7 was expected
%RMU-I-AIJONEDONE, AIJ file sequence 8 roll-forward operations completed
%RMU-I-LOGRECOVR, 1 transaction committed
%RMU-I-LOGRECOVR, 0 transactions rolled back
%RMU-I-LOGRECOVR, 0 transactions ignored
%RMU-I-AIJNOACTIVE, there are no active transactions
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJNXTSEQ, to continue this AIJ file recovery, the sequence number
needed will be 9
%RMU-I-AIJALLDONE, after-image journal roll-forward operations completed
%RMU-I-LOGSUMMARY, total 3 transactions committed
%RMU-I-LOGSUMMARY, total 0 transactions rolled back
%RMU-I-LOGSUMMARY, total 0 transactions ignored
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJFNLSEQ, to start another AIJ file recovery, the sequence number
needed will be 9
%RMU-I-AIJNOENABLED, after-image journaling has not yet been enabled
25.7 – Resolve
Recovers a corrupted database and resolves an unresolved
distributed transaction by completing the transaction.
See the Oracle Rdb7 Guide to Distributed Transactions for
complete information on unresolved transactions and for
information on the transactions managers (DECdtm and Encina)
supported by Oracle Rdb.
25.7.1 – Description
Use the RMU Recover Resolve command to commit or abort any
unresolved distributed transactions in the after-image journal
(.aij) file. You must complete the unresolved transactions to the
same state (COMMIT or ABORT) in every .aij file affected by the
unresolved transactions.
The RMU Recover Resolve command performs the following tasks:
o Displays identification information for an unresolved
transaction.
o Prompts you for the state to which you want the unresolved
transaction resolved (if you did not specify the State
qualifier on the command line). If you are using DECdtm to
manage the transaction, you can specify COMMIT, ABORT, or
IGNORE. If you are using an XA transaction, you can specify
COMMIT or ABORT.
o Prompts for confirmation of the state you specified.
o Commits, aborts, or ignores the unresolved transaction.
o Continues until it displays information for all unresolved
transactions.
25.7.2 – Format
RMU/Recover/Resolve aij-file-name

Command Qualifiers                     Defaults

/Active_IO=max-reads                   See the RMU/Recover command
/Aij_Buffers=integer                   See the RMU/Recover command
/Areas[=storage-area[,...]]            See the RMU/Recover command
/[No]Confirm                           See description
/Format={Old_File|New_Tape}            See the RMU/Recover command
/Label=(label-name-list)               See the RMU/Recover command
/[No]Log                               See the RMU/Recover command
/[No]Media_Loader                      See the RMU/Recover command
/[No]Online                            See the RMU/Recover command
/[No]Rewind                            See the RMU/Recover command
/Root=root-file-name                   See the RMU/Recover command
/State=option                          See description
/[No]Trace                             See the RMU/Recover command
/Until=date-time                       See the RMU/Recover command
25.7.3 – Parameters
25.7.3.1 – aij-file-name
The name of the file containing the after-image journal. This
cannot be an optimized after-image journal (.oaij) file. The
default file extension is .aij.
25.7.4 – Command Qualifiers
25.7.4.1 – Confirm
Confirm
Noconfirm
Prompts you for confirmation of each transaction state you alter.
The default for interactive processing is Confirm.
Specify the Noconfirm qualifier to suppress this prompt. The
default for batch processing is Noconfirm.
25.7.4.2 – State
State=option
Specifies the state to which all unresolved transactions will be
resolved.
If you are using DECdtm to manage your distributed transaction,
options for the State qualifier are:
o Commit - Commits all unresolved transactions.
o Abort - Aborts all unresolved transactions.
o Ignore - Does not resolve any transactions.
If you are using Encina to manage your distributed transaction,
options for the State qualifier are:
o Commit - Commits all unresolved transactions.
o Abort - Aborts all unresolved transactions.
If you do not specify the State qualifier, Oracle RMU prompts
you to enter an action for each unresolved transaction in
that .aij file. If DECdtm is managing your transaction and you
enter Ignore, Oracle RMU does not resolve the transaction;
instead, it attempts to contact the coordinator to resolve it.
The transaction remains unresolved until the coordinator becomes
available again and instructs the transaction to complete or
until you manually complete the transaction by using the RMU
Recover Resolve command again. For more information about the
activities of the coordinator, see the Oracle Rdb7 Guide to
Distributed Transactions.
Because a coordinator is not involved with transactions managed
by Encina, the Ignore option is not valid for XA transactions.
25.7.5 – Usage Notes
o To use the RMU Recover Resolve command for a database, you
must have the RMU$RESTORE privilege in the root file for the
database or the OpenVMS SYSPRV or BYPASS privilege.
o If you have restored the database by using the New qualifier
and have not deleted the corrupted database, use the Root
qualifier to override the original file specification for the
database root file.
o After it rolls forward from the .aij file specified on the
command line, Oracle RMU prompts you for the name of the next
.aij file. If there are more .aij files to roll forward, enter
the file name, including the version number for that .aij
file. If there are no other .aij files, press the Return key.
For more information about rolling forward and determining
transaction sequence numbers for .aij files, see the Oracle
Rdb Guide to Database Maintenance.
o Note the following points regarding using Oracle Rdb with the
Encina transaction manager:
- Only databases that were created under Oracle Rdb V7.0 or
higher, or converted to V7.0 or higher, can participate in
XA transactions.
- To start a distributed transaction, you must have the
DISTRIBTRAN database privilege for all databases involved
in the transaction.
- Oracle Rdb supports only explicit distributed transactions
with Encina. This means that your application must
explicitly call the Encina routines to start and end the
transactions.
25.7.6 – Examples
Example 1
The following command recovers the mf_personnel database and
rolls the database forward from the old .aij file to resolve the
unresolved distributed transactions. Because the State qualifier
is not specified, Oracle RMU will prompt the user for a state for
each unresolved transaction.
$ RMU RECOVER/RESOLVE MF_PERSONNEL.AIJ;1
Example 2
This example specifies that all unresolved transactions in the
mf_personnel.aij file be committed.
$ RMU/RECOVER/RESOLVE/STATE=COMMIT MF_PERSONNEL.AIJ
For more examples of the RMU Recover Resolve command, see the
Oracle Rdb7 Guide to Distributed Transactions.
26 – Repair
Corrects several types of database problems. You can use the RMU
Repair command to:
o Repair all types of space area management (SPAM) page
corruptions by reconstructing the SPAM pages in one or more
storage areas.
o Repair all area bit map (ABM) page format errors.
o Repair all page tail errors to the satisfaction of the RMU
Verify operation by making sure that every database page is
in a logical area and contains the appropriate information for
that logical area.
o Correct some performance problems that might otherwise have to
be corrected by exporting and importing the database.
o Set damaged or missing segmented string (LIST OF BYTE VARYING)
areas that are stored in write-once areas to null.
The repair operation cannot correct corrupted user data or
corrupted indexes. To correct these problems, use other commands
(such as RMU Restore, RMU Recover, SQL IMPORT, or RMU Load) and
delete the affected structures.
NOTE
Use of the Abm or the Initialize=Tsns qualifier disables
after-image journaling. After issuing an RMU Repair command
with these qualifiers, back up the database and reenable
journaling manually.
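For example, after a repair that uses the Abm qualifier, the
follow-up might look like the following sketch. The backup file
name is hypothetical; see the RMU Backup and RMU Set After_
Journal commands for the exact syntax that applies to your
journal configuration:
$ RMU/REPAIR/ABM MF_PERSONNEL
$ RMU/BACKUP MF_PERSONNEL MF_PERSONNEL_FULL.RBF
$ RMU/SET AFTER_JOURNAL/ENABLE MF_PERSONNEL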
26.1 – Description
Because RMU Repair cannot correct every type of corruption, or
guarantee improved performance, Oracle Corporation recommends
that you do not use the RMU Repair command unless you have a
backup copy or exported copy of your database. You can return
to this backup copy of the database if your repair efforts are
ineffective.
The RMU Repair command operates off line and not in the context
of a transaction, so no records are written to the database's
.aij file by RMU Repair, and the repaired database cannot be
rolled forward with the RMU Recover command. Oracle Corporation
recommends that you make a backup copy of the database after
using the RMU Repair command; the repair operation issues a
message to this effect. Oracle RMU also issues a warning when
you use this command on a database with after-image journaling
enabled.
26.2 – Format
RMU/Repair root-file-spec

Command Qualifiers                     Defaults

/[No]Abm                               /Noabm
/[No]All_Segments                      All segments
/Areas[={storage-area-list or *}]      See description
/Checksum                              See description
/[No]Initialize=initialize-options     /Noinitialize
/[No]Spams                             See description
/Tables[=table-list]                   All nonsystem tables
/Worm_Segments                         None
26.3 – Parameters
26.3.1 – root-file-spec
A file specification for the database root file for which you
want to repair corruption or improve performance.
26.4 – Command Qualifiers
26.4.1 – Abm
Abm
Noabm
Causes the reconstruction of the logical area bit map (ABM)
pages for areas specified with the Areas qualifier. After-image
journaling is disabled when you specify the Abm qualifier. You
must explicitly enable after-image journaling after the RMU
Repair command completes if you want journaling enabled.
The NoAbm qualifier specifies that ABM pages are not to be
reconstructed; this is the default.
26.4.2 – All Segments
All_Segments
Noall_Segments
The All_Segments qualifier specifies that RMU Repair should
retrieve all segments of a segmented string; the Noall_Segments
qualifier specifies that RMU Repair should only retrieve the
first segment of a segmented string.
Specify the All_Segments qualifier if you know that the list
storage map for any segmented strings stored on the specified
areas might have contained multiple areas. For example, if the
storage map was created using the following SQL command, Oracle
Rdb would store all the segmented strings on AREA1 until AREA1
became full. If AREA1 became full, Oracle Rdb would continue to
write the rest of the segments into AREA2. Suppose AREA2 becomes
corrupt. In this case, retrieving the first segment from AREA1
is not sufficient; all segments must be retrieved to determine if
part of the segmented string is missing.
CREATE STORAGE MAP FOR LIST STORE IN (AREA1, AREA2) FOR (TABLE1)
IN RDB$SYSTEM;
Specifying both the Areas qualifier and the All_Segments
qualifier is redundant, because the All_Segments qualifier
causes RMU Repair to check all storage areas regardless
of where the segmented string was stored initially.
26.4.3 – Areas
Areas[={storage-area-list or *}]
Specifies the storage areas in the database you want to repair.
You can specify storage areas by name or by the area's ID number.
By default, all the storage areas in the database are repaired.
If you specify more than one storage area, separate the storage
area names or ID numbers in the storage-area-list with a comma,
and enclose the list within parentheses.
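For example, the following command (using storage area names
from the mf_personnel examples elsewhere in this help) would
repair only two areas:
$ RMU/REPAIR/AREAS=(EMPIDS_LOW,EMPIDS_OVER) MF_PERSONNEL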
26.4.4 – Checksum
Checksum
Reads every page in the database storage areas to verify that the
checksum on each page is correct. If the checksum on the page is
incorrect, it is replaced with the correct checksum.
Use the Areas qualifier to specify which storage areas RMU
Repair should check. If you do not specify the Areas qualifier,
all pages in all storage areas are checked and updated (if
incorrect).
This qualifier can be used whether or not users are attached to
the database.
This qualifier is not valid for use with any other qualifiers
except the Areas qualifier.
26.4.5 – Initialize
Initialize=initialize-options
Noinitialize
Allows you to specify initialization options. If more than one
option is specified, separate the options with a comma, and
enclose the list of options within parentheses.
The following options are available for the Initialize qualifier:
o Free_Pages
The Initialize=Free_Pages qualifier initializes database pages
that do not contain data in the selected storage areas (that
have a uniform page format). You can use the Initialize=Free_
Pages qualifier to correct BADPTLARE errors found by the
RMU Verify command and also to free pages from a table
that has many deleted rows. If you specify the default, the
Noinitialize qualifier, no database pages are initialized.
Frequently, you will receive one or more RMU-W-ABMBITTERR
error messages after you issue the RMU Repair command with
the Initialize=Free_Pages qualifier. This occurs because the
initialization of pages can create new ABM errors. Correct
these errors by issuing the RMU Repair command with the
Abm qualifier. (However, note that you cannot specify the
Initialize=Free_Pages qualifier and the Abm qualifier on the
same command line.) If you ignore the RMU-W-ABMBITTERR error
messages, extra I/O operations will be performed (one for each
RMU-W-ABMBITTERR error you received) when a database query
causes a sequential scan of an entire table.
If a table residing in a storage area that has a uniform
page format is frequently accessed sequentially, the cost
of the sequential access is determined by the number of
allocated pages. If the maximum size allocated for the table
is much larger than the table's average size, the cost of the
sequential access can be excessive. By using the RMU Repair
command with the Initialize=Free_Pages qualifier, you can
purge the allocated but unused database pages from the table.
In some cases, there may be a decrease in performance when
you insert new data into the table after using this option.
As with all Repair options, you should test the performance
of the database after executing the command and be prepared to
restore the backup made before executing the Repair command if
you find that the command results in decreased performance.
The initialization of free pages requires access to the Oracle
Rdb system tables. You should not initialize free pages until
you know that the RDB$SYSTEM storage area (where the system
tables are stored) is not corrupted.
o Larea_Parameters=options-file
This option specifies an options file (default file extension
.opt) that contains a list of logical areas and parameter
values that RMU Repair uses to update the area inventory page
(AIP) before it builds the space area management (SPAM) pages.
The Larea_Parameters options file contains lines in the
following format:
name [/Areas=name][/Delete][/[No]Thresholds=(n[,n[,n]])][/Length=n][/Type=option]
A comment can be appended to the line (an exclamation point
(!) is the comment character), and a line can be continued
(as in DCL) by ending it with a hyphen (-).
The logical area can be specified by name or identification
number (ID). The logical area named must be present in the
AIP, or an error is generated. The Larea_Parameters options
are further described as follows:
- Areas=name
Restricts this line to the logical area that resides
in the specified storage area. The storage area can be
specified by name or ID. By default, all logical areas with
a matching name are altered independently of the storage
area in which they reside.
You can specify storage area ID numbers with the Areas
qualifier.
- Delete
Specifies that the logical area should be marked as
deleted. You will corrupt your database if you delete a
logical area that is referenced by Oracle Rdb metadata.
- Length=n
The Length option specifies the record length to
store in the logical area inventory entry. RMU Repair uses
this value to calculate SPAM thresholds.
When columns are deleted from or added to a table, the
record length stored in the logical area inventory entry is
not updated. Therefore the search for space needed to store
a new record may be inefficient, and the SPAM thresholds
will not be set properly. You can solve this problem by
first correcting the length in the logical area inventory
entry, then generating corrected SPAM pages using the RMU
Repair command. See Example 2 in the Examples help entry
under this command.
- Thresholds=(n [,n [,n]])
NoThresholds
This option specifies the logical area SPAM thresholds.
This is useful only for logical areas that reside in a
storage area with a uniform page format. If thresholds are
set, they are ignored in a storage area with a mixed page
format.
See the Oracle Rdb7 Guide to Database Performance and
Tuning for information on setting SPAM thresholds.
The Nothresholds option specifies that logical area
thresholds be disabled.
- Type=keyword
By specifying a Type, you can update the on-disk logical
area type in the AIP. For databases created prior to Oracle
Rdb release 7.0.1, the logical area type information in the
AIP is unknown. However, the RMU Show Statistics utility
depends on this information to display information on a
per-logical-area basis. A logical area is a table, B-tree
index, hash index, or any partition of one of these.
In order to update the on-disk logical area type in the
AIP, specify the type as follows:
Type=Table
Specifies that the logical area is a data table, such as
is created with the SQL CREATE TABLE statement.
Type=Btree
Specifies that the logical area is a B-tree index, such
as is created with the SQL CREATE INDEX TYPE IS SORTED
statement.
Type=Hash
Specifies that the logical area is a hash index, such
as is created with the SQL CREATE INDEX TYPE IS HASHED
statement.
Type=System
Specifies that the logical area is a system record
that is used to identify hash buckets. Users cannot
explicitly create this type of logical area. This type
should not be used for the RDB$SYSTEM logical areas. It
does not identify system relations.
Type=Blob
Specifies that the logical area is a BLOB (LIST OF BYTE
VARYING) repository.
There is no error checking of the type specified for
a logical area. The specified type does not affect the
collection of statistics, nor does it affect the readying
of the affected logical areas. However, an incorrect type
will cause incorrect statistics to be reported by the RMU
Show Statistics utility.
o Only_Larea_Type
The Initialize=Only_Larea_Type option specifies that only the
logical area type field is to be updated in the area inventory
page (AIP).
o Snapshots
The Snapshots option allows you to create and initialize new
snapshot files. In addition, it removes corrupt snapshot area
pages from the Corrupt Page Table (CPT). This is much faster
than using the RMU Restore command to do the same thing,
especially when just one snapshot file is lost and needs to
be created again. The default is not to create new files.
When you specify the Confirm option with the
Initialize=Snapshots option (Initialize=Snapshots=Confirm),
you can use the RMU Repair command not only to initialize, but
also to optionally rename, move, or change the allocation of
snapshot files.
These operations might be necessary when a disk with a
snapshot file has a hardware problem or is removed in a
hardware upgrade, or when a snapshot file has grown too large
and you want to truncate it.
The Confirm option causes RMU Repair to prompt you for a
name and allocation for one or more snapshot files. If you
use the Areas qualifier, you can select the snapshot files
in the database that you want to modify. If you omit the
Areas qualifier, all the snapshot files for the database are
initialized and RMU Repair prompts you interactively for an
alternative file name and allocation for each snapshot file.
By specifying a new file name for a snapshot file, you can
change the location of the snapshot file. By specifying a new
allocation for a snapshot file, you can truncate a snapshot
file or make it larger.
o Tsns
The Initialize=Tsns option resets the database transaction
state. The default is to not alter the transaction state.
After-image journaling is disabled when you specify the
Initialize=Tsns option. You must explicitly enable after-image
journaling after the RMU Repair command completes if you want
journaling enabled.
This operation is useful when the database transaction
sequence number (TSN) approaches the maximum allowable value
and the TSN values must be initialized to zero. The TSN value
is contained in a quadword with the following decimal format:
high longword : low longword
The high longword can hold a maximum user value of 32,768
(2^15) and the low longword can hold a maximum user value of
4,294,967,295 (2^32). A portion of the high longword is used by
Oracle Rdb for overhead.
Initialization of the TSN values requires reading and writing
to each page of the database, so the Areas qualifier is not
meaningful. It also requires initialization of the snapshot
areas even if the Snapshots option has not been specified.
The Tsns initialization option carries the following
restrictions:
- It cannot be performed if the Replication Option for Rdb
is being used unless all transfers have been completed. RMU
Repair will ask for confirmation if an RDB$TRANSFERS table
is defined.
- Old journal files will not be applicable to this repaired
database. After TSNs have been initialized, you must
reenable after-image journaling if you want journaling
enabled.
After the RMU Repair command completes, a full and complete
backup operation should be performed on the database as
soon as is practical. This operation ensures that new
journaled changes can be applied to the restored database
in the event that a restore operation should become
necessary.
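The Initialize=Larea_Parameters options described above can be
combined in one options file. The following sketch is
illustrative only; the logical area names, length, and threshold
values are assumptions, not taken from a real database:
$ TYPE FIX_LAREAS.OPT
EMPLOYEES /LENGTH=120 /THRESHOLDS=(70,85,95)  ! new length after dropping a column
EMPLOYEES_HASH /AREAS=EMPIDS_LOW -            ! continued, as in DCL
    /TYPE=HASH
$ RMU/REPAIR/SPAMS/INITIALIZE=LAREA_PARAMETERS=FIX_LAREAS.OPT MF_PERSONNEL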
26.4.6 – Spams
Spams
Nospams
Reconstructs the SPAM pages for the areas you specify with the
Areas qualifier. If you specify the Nospams qualifier, the SPAM
pages are not reconstructed. The default is the Spams qualifier
if you do not specify any of the following qualifiers for the RMU
Repair command:
o ABM
o Initialize=Free_Pages
o Initialize=Snapshots
o Initialize=Snapshots=Confirm
If you use any of these qualifiers, the Nospams qualifier is the
default.
When columns are deleted from or added to a table, the record
length stored in the logical area inventory entry is not updated.
Therefore the search for space needed to store a new record may
be inefficient, and the SPAM thresholds will not be set properly.
You can solve this problem by first correcting the length in
the logical area inventory entry, then generating corrected SPAM
pages using the RMU Repair command. See Example 2 in the Examples
help entry under this command.
26.4.7 – Tables
Tables[=table-list]
Specifies the list of tables that you want RMU Repair to check
for complete segmented strings.
If no tables are listed, then all nonsystem tables are examined.
(System tables do not store their segmented strings in write-once
areas.) Note that RMU Repair has no knowledge of which storage
areas contain segmented strings from a particular table; thus,
the default is to search all tables.
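Because the default is to search all tables, restricting the
check can shorten the operation. The following command is a
sketch only; the combination with the Worm_Segments qualifier
and the choice of the RESUMES table are illustrative:
$ RMU/REPAIR/WORM_SEGMENTS/TABLES=RESUMES MF_PERSONNEL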
26.5 – Usage Notes
o To use the RMU Repair command for a database, you must have
the RMU$ALTER privilege in the root file access control
list (ACL) for the database or the OpenVMS SYSPRV or BYPASS
privilege.
o Enable detected asynchronous prefetch to achieve the best
performance of this command. Beginning with Oracle Rdb V7.0,
by default, detected asynchronous prefetch is enabled. You
can determine the setting for your database by issuing the RMU
Dump command with the Header qualifier.
If detected asynchronous prefetch is disabled, and you do not
want to enable it for the database, you can enable it for your
RMU Repair operations by defining the following logicals at
the process level:
$ DEFINE RDM$BIND_DAPF_ENABLED 1
$ DEFINE RDM$BIND_DAPF_DEPTH_BUF_CNT P1
P1 is a value between 10 and 20 percent of the user buffer
count.
o The Areas qualifier can be used with indirect file references.
See the Indirect-Command-Files help entry.
o Oracle Corporation recommends that you use the RMU Backup
command to perform a full backup operation on your database
before using the RMU Repair command on the database.
o Use the SQL SHOW STORAGE AREA statement to display the new
location of a snapshot (.snp) file and the RMU Dump command
with the Header qualifier to display the new allocation.
o Be careful when you specify names for new .snp files with the
RMU Repair command. If you specify the name of a file that
already exists and was created for the database, it will be
initialized as you requested.
If you mistakenly initialize a live database file in this way,
do not use the database until the error is corrected. Use the
RMU Restore command to restore the database to the condition
it was in when you backed it up just prior to issuing the RMU
Repair command. If you did not back up the database before
issuing the RMU Repair command, you must restore the database
from your most recent backup file and then recover from .aij
files (if the database had after-image journaling enabled).
If you specify the wrong .snp file (for example, if you
specify jobs.snp for all the .snp file name requests in
Example 3 in the Examples help entry under this command),
you can correct this by issuing the RMU Repair command again
with the correct .snp file names.
After the RMU Repair command completes, delete old .snp
files and use the RMU Backup command to perform a full backup
operation on your database.
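Combining the detected asynchronous prefetch logicals from the
earlier usage note, a process-level setup might look like the
following sketch. The depth of 30 assumes a hypothetical user
buffer count of 200, which keeps it within the suggested 10 to
20 percent range:
$ DEFINE RDM$BIND_DAPF_ENABLED 1
$ DEFINE RDM$BIND_DAPF_DEPTH_BUF_CNT 30
$ RMU/REPAIR MF_PERSONNEL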
26.6 – Examples
Example 1
The following command repairs SPAM page corruption for all the
storage areas in the mf_personnel database. No area bit map
(ABM) pages are reconstructed because the Abm qualifier is not
specified.
$ RMU/REPAIR MF_PERSONNEL
Example 2
When columns are deleted from or added to a table, the record
length stored in the logical area inventory entry is not updated.
Therefore the search for space needed to store a new record may
be inefficient, and the SPAM thresholds will not be set properly.
You can solve this problem by first correcting the length in
the logical area inventory entry, then generating corrected SPAM
pages using the RMU Repair command.
For example, suppose the Departments table was stored in the
departments.rda uniform page format storage area and the Budget_
Projected column (integer data type = 4 bytes) was deleted. As
a result of this deletion, the row length changed from 47 bytes
to 43 bytes. You can specify a smaller record length (43 bytes)
in the fix_departments.opt options file to more efficiently use
space in the storage area.
$ CREATE FIX_DEPARTMENTS.OPT
DEPARTMENTS /LENGTH=43
Then, the following RMU Repair command specifies the record
length to store in the logical area inventory entry for this
logical area and rebuilds the SPAM pages:
$ RMU/REPAIR/SPAMS/INITIALIZE=LAREA_PARAMETERS=FIX_DEPARTMENTS.OPT -
_$ MF_PERSONNEL
Example 3
The following RMU Repair command initializes and renames
departments.snp; initializes and moves salary_history.snp; and
initializes, moves, and truncates jobs.snp:
$ RMU/REPAIR/NOSPAMS/INITIALIZE=SNAPSHOTS=CONFIRM -
_$ /AREAS=(DEPARTMENTS,JOBS,SALARY_HISTORY) MF_PERSONNEL
%RMU-I-FULBACREQ, A full backup of this database should be
performed after RMU Repair
Area DEPARTMENTS snapshot filename
[SQL1:[TEST]DEPARTMENTS.SNP;1]: NEW_DEPT
Area DEPARTMENTS snapshot file allocation [10]?
Area SALARY_HISTORY snapshot filename
[SQL1:[TEST]SALARY_HISTORY.SNP;1]: SQL2:[TEST]
Area SALARY_HISTORY snapshot file allocation [10]?
Area JOBS snapshot filename [SQL1:[TEST]JOBS.SNP;1]: SQL2:[TEST2]
Area JOBS snapshot file allocation [10]? 5
Example 4
The following RMU Repair command finds incorrect checksums in the
EMPIDS_LOW storage area and updates them to reflect the correct
checksum:
$ RMU/REPAIR MF_PERSONNEL.RDB/AREA=EMPIDS_LOW/CHECKSUM
Example 5
The following command updates an AIP type for a table:
$ RMU/REPAIR MF_PERSONNEL /INITIALIZE=LAREA_PARAMETERS=TABLE.OPT
Type the TABLE.OPT file to show the contents of the file.
$ TYPE TABLE.OPT
EMPLOYEES /TYPE=TABLE
Example 6
The following command updates an AIP type for a storage area:
$ RMU/REPAIR MF_PERSONNEL /INITIALIZE=LAREA_PARAMETERS=AREAS.OPT
Type the AREAS.OPT file to show the contents of the file.
$ TYPE AREAS.OPT
EMPLOYEES /AREA=EMPIDS_OVER /TYPE=TABLE
27 – Resolve
Resolves all unresolved distributed transactions for the
specified database. For more information on unresolved
transactions, see the Oracle Rdb7 Guide to Distributed
Transactions and the Oracle Rdb Release Notes.
27.1 – Description
Use the RMU Resolve command to commit or abort any unresolved
distributed transactions in the database. You must resolve the
unresolved transactions to the same state (Commit or Abort) in
every database affected by the unresolved transactions.
RMU Resolve performs the following tasks:
o Displays identification information for an unresolved
transaction.
o Prompts you for the state (Commit or Abort) to which you want
the unresolved transaction resolved (if you did not specify
the State qualifier on the command line).
o Prompts you for confirmation of the state you chose.
o Commits or aborts the unresolved transaction. If you commit or
abort the unresolved transaction, it is resolved and cannot be
resolved again.
o Continues to display and prompt for states for subsequent
unresolved transactions until it has displayed information for
all unresolved transactions.
Use the Parent_Node, Process, or Tsn qualifiers to limit the
number of unresolved transactions that Oracle RMU displays.
Use the Users and State=Blocked qualifiers with the RMU Dump
command to determine values for the Parent_Node, Process, and Tsn
qualifiers.
27.2 – Format
RMU/Resolve root-file-spec

Command Qualifiers                     Defaults

/[No]Confirm                           See description
/[No]Log                               Setting of DCL VERIFY flag
/Parent_Node=node-name                 See description
/Process=process-id                    See description
/State=options                         None
/Tsn=tsn                               See description
27.3 – Parameters
27.3.1 – root-file-spec
The database root file for which you want to resolve unresolved
transactions.
27.4 – Command Qualifiers
27.4.1 – Confirm
Confirm
Noconfirm
Prompts you for confirmation of each unresolved transaction. This
is the default for interactive processing.
Specify the Noconfirm qualifier to suppress this prompt. This is
the default for batch processing.
27.4.2 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request that summary
information about the resolve operation be reported to SYS$OUTPUT
and the Nolog qualifier to prevent this reporting. If you specify
neither, the default is the current setting of the DCL VERIFY
flag. (The DCL SET VERIFY command controls the setting of the DCL
VERIFY flag.)
27.4.3 – Parent Node
Parent_Node=node-name
Specifies the node name to limit the selection of transactions
to those originating from the specified node. If you omit
the Parent_Node qualifier, RMU Resolve includes transactions
originating from all nodes.
You cannot specify the Tsn or Process qualifier with the Parent_
Node qualifier.
The Parent_Node qualifier is not valid for XA transactions.
27.4.4 – Process
Process=process-id
Specifies the process identification to limit the selection of
transactions to those associated with the specified process. If
you omit this qualifier, RMU Resolve includes all processes with
transactions attached to the specified database.
You cannot specify the Parent_Node or Tsn qualifier with the
Process qualifier.
27.4.5 – State
State=options
Specifies the state to which all unresolved transactions are to
be resolved.
Options for the State qualifier are:
o Commit - Commits unresolved transactions.
o Abort - Aborts unresolved transactions.
If you do not specify the State qualifier, RMU Resolve prompts
you to enter an action, Commit or Abort, for each unresolved
transaction on that database.
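For example, the following command (a sketch against the sample
MF_PERSONNEL database) commits every unresolved transaction
without prompting for a state:
$ RMU/RESOLVE/STATE=COMMIT MF_PERSONNEL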
27.4.6 – Tsn
Tsn=tsn
Specifies the transaction sequence number (TSN) of the unresolved
transactions whose state you want to modify.
The TSN value is contained in a quadword with the following
decimal format:
high longword : low longword
The high longword can hold a maximum user value of 32,768
(2^15) and the low longword can hold a maximum user value of
4,294,967,295 (2^32 - 1). A portion of the high longword is used
by Oracle Rdb for overhead.
When you specify a TSN, you can omit the high longword and the
colon if the TSN fits in the low longword. For example, 0:444 and
444 are both valid TSN input values.
If you omit the Tsn qualifier, RMU Resolve includes all the
unresolved transactions. You cannot specify the Parent_Node or
the Process qualifier with the Tsn qualifier.
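For example, the following command (a sketch; the TSN value is
illustrative) limits the resolve operation to the single
transaction with TSN 0:444:
$ RMU/RESOLVE/TSN=0:444 MF_PERSONNEL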
27.5 – Usage Notes
o To use the RMU Resolve command for a database, you must
have the RMU$RESTORE privilege in the root file ACL for the
database or the OpenVMS SYSPRV or BYPASS privilege.
27.6 – Examples
Example 1
The following command specifies that the first displayed
unresolved transaction in the MF_PERSONNEL database be changed
to the Abort state and rolled back:
$ RMU/RESOLVE/LOG/STATE=ABORT MF_PERSONNEL
Example 2
The following command will display a list of all transactions
coordinated by node GREEN and might be useful if node GREEN
failed while running an application that used the DECdtm two-
phase commit protocol:
$ RMU/RESOLVE/PARENT_NODE=GREEN MF_PERSONNEL
Example 3
The following command displays a list of all transactions
initiated by process 41E0364A. The list might be useful for
resolving transactions initiated by this process if the process
were deleted.
$ RMU/RESOLVE/PROCESS=41E0364A MF_PERSONNEL
Example 4
The following command completes unresolved transactions for the
MF_PERSONNEL database, and confirms and logs the operation:
$ RMU/RESOLVE/LOG/CONFIRM MF_PERSONNEL
For more examples of the RMU Resolve command, see the Oracle Rdb7
Guide to Distributed Transactions.
28 – Restore
Restores a database to the condition it was in at the time a
full or incremental backup operation was performed with an
RMU Backup command. In addition, if after-image journal (.aij)
files have been retained, RMU Restore attempts to apply any pre-
existing .aij files to recover the database completely. See the
Description help entry under this command for details on the
conditions under which RMU Restore attempts an automatic .aij
file recovery as part of the restore operation.
When you use the RMU Restore command to restore the database
to a system with a more recent version of Oracle Rdb software,
an RMU Convert command with the Noconfirm and Commit qualifiers
is automatically executed as part of RMU Restore. Therefore, by
executing the RMU Restore command, you convert that database
to the current version. See the Oracle Rdb Installation and
Configuration Guide for the proper backup procedure prior
to installing a new release of Oracle Rdb and restoring (or
converting) databases.
When you use the RMU Restore command to restore a database that
was recently converted with the RMU Convert command and the
Nocommit qualifier, the behavior differs from that stated above.
The Commit qualifier is the default for an RMU Restore of an
uncommitted database (a database that contains both the current
and previous versions of the metadata because it was converted by
specifying RMU/CONVERT/NOCOMMIT or RMU/RESTORE/NOCOMMIT), but
only if the uncommitted database being restored is NOT of the
current Oracle Rdb version. RMU/RESTORE/COMMIT and
RMU/RESTORE/NOCOMMIT take effect only if RMU/RESTORE needs to
call RMU/CONVERT because the database being restored is of a
previous Oracle Rdb version.
If the Commit qualifier is specified or defaulted for the restore
of a database of the current version, it is ignored. In this
case, you must use RMU/CONVERT/COMMIT to commit the previous
uncommitted restore or conversion.
NOTE
When you restore a database, default or propagated OpenVMS
access control entries (ACEs) for the database root (.rdb)
file take precedence over any Oracle RMU database access you
might have.
Therefore, if default or propagated entries are in use,
you must use the RMU Show Privilege and RMU Set Privilege
commands after a restore operation completes to verify and
correct the Oracle RMU access. (You can tell if default or
propagated entries are in use because RMU Restore displays
the warning message "RMU-W-PREVACL, Restoring the root ACL
over a pre-existing ACL". This is a normal condition if the
RMU Restore command was invoked from the CDO utility.)
To use the RMU Show Privilege and RMU Set Privilege
commands, you must have the rights to edit the access
control list (ACL) using RMU$SECURITY access (which is VMS
BIT_15 access in the access control entry (ACE)) and also
(READ+WRITE+CONTROL) access. (Note that you can grant
yourself BIT_15 access by using the DCL SET ACL command
if you have (READ+WRITE+CONTROL) access.)
If you do not have the required access after a restore
operation to make the needed changes, someone with the
required access or OpenVMS BYPASS or SECURITY access must
examine and correct the ACL.
This behavior exists in Oracle RMU to prevent someone from
using Oracle RMU to override the existing OpenVMS security
policy.
28.1 – Description
RMU Restore rebuilds a database from a backup file, produced
earlier by an RMU Backup command, to the condition the database
was in when the backup operation was performed and attempts to
automatically recover the .aij files to provide a fully restored
and recovered database.
You can specify only one backup file parameter in an RMU Restore
command. If this parameter is a full backup file, you cannot use
the Incremental qualifier. However, you must use the Incremental
qualifier if the parameter names an incremental backup file.
RMU Restore attempts automatic .aij file recovery by default when
you issue a database restore command if you are using fixed-
size .aij files, if .aij files have been retained, and if a
database conversion has not been performed. (The .aij files are
not retained when you specify any of the following qualifiers:
Aij_Options, After_Journal, or Duplicate.) RMU Restore does not
attempt automatic .aij file recovery if you have backed up any
of your .aij files (using the RMU Backup After_Journal command)
because RMU Restore has no knowledge of those backup files.
In addition, success of the automatic .aij file recovery
operation requires that the following criteria be met:
o Fixed-size after-image journaling is in effect.
o The .aij files must be on disk (not on tape).
o The .aij files must not have been marked as inaccessible at
the time the database backup operation was performed.
o The .aij files must exist and have proper privileges for both
read and write operations.
o The .aij files must be able to be accessed exclusively;
failure indicates that an .aij file is in use by another
database user.
o The .aij files must have a nonzero length.
o The .aij files must have valid header information that
corresponds to the current Oracle Rdb product and version
number.
o The sequence number in the .aij file header must not conflict
with the restored definition in the database root information.
o The original .rdb file name must not exist.
NOTE
RMU Restore attempts automatic .aij file recovery when you
restore a database from a full, incremental, by-area, or
by-page backup file. However, in some cases, you will want
to disable this feature by using the Norecovery qualifier.
Specifically, you should specify the Norecovery qualifier if
either of the following are true:
o You are restoring the database from a previous version of
Oracle Rdb.
o You need to issue more than one RMU Restore command to
completely restore the database.
For example, if you intend to restore a database by
first issuing a full RMU Restore command followed by
the application of one or more RMU Restore commands with
the Incremental or Area qualifiers, you must specify the
Norecovery qualifier on all but the last RMU Restore
command in the series you intend to issue. Allowing
Oracle RMU to attempt automatic recovery with a full
restore operation when you intend to apply additional
incremental, by-area, or by-page backup files can result
in a corrupt database.
RMU Restore does not attempt automatic .aij file recovery if any
of the following conditions are true:
o The database has been converted since the time you created the
backup file that you are attempting to restore.
o The first .aij file is not available (perhaps because it has
been backed up).
o After-image journaling was disabled when the backup operation
was performed.
o After-image journaling was disabled when the database (or
portion of it) was lost.
o You specify the Aij_Options, After_Journal, or Duplicate
qualifier with the RMU Restore command.
If RMU Restore attempts automatic .aij file recovery but fails,
you can still recover your database by using the RMU Recover
command if the restore operation was successful.
NOTE
Using the DCL COPY command with a multifile database
(assuming the files are copied to a new location) will
result in an unsupported, unusable database. This happens
because the DCL COPY command cannot update the full file
specification pointers (stored in the database root file) to
the other database files (.rda, .snp, and optional .aij).
You can rename or move the files that comprise a multifile
Oracle Rdb database by using one of the following commands:
o The RMU Backup and RMU Restore commands
o The SQL EXPORT and IMPORT statements
o The RMU Move_Area command
o The RMU Copy_Database command
By default, RMU Restore integrates the metadata stored in the
database root (.rdb) file with the data dictionary copy of the
metadata (assuming the data dictionary is installed on your
system). However, you can prevent dictionary integration by
specifying the Nocdd_Integrate qualifier.
When you specify the Incremental or Area qualifiers, do not
specify the following additional qualifiers:
Directory
Nodes_Max
New_Version
Nonew_Version
Users_Max
The RMU Restore command ignores the Confirm qualifier if you
omit the Incremental qualifier. Also, you must specify the Root
qualifier when you restore an incremental backup file to a new
version of the database, renamed database, or a restored database
in a new location.
See the Usage Notes subentry for information on restoring a
database from tape.
28.2 – Format
RMU/Restore backup-file-spec [storage-area-name[,...]]

Command Qualifiers                        Defaults

/[No]Acl                                  /Acl
/Active_IO=max-reads                      /Active_IO=3
/[No]After_Journal=file-spec              See description
/[No]Aij_Options=journal-opts             See description
/Area                                     See description
/[No]Cdd_Integrate                        /Cdd_Integrate
/Close_Wait=n                             See description
/[No]Commit                               /Commit
/[No]Confirm                              See description
/Directory=directory-spec                 See description
/Disk_File[=(Reader_Threads=n)]           /Disk_File=(Reader_Threads=1)
/[No]Duplicate                            /Noduplicate
/Encrypt=({Value=|Name=}[,Algorithm=])    See description
/Global_Buffers=global-buffer-options     Current value
/Incremental                              Full restore
/Journal=file-name                        See description
/Just_Corrupt                             See description
/Label=(label-name-list)                  See description
/Librarian[=options]                      None
/Loader_Synchronization                   See description
/Local_Buffers=local-buffer-options       Current value
/[No]Log[=Brief|Full]                     Current DCL verify value
/Master                                   See description
/[No]Media_Loader                         See description
/[No]New_Version                          /Nonew_Version
/Nodes_Max=number-cluster-nodes           See description
/[No]Online                               /Noonline
/Open_Mode={Automatic|Manual}             Current value
/Options=file-spec                        None
/Page_Buffers=number-buffers              /Page_Buffers=3
/Path=cdd-path                            Existing value
/Prompt={Automatic|Operator|Client}       See description
/[No]Recovery[=Aij_Buffers=n]             See description
/[No]Rewind                               /Norewind
/Root=root-file-spec                      Existing value
/Transaction_Mode=(mode-list)             /Transaction_Mode=Current
/Users_Max=number-users                   Existing value
/Volumes=n                                /Volumes=1

File or Area Qualifiers                   Defaults

/Blocks_Per_Page=integer                  See description
/Extension={Disable|Enable}               Current value
/File=file-spec                           See description
/Just_Corrupt                             See description
/Read_Only                                Current value
/Read_Write                               Current value
/Snapshot=(Allocation=n,File=file-spec)   See description
/[No]Spams                                Current value
/Thresholds=(val1[,val2[,val3]])          Current value
28.3 – Parameters
28.3.1 – backup-file-spec
A file specification for the backup file produced by a previous
RMU Backup command. Note that you cannot perform a remote restore
operation on an .rbf file that has been backed up to tape and
then copied to disk.
The default file extension is .rbf.
Depending on whether you are performing a restore operation
from magnetic tape, disk, or multiple disks, the backup file
specification should be specified as follows:
o To restore from magnetic tape:
If you used multiple tape drives to create the backup file,
the backup-file-spec parameter must be provided with (and only
with) the first tape drive name. Additional tape drive names
must be separated from the first and subsequent tape drive
names with commas, as shown in the following example:
$ RMU/RESTORE /REWIND $111$MUA0:PERS_FULL_NOV30.RBF,$112$MUA1:
o To restore from single or multiple disk files:
If you used multiple disk files to create the backup file,
the backup-file-spec parameter must be provided with (and only
with) the first disk device name. Additional disk device names
must be separated from the first and subsequent disk device
names with commas. You must also be sure to include the Disk_
File qualifier. For example:
$ RMU/RESTORE/DISK_FILE DISK1:[DIR1]MFP.RBF,DISK2:[DIR2],DISK3:[DIR3]
As an alternative to listing the disk device names on the
command line (which, if you use several devices, can exceed
the line-limit length for a command line), you can specify an
options file in place of the backup-file-spec. For example:
$ RMU/RESTORE/DISK_FILE "@DEVICES.OPT"
The contents of devices.opt might appear as follows:
DISK1:[DIR1]MFP.RBF
DISK2:[DIR2]
DISK3:[DIR3]
The backup files referenced from such an options file are:
DISK1:[DIR1]MFP.RBF
DISK2:[DIR2]MFP01.RBF
DISK3:[DIR3]MFP02.RBF
28.3.2 – storage-area-name
storage-area-name[,...]
A storage area name from the database. This parameter is
optional. Use it in the following situations:
o When you want to change the values for thresholds or blocks
per page.
o When you want to change the names specified with the Snapshot
or the File qualifier for the restored database.
o If you want to restore only selected storage areas from your
backup file, you must use the Area qualifier and specify the
names of the storage areas you want to restore in either the
storage-area-name parameter in the RMU Restore command line,
or in the file specified with the Options qualifier.
To use this option, specify the storage area name rather than
the file specification for the storage area.
By using the RMU Backup and RMU Restore commands, you can back up
and restore selected storage areas of your database. This Oracle
RMU backup and restore by-area feature is designed to:
o Speed recovery when corruption occurs in some (not all) of the
storage areas of your database.
o Reduce the time needed to perform backup operations because
some data (data in read-only storage areas, for example) does
not need to be backed up with every backup operation performed
on the database.
If you plan to use the RMU Backup and RMU Restore commands to
back up and restore only selected storage areas for a database,
you must perform full and complete backup operations on the
database at regular intervals. A full and complete backup is a
full backup (not an incremental backup) operation on all the
storage areas in the database. If the database root (.rdb) file
is corrupted, you can only recover storage areas up to (but not
past) the date of the last full and complete backup operation.
Therefore, Oracle Corporation recommends that you perform full
and complete backup operations regularly.
If you plan to back up and restore only selected storage areas
for a database, Oracle Corporation strongly recommends that you
enable after-image journaling for the database (in addition to
performing the full and complete backup operation on the database
as described earlier). That is, if you are not backing up and
restoring all the storage areas in your database, you should have
after-image journaling enabled. This ensures that you can recover
all the storage areas in your database in the event of a system
failure. If you do not have after-image journaling enabled and
one or more of the areas restored by RMU Restore are not current
with the storage areas not restored, Oracle Rdb will not allow
any transactions to use the storage areas that are not current
in the restored database. In this situation, you can return to a
working database by restoring the database, using the backup file
from the last full and complete backup operation on the database
storage areas. However, any changes made to the database since
the last full and complete backup operation was performed are not
recoverable.
If you have after-image journaling enabled, use the RMU Recover
command to apply transactions from the .aij file to storage areas
that are not current after the RMU Restore command completes.
When the RMU Recover command completes, your database will be
consistent and usable.
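For example, the following command sequence (a sketch; the file
names and the use of the Root qualifier on the RMU Recover
command are illustrative) restores one storage area without
automatic recovery and then applies the journal:
$ RMU/RESTORE/NORECOVERY/AREA PERS_FULL.RBF EMPIDS_LOW
$ RMU/RECOVER/ROOT=MF_PERSONNEL MFP.AIJ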
28.4 – Command Qualifiers
28.4.1 – Acl
Acl
Noacl
Allows you to specify whether to restore the root file access
control list (ACL) that was backed up.
If you specify the Acl qualifier, the root file ACL that was
backed up is restored with the database. If the root file ACL
was not backed up and you specify the Acl qualifier with the RMU
Restore command, then RMU Restore restores the database without a
root file ACL.
If you specify the Noacl qualifier, the root file ACL is not
restored with the database.
The default is the Acl qualifier.
28.4.2 – Active IO
Active_IO=max-reads
Specifies the maximum number of read operations from the backup
file that RMU Restore attempts simultaneously. The value of the
Active_IO qualifier can range from 1 to 5. The default value is
3. Values larger than 3 might improve performance with multiple
tape drives.
28.4.3 – After Journal
After_Journal=file-spec
Noafter_Journal
NOTE
This qualifier is maintained for compatibility with versions
of Oracle Rdb prior to Version 6.0. You might find it more
useful to specify the Aij_Options qualifier, unless you are
interested in creating an extensible .aij file only. (An
extensible .aij file is one that is extended by a specified
amount when it reaches a certain threshold of fullness,
assuming there is sufficient space on the disk where it
resides.)
Specifies how RMU Restore is to handle after-image journaling and
.aij file creation, using the following rules:
o If you specify the After_Journal qualifier and provide a file
specification, the RMU process creates a new extensible .aij
file and enables journaling.
o If you specify the After_Journal qualifier but you do not
provide a file specification, RMU Restore creates a new
extensible .aij file with the same name as the journal that
was active at the time of the backup operation.
o If you specify the Noafter_Journal qualifier, RMU Restore
disables after-image journaling and does not create a new .aij
file. Note that if you specify the Noafter_Journal qualifier
there will be a gap in the sequence of the .aij files. For
example, suppose your database has .aij file sequence number 1
when you back it up. If you issue an RMU Restore command with
the Noafter_Journal qualifier, the .aij file sequence number
will be changed to 2. This means that you cannot (and do not
want to) apply the original .aij file to the restored database
(doing so would result in a sequence mismatch).
o If you do not specify an After_Journal, Noafter_Journal, Aij_
Options, or Noaij_Options qualifier, RMU Restore recovers the
journal state (enabled or disabled) and tries to reuse the
.aij file or files. (See the Description help entry under this
command for details on when automatic .aij file recovery is
not attempted.)
When you specify an .aij file name, you should specify a new
device and directory for the .aij file. If you do not specify a
device and directory, you receive a warning message. To protect
yourself against media failures, put the .aij file on a different
device from that of your database files.
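For example, the following sketch (device and file names are
illustrative) creates the new extensible .aij file on a device
separate from the database files:
$ RMU/RESTORE/AFTER_JOURNAL=AIJ_DISK:[JOURNALS]MFP.AIJ MFP_FULL.RBF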
If the original database is lost or corrupted but the journal
files are unaffected, you would typically restore the database
without the use of either the Aij_Options or the After_Journal
qualifier.
The After_Journal qualifier conflicts with the Area and
Incremental qualifiers; you cannot specify the After_Journal
qualifier and either of these two other qualifiers in the same
RMU Restore command line.
You cannot use the After_Journal qualifier to create fixed-size
.aij files; use the Aij_Options qualifier.
28.4.4 – Aij Options
Aij_Options=journal-opts
Noaij_Options
Specifies how RMU Restore is to handle after-image journaling and
.aij file creation, using the following rules:
o If you specify the Aij_Options qualifier and provide a
journal-opts file, RMU Restore creates the .aij file or files
you specify for the restored database. If only one .aij file
is created for the restored database, it will be an extensible
.aij file. If two or more .aij files are created for the
restored database, they will be fixed-size .aij files (as long
as at least two .aij files are always available). Depending on
what is specified in the options file, after-image journaling
can either be disabled or enabled.
o If you specify the Aij_Options qualifier, but do not provide
a journal-opts file, RMU Restore disables journaling and does
not create any new .aij files.
o If you specify the Noaij_Options qualifier, RMU Restore
reuses the original .aij file configuration and recovers the
journaling state (enabled or disabled) from the backed-up .aij
file.
o If you do not specify an After_Journal, Noafter_Journal, Aij_
Options, or Noaij_Options qualifier, RMU Restore recovers the
journaling state (enabled or disabled) and tries to reuse the
.aij file or files. (This is the same as specifying the Noaij_
Options qualifier.)
See the Description help entry under this command for details
on when automatic .aij file recovery is not attempted.
The Aij_Options qualifier conflicts with the Area and Incremental
qualifiers; you cannot specify the Aij_Options qualifier and
either of these two other qualifiers in the same RMU Restore
command line.
If the original database is lost or corrupted but the journal
files are unaffected, you would typically restore the database
without the use of either the Aij_Options or the After_Journal
qualifier.
See Show After_Journal for information on the format of a
journal-opts-file.
28.4.5 – Area
Area
Specifies that only the storage areas listed in the storage-area-
name parameter on the command line or in the Options file are
to be restored. You can use this qualifier to simplify physical
restructuring of a large database.
By default, the Area qualifier is not specified. When the Area
qualifier is not specified, all the storage areas and the
database root (.rdb) file are restored. Therefore, if you want
to restore all the storage areas, omit the Area qualifier. If
you specify the Area qualifier, a valid database root must exist.
(First issue the RMU Restore Only Root command with a full backup
file to create a valid database if one does not exist.)
By using the RMU Backup and RMU Restore commands, you can back up
and restore selected storage areas of your database. This Oracle
RMU backup- and restore-by-area feature is designed to:
o Speed recovery when corruption occurs in some (not all) of the
storage areas of your database.
o Reduce the time needed to perform backup operations because
some data (data in read-only storage areas, for example) does
not need to be backed up with every backup operation performed
on the database.
NOTE
When you perform a by-area restore operation, an area may
be marked as inconsistent; that is, the area may not be at
the same transaction state as the database root when the
restore operation completes. This may happen, for example,
when automatic aij recovery is disabled with the Norecovery
qualifier, or if automatic recovery fails. You can check
to see if an area is consistent by using the RMU Show
Corrupt_Pages command. If you find that one or more areas
are inconsistent, use the RMU Recover command to apply the
.aij files. If the .aij files are not available, refer to
the section on Clearing an Inconsistent Flag in the Oracle
Rdb Guide to Database Maintenance for information on the
implications of setting a corrupt area to consistent. Then
refer to Set Corrupt_Pages for information on using the Set
Corrupt_Pages command to clear the inconsistent flag.
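The check-and-recover sequence described in this note might look
like the following sketch (the file names are illustrative; see
the RMU Show Corrupt_Pages and RMU Recover help entries for the
full syntax):
$ RMU/SHOW CORRUPT_PAGES MF_PERSONNEL
$ RMU/RECOVER/ROOT=MF_PERSONNEL MFP.AIJ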
If you attempt to restore a database area that is not in the
backup file, you receive an error message and, typically, the
database will be inconsistent or unusable until the affected area
is properly restored.
In the following example, the DEPARTMENTS storage area is
excluded from the backup operation; therefore, a warning message
is displayed when the attempt is made to restore DEPARTMENTS,
which is not in the backup file. Note that when this restore
operation is attempted on a usable database, it completes, but
the DEPARTMENTS storage area is now inconsistent.
$ RMU/BACKUP /EXCLUDE=DEPARTMENTS MF_PERSONNEL.RDB -
_$ PERS_BACKUP5JAN88.RBF
$ RMU/RESTORE /NEW_VERSION /AREA PERS_BACKUP5JAN88.RBF DEPARTMENTS
%RMU-W-AREAEXCL, The backup does not contain the storage
area - DEPARTMENTS
If you create a backup file by using the RMU Backup command and
the Exclude qualifier, it is your responsibility to ensure that
all areas of a database are restored and recovered when you
use the RMU Restore and RMU Recover commands to duplicate the
database.
The Area qualifier conflicts with the After_Journal and Aij_
Options qualifiers.
28.4.6 – Cdd Integrate
Cdd_Integrate
Nocdd_Integrate
Integrates the metadata from the database root (.rdb) file into
the data dictionary (assuming the data dictionary is installed on
your system).
If you specify the Nocdd_Integrate qualifier, no integration
occurs during the restore operation.
You might want to delay integration of the database metadata with
the data dictionary until after the restore operation finishes
successfully.
You can use the Nocdd_Integrate qualifier even if the DICTIONARY
IS REQUIRED clause was used when the database was defined.
The Cdd_Integrate qualifier integrates definitions in one
direction only: from the database file to the dictionary. The
Cdd_Integrate qualifier does not integrate definitions from the
dictionary to the database file.
28.4.7 – Close Wait
Close_Wait=n
Specifies a wait time of n minutes before RMU Restore
automatically closes the database. You must supply a value for
n.
In order to use this qualifier, the Open_Mode qualifier on the
RMU Restore command line must be set to Automatic.
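For example, the following sketch (the file name and wait time
are illustrative) restores a database that is closed
automatically after five minutes:
$ RMU/RESTORE/OPEN_MODE=AUTOMATIC/CLOSE_WAIT=5 MFP_FULL.RBF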
28.4.8 – Commit
Commit
NoCommit
Instructs Oracle RMU to commit the converted database to the
current version of Oracle Rdb before completing the restore
operation. Use this qualifier only when the backup file being
restored is from a previous version of Oracle Rdb. The conversion
is permanent and the database cannot be returned to the previous
version. The NoCommit qualifier instructs Oracle RMU not to
commit the converted database. In this case, you can roll back
the database to its original version by using the RMU Convert
command with the Rollback qualifier, or you can permanently
commit it to the current version by issuing the RMU Convert
command with the Commit qualifier. It is important to either
commit or roll back the conversion after you have verified that
it was successful; otherwise, unnecessary space is taken up in
the database to store the obsolete alternate version of the
metadata. (RMU will not let you convert to a newer version if a
previous conversion was never committed, even if it was years
ago.)
The Commit qualifier is the default.
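For example, the following sketch (file names are illustrative)
restores a previous-version backup without committing the
conversion, and then commits it after you have verified the
restored database:
$ RMU/RESTORE/NOCOMMIT PERS_V7_FULL.RBF
$ RMU/CONVERT/COMMIT MF_PERSONNEL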
28.4.9 – Confirm
Confirm
Noconfirm
Specifies that RMU Restore notify you of the name of the database
on which you are performing the incremental restore operation.
You can thus be sure that you have specified the correct .rdb
file name to which the incremental backup file will be applied.
This is the default for interactive processing.
Confirmation is especially important on an incremental restore
operation if you have changed the .rdb file name or created a new
version of the database during a restore operation from the full
backup file. (You must also specify the Root qualifier to create
a new version or change the .rdb file name.)
Specify the Noconfirm qualifier to have RMU Restore apply the
incremental backup file to the database without prompting for
confirmation. This is the default for batch processing.
RMU Restore ignores the Confirm and Noconfirm qualifiers unless
you use the Incremental qualifier.
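For example, the following sketch (the file name is illustrative)
applies an incremental backup file and prompts for confirmation
of the database name before proceeding:
$ RMU/RESTORE/INCREMENTAL/CONFIRM PERS_INCR.RBF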
28.4.10 – Directory
Directory=directory-spec
Specifies the default destination for the restored database
files. If you specify a file name or file extension, all restored
files are given that file name or file extension. There is no
default directory specification for this qualifier. If you do not
specify the Directory qualifier, RMU Restore attempts to restore
all the database files to the directories they were in at the
time the backup file was created; if those directories no longer
exist, the restore operation fails.
See the Usage Notes for information on how this qualifier
interacts with the Root and File qualifiers and for warnings
regarding restoring database files into a directory owned by a
resource identifier.
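For example, the following sketch (device, directory, and file
names are illustrative) restores all of the database files to a
single new directory:
$ RMU/RESTORE/DIRECTORY=DISK2:[RESTORED] MFP_FULL.RBF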
28.4.11 – Disk File
Disk_File[=(Reader_Threads=integer)]
Specifies that you want to perform a multithreaded restore
operation from disk files, floppy disks, or other disks external
to the PC. This qualifier must have been specified on the RMU
Backup command when the backup files from which you are restoring
were created.
The Reader_Threads keyword specifies the number of threads that
Oracle RMU should use when performing a multithreaded restore
operation from disk files. You can specify no more than one
reader thread per device specified on the command line (or in the
command parameter options file). By default, one reader thread is
used.
This qualifier and all qualifiers that control tape operations
(Label, Loader_Synchronization, Master, Media_Loader, and Rewind)
are mutually exclusive.
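For example, assuming the backup files were created with the
Disk_File qualifier, the following sketch (file name illustrative)
restores from them using two reader threads:
$ RMU/RESTORE/DISK_FILE=(READER_THREADS=2) MF_PERSONNEL.RBF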
28.4.12 – Duplicate
Duplicate
Noduplicate
Specifies a new database with the same content but different
identity from that of the original database. The default is the
Noduplicate qualifier.
The Duplicate qualifier creates a copy of your database that is
not expected to remain in sequence with the original database.
Note that you cannot interchange after-image journal (.aij) files
between the original and duplicate copy of the database because
each database is unique.
Use the Duplicate qualifier to create a duplicate database, or
use the Noduplicate qualifier to re-create the original database.
The Duplicate qualifier conflicts with the Incremental, Area, and
Online qualifiers.
28.4.13 – Encrypt
Encrypt=({Value=|Name=}[,Algorithm=])
The Encrypt qualifier decrypts the save set file of a database
backup.
Specify a key value as a string or the name of a predefined
key. If no algorithm name is specified, the default is DESCBC.
For details on the Value, Name, and Algorithm parameters, see
HELP ENCRYPT.
This feature requires the OpenVMS Encrypt product to be installed
and licensed on this system.
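For example, the following sketch (the key value is illustrative)
decrypts the backup save set using the default DESCBC algorithm:
$ RMU/RESTORE/ENCRYPT=(VALUE="MY_KEY_STRING") MF_PERSONNEL.RBF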
28.4.14 – Global Buffers
Global_Buffers=global-buffer-options
Allows you to change the default global buffer parameters when
you restore a database. The following options are available:
o Disabled
Use this option to disable global buffering for the database
being restored.
o Enabled
Use this option to enable global buffering for the database
being restored. You cannot specify both the Global_
Buffers=Disabled and Global_Buffers=Enabled qualifiers in
the same RMU Restore command.
o Total=total-buffers
Use this option to specify the number of buffers available for
all users. The minimum value you can specify is 2; the maximum
value you can specify is the global buffer count stored in the
.rdb file.
o User_Limit=buffers-per-user
Use this option to specify the maximum number of buffers
available to each user.
If you do not specify a Global_Buffers qualifier, the database
is restored with the values that were in effect when the database
was backed up.
When you specify two or more options with the Global_Buffers
qualifier, use a comma to separate each option and enclose the
list of options within parentheses.
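For example, the following sketch (the values are illustrative)
enables global buffering and sets both the total buffer count and
the per-user limit; note the comma-separated, parenthesized option
list:
$ RMU/RESTORE/GLOBAL_BUFFERS=(ENABLED,TOTAL=200,USER_LIMIT=5) -
_$ MF_PERSONNEL.RBF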
28.4.15 – Incremental
The Incremental qualifier restores a database from an incremental
backup file.
Use the Incremental qualifier only when you have first issued an
RMU Restore command that names the full backup file that was the
basis for this incremental backup file. Each incremental backup
file is tied to a particular full backup file.
After restoring both the full and the incremental backup files,
you have restored the database to the condition it was in when
you performed the incremental database backup operation.
By default, RMU Restore performs a full restore operation on the
backup file.
You cannot specify the After_Journal or Just_Corrupt qualifier
with the Incremental qualifier.
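For example, the following sketch (file names illustrative) first
restores the full backup file, then applies the incremental backup
file that was based on it:
$ RMU/RESTORE MF_PERSONNEL_FULL.RBF
$ RMU/RESTORE/INCREMENTAL MF_PERSONNEL_INCR.RBF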
28.4.16 – Journal
Journal=file-name
Allows you to specify a journal file to be used to improve tape
performance during a restore operation (including a by-area or
just-corrupt restore operation).
The backup operation creates the journal file and writes to it
a description of the backup operation. This description contains
identification of the tape drives, the tape volumes, and their
contents. The Journal qualifier directs RMU Restore to read the
journal file and select only the useful tape volumes.
The journal file must be the one created at the time the backup
operation was performed. If the wrong journal file is supplied,
RMU Restore returns an informational message and does not use the
specified journal file to select the volumes to be processed.
If you omit the Label qualifier, the restore operation creates a
list of volume labels from the contents of the journal file.
A by-area restore operation also constructs a list of useful
tape volume labels from the journal file; only those volumes are
mounted and processed.
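For example, the following sketch (file names illustrative) uses
the journal file written by the backup operation to select only
the useful tape volumes:
$ RMU/RESTORE/JOURNAL=MF_PERSONNEL_BACKUP.JNL MF_PERSONNEL.RBF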
28.4.17 – Label
Label=(label-name-list)
Specifies the 1- to 6-character string with which the volumes
of the backup file have been labeled. The Label qualifier is
applicable only to tape volumes. You must specify one or more
label names when you use the Label qualifier.
You can specify a list of tape labels for multiple tapes. If you
list multiple tape label names, separate the names with commas,
and enclose the list of names within parentheses.
In a normal restore operation, the Label qualifier you specify
with the RMU Restore command should be the same Label qualifier
you specified with the RMU Backup command that backed up your
database.
You can use the Label qualifier with indirect file references.
See the Indirect-Command-Files help entry for more information.
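For example, the following sketch (device and label names
illustrative) restores from two labeled tape volumes:
$ RMU/RESTORE/REWIND/LABEL=(TAPE01,TAPE02) MUA0:MF_PERSONNEL.RBF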
28.4.18 – Librarian
Librarian=options
Use the Librarian qualifier to restore files from data archiving
software applications that support the Oracle Media Management
interface. The file name specified on the command line identifies
the stream of data to be retrieved from the Librarian utility. If
you supply a device specification or a version number it will be
ignored.
Oracle RMU supports retrieval using the Librarian qualifier only
for data that has been previously stored by Oracle RMU using the
Librarian qualifier.
The Librarian qualifier accepts the following options:
o Reader_Threads=n
Use the Reader_Threads option to specify the number of backup
data streams to read from the Librarian utility. The value of
n can be from 1 to 99. The default is one reader thread. The
streams are named BACKUP_FILENAME.EXT, BACKUP_FILENAME.EXT02,
BACKUP_FILENAME.EXT03, up to BACKUP_FILENAME.EXT99. BACKUP_
FILENAME.EXT is the backup file name specified in the RMU
Backup command.
The number of reader threads specified for a database restore
from the Librarian utility should be equal to or less than the
number of writer threads specified for the database backup.
If the number of reader threads exceeds the number of writer
threads, the number of reader threads is set by Oracle RMU
to be equal to the number of data streams actually stored
in the Librarian utility by the backup. If the number of
reader threads specified for the restore is less than the
number of writer threads specified for the backup, Oracle RMU
will partition the data streams among the specified reader
threads so that all data streams representing the database are
restored.
The Volumes qualifier cannot be used with the Librarian
qualifier. Oracle RMU sets the volume number to be the actual
number of data streams stored in the specified Librarian
utility.
o Trace_file=file-specification
The Librarian utility writes trace data to the specified file.
o Level_Trace=n
Use this option as a debugging tool to specify the level of
trace data written by the Librarian utility. You can use a
pre-determined value of 0, 1, or 2, or a higher value defined
by the Librarian utility. The pre-determined values are:
- Level 0 traces all error conditions. This is the default.
- Level 1 traces the entry and exit from each Librarian
function.
- Level 2 traces the entry and exit from each Librarian
function, the value of all function parameters, and the
first 32 bytes of each read/write buffer, in hexadecimal.
o Logical_Names=(logical_name=equivalence-value,...)
You can use this option to specify a list of process logical
names that the Librarian utility can use to specify catalogs
or archives where Oracle Rdb backup files are stored,
Librarian debug logical names, and so on. See the specific
Librarian documentation for the definition of logical names.
The list of process logical names is defined by Oracle RMU
prior to the start of any Oracle RMU command that accesses the
Librarian application.
The following OpenVMS logical names must be defined for use with
a Librarian utility before you execute an Oracle RMU backup or
restore operation. Do not use the Logical_Names option provided
with the Librarian qualifier to define these logical names.
o RMU$LIBRARIAN_PATH
This logical name must be defined so that the shareable
Librarian image can be loaded and called by Oracle RMU backup
and restore operations. The translation must include the file
type (for example, .exe), and must not include a version
number. The shareable Librarian image must be an installed
(known) image. See the Librarian utility documentation for
the name and location of this image and how it should be
installed. For a parallel RMU backup, define RMU$LIBRARIAN_
PATH as a system-wide logical name so that the multiple
processes created by a parallel backup can all translate the
logical.
$ DEFINE /SYSTEM /EXECUTIVE_MODE -
_$ RMU$LIBRARIAN_PATH librarian_shareable_image.exe
o RMU$DEBUG_SBT
This logical name is not required. If it is defined, Oracle
RMU will display debug tracing information messages from
modules that make calls to the Librarian shareable image.
For a parallel RMU backup, the RMU$DEBUG_SBT logical should
be defined as a system logical so that the multiple processes
created by a parallel backup can all translate the logical.
The following lines are from a backup plan file created by the
RMU Backup/Parallel/Librarian command:
Backup File = MF_PERSONNEL.RBF
Style = Librarian
Librarian_trace_level = #
Librarian_logical_names = (-
logical_name_1=equivalence_value_1, -
logical_name_2=equivalence_value_2)
Writer_threads = #
The "Style = Librarian" entry specifies that the backup is going
to a Librarian utility. The "Librarian_logical_names" entry is
a list of logical names and their equivalence values. This is an
optional parameter provided so that any logical names used by a
particular Librarian utility can be defined as process logical
names before the backup or restore operation begins. For example,
some Librarian utilities provide support for logical names for
specifying catalogs or debugging.
You cannot use device-specific qualifiers such as Rewind,
Density, or Label with the Librarian qualifier because the
Librarian utility handles the storage media, not Oracle RMU.
28.4.19 – Loader Synchronization
Loader_Synchronization
Allows you to preload tapes in order to minimize the need for
operator support. When you specify the Loader_Synchronization
qualifier and specify multiple tape drives, the restore operation
reads from the first set of tape volumes concurrently, then waits
until all concurrent tape operations conclude before assigning
the next set of tape volumes. This ensures that the tapes can be
loaded into the loaders or stackers in the order required by the
restore operation.
The Loader_Synchronization qualifier does result in reduced
performance. For maximal performance, no drive should remain
idle, and the next identified volume should be placed on the
first drive that becomes idle. However, because the order in
which the drives become idle depends on many uncontrollable
factors and cannot be predetermined, the drives cannot be
preloaded with tapes.
Because the cost of using the Loader_Synchronization qualifier is
dependent on the hardware configuration and the system load, the
cost is unpredictable. A 5% to 20% additional elapsed time for
the operation is typical. You must determine whether the benefit
of a lower level of operator support compensates for the loss of
performance. The Loader_Synchronization qualifier is most useful
for large restore operations.
The Loader_Synchronization qualifier has no effect unless you
specify the Volumes qualifier also.
28.4.20 – Local Buffers
Local_Buffers=local-buffer-options
Allows you to change the default local buffer parameters when you
restore a database. The following options are available:
o Number=number-buffers
Use this option to specify the number of local buffers
available for all users. You must specify a number between
2 and 32,767 for the number-buffers parameter.
o Size=buffer-blocks
The size (in blocks) for each buffer. You must specify a
number between 2 and 64 for the buffer-blocks parameter.
If you specify a value smaller than the size of the largest
page defined, RMU Restore automatically adjusts the size of
the buffer to hold the largest page defined. For example, if
you specify the Local_Buffers=Size=8 qualifier and the largest
page size for the storage areas in your database is 64 blocks,
RMU Restore automatically interprets the Local_Buffers=Size=8
qualifier as though it were a Local_Buffers=Size=64 qualifier.
The value you specify for the Size option determines the
number of blocks for each buffer, regardless of whether local
buffering or global buffering is enabled for the database.
If you do not specify a Local_Buffers qualifier, the database is
restored with the values that were in effect when the database
was backed up.
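For example, the following sketch (the values are illustrative)
restores a database with 40 local buffers of 12 blocks each:
$ RMU/RESTORE/LOCAL_BUFFERS=(NUMBER=40,SIZE=12) MF_PERSONNEL.RBF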
28.4.21 – Log
Log
Log=Brief
Log=Full
Nolog
Specifies whether the processing of the command is reported
to SYS$OUTPUT. Specify the Log qualifier to request that the
progress of the restore operation be written to SYS$OUTPUT,
or the Nolog qualifier to suppress this report. If you specify
the Log=Brief option, which is the default if you use the Log
option without a qualifier, the log contains the start and
completion time of each storage area. If you specify the Log=Full
option, the log also contains thread assignment and storage area
statistics messages.
If you do not specify the Log or the Nolog qualifier, the default
is the current setting of the DCL verify switch. (The DCL SET
VERIFY command controls the DCL verify switch.)
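For example, the following sketch requests a full log, including
thread assignment and storage area statistics messages:
$ RMU/RESTORE/LOG=FULL MF_PERSONNEL.RBF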
28.4.22 – Master
Master
Allows you to explicitly state how drives should be used when
they are to be accessed concurrently. This is a positional
qualifier that designates a tape drive as a master tape drive.
When the Master qualifier is used, it must be used on the first
drive specified. All additional drives become slaves to that
master until the end of the command line, or until the next
Master qualifier, whichever comes first.
If the Master qualifier is used on a drive that does not have
an independent I/O path (not a hardware master), performance
decreases.
If the Master qualifier is not used, and concurrent tape access
is requested (using the Volumes=n qualifier), RMU Restore uses
the same automatic configuration procedure it employs with the
backup operation to select the master drives.
Using the Master qualifier is an error if you do not specify
concurrent tape access (you do not specify the Volumes=n
qualifier). See the description of the Volumes qualifier for
further information on specifying concurrent tape access.
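For example, the following sketch (the device names are
illustrative, and the drive-list form is an assumption based on
the corresponding RMU Backup syntax) designates two master drives,
each with one slave, for a concurrent restore from four tape
drives:
$ RMU/RESTORE/VOLUMES=4 -
_$ $111$MUA0:MF_PERSONNEL.RBF/MASTER, $112$MUA1:, -
_$ $113$MUA2:/MASTER, $114$MUA3: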
28.4.23 – Media Loader
Media_Loader
Nomedia_Loader
Use the Media_Loader qualifier to specify that the tape device
from which RMU Restore is reading the backup file has a loader
or stacker. Use the Nomedia_Loader qualifier to specify that the
tape device does not have a loader or stacker.
By default, if a tape device has a loader or stacker, RMU Restore
should recognize this fact. However, occasionally RMU Restore
does not recognize that a tape device has a loader or stacker.
When this happens, after reading the first tape, RMU Restore
issues a request to the operator for the next tape instead of
requesting the next tape from the loader or stacker. Similarly, sometimes
RMU Restore behaves as though a tape device has a loader or
stacker when actually it does not.
If you find that RMU Restore is not recognizing that your
tape device has a loader or stacker, specify the Media_Loader
qualifier. If you find that RMU Restore expects a loader or
stacker when it should not, specify the Nomedia_Loader qualifier.
28.4.24 – New Version
New_Version
Nonew_Version
Specifies whether new versions of database files should be
produced if the destination device and directory contain a
previous version of the database files.
If you use the New_Version qualifier, the new database file
versions are produced. The New_Version qualifier conflicts with
the Incremental qualifier.
If you use the Nonew_Version qualifier, the default, an error
occurs if an old copy exists of any of the database files being
restored.
A restore operation that creates a new database root (.rdb) file
must always either disable after-image journaling or create a
new .aij file. Attempting to use a pre-existing .aij file with a
restored database corrupts the journal and makes future recovery
from .aij files impossible. The New_Version qualifier cannot and
does not apply to the .aij file.
28.4.25 – Nodes Max
Nodes_Max=number-cluster-nodes
Specifies a new upper limit on the number of VMScluster nodes
from which users can access the restored database. The Nodes_Max
qualifier accepts values between 1 and 96 VMScluster nodes. The
actual maximum is the highest number of VMScluster nodes possible
in the current version of OpenVMS. The default value is the limit
defined for the database before it was backed up.
You cannot specify the Nodes_Max qualifier if you use the
Incremental or Area qualifier.
28.4.26 – Online
Online
Noonline
Specifies that the restore operation be performed while other
users are attached to the database. You can specify the online
qualifier only with the Area or Just_Corrupt qualifier. The pages
to be restored are locked for exclusive access, so the operation
is not compatible with any other use of the data in the specified
pages.
The default is the Noonline qualifier.
28.4.27 – Open Mode
Open_Mode=Automatic
Open_Mode=Manual
Allows you to change the mode for opening a database when you
restore that database. When you specify Open_Mode=Automatic,
users can invoke the database immediately after it is restored.
If you specify Open_Mode=Manual, an RMU Open command must be used
to open the database before users can invoke the database.
The Open_Mode qualifier also specifies the mode for closing a
database. If you specify Open_Mode=Automatic, you can also use
the Close_Wait qualifier to specify a time in minutes before the
database is automatically closed.
If you do not specify the Open_Mode qualifier, the database is
restored with the open mode of the database that was in effect
when the database was backed up.
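For example, the following sketch restores the database in manual
open mode, so that an RMU Open command is required before users
can invoke the database:
$ RMU/RESTORE/OPEN_MODE=MANUAL MF_PERSONNEL.RBF
$ RMU/OPEN MF_PERSONNEL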
28.4.28 – Options
Options=file-spec
Specifies the options file that contains storage area names,
followed by the storage area qualifiers that you want applied to
that storage area.
You can direct RMU Restore to create an options file for use
with this qualifier by specifying the Restore_Options qualifier
with the RMU Backup, RMU Dump, and RMU Dump Backup commands. See
Backup Database, Dump Database, and Dump Backup_File for details.
If you create your own options file, do not separate the storage
area names with commas. Instead, put each storage area name on a
separate line in the file. You can include any or all of the area
qualifiers in the options file. (See the format help entry under
this command for the list of Area qualifiers.) You can use the
DCL line continuation character, a hyphen (-), or the comment
character (!) in the options file. The default file extension is
.opt.
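For example, the following sketch shows an options file,
RESTORE.OPT, with one storage area per line (the area names and
qualifiers are illustrative), and the command that uses it:
EMPIDS_LOW /THRESHOLDS=(65,75,80) ! one area per line
EMPIDS_OVER /EXTENSION=ENABLE
$ RMU/RESTORE/OPTIONS=RESTORE.OPT MF_PERSONNEL.RBF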
28.4.29 – Page Buffers
Page_Buffers=number-buffers
Specifies the maximum number of buffers Oracle Rdb uses during
the RMU Restore operation while the database files are being
created. The value of the Page_Buffers qualifier can range from
1 to 5. The default is 3 buffers. Values larger than 3 might
improve performance, especially during incremental restore
operations.
When RMU Restore enters the stage of reconstructing internal
structures at the end of the restore operation, a high value
for the Page_Buffers qualifier can be useful for very large
databases. However, the cost of using these extra buffers is
that memory use is high. Thus, the trade-off during a restore
operation is memory use against performance.
28.4.30 – Path
Path=cdd-path
Specifies a data dictionary path into which the database
definitions are integrated. If you do not specify the Path
qualifier, RMU Restore uses the CDD$DEFAULT logical name value
of the user who entered the RMU Restore command.
If you specify a relative path name, Oracle Rdb appends the
relative path name you enter to the CDD$DEFAULT value. If the
cdd-path parameter contains nonalphanumeric characters, you must
enclose it within quotation marks ("").
Oracle Rdb ignores the Path qualifier if you use the Nocdd_
Integrate qualifier or if the data dictionary is not installed
on your system.
28.4.31 – Prompt
Prompt=Automatic
Prompt=Operator
Prompt=Client
Specifies where server prompts are to be sent. When you specify
Prompt=Automatic, prompts are sent to the standard input device,
and when you specify Prompt=Operator, prompts are sent to the
server console. When you specify Prompt=Client, prompts are sent
to the client system.
28.4.32 – Recovery
Recovery[=Aij_Buffers=n]
Norecovery
The Recovery=Aij_Buffers=n qualifier allows you to specify the
number of recovery buffers to use during an automatic recovery.
The default value of n is 100 recovery buffers.
The Recovery qualifier explicitly specifies that RMU Restore
should attempt an automatic recovery of the .aij files during the
restore operation.
Specify the Recovery=Aij_Buffers=n qualifier or the
Recovery qualifier only if .aij files are being retained. If
you specify either qualifier in a situation where .aij files
are not retained (the Aij_Options, After_Journal, or Duplicate
qualifier has been specified), a warning message is displayed and
RMU Restore performs the restore operation without attempting to
recover the .aij files.
The Norecovery qualifier specifies that RMU Restore should not
attempt an automatic recovery of the .aij files during the
restore operation. Specify this qualifier if you want to use
the RMU Recover command with the Until qualifier or if you intend
to perform an incremental restore operation.
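For example, the following sketch (file names and the time value
are illustrative) suppresses automatic recovery so that the RMU
Recover command can be used with the Until qualifier:
$ RMU/RESTORE/NORECOVERY MF_PERSONNEL.RBF
$ RMU/RECOVER/UNTIL="15-JAN-2002 12:00:00" MF_PERSONNEL.AIJ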
28.4.33 – Rewind
Rewind
Norewind
Specifies that the tape that contains the backup file will be
rewound before processing begins. The Norewind qualifier, the
default, causes the search for the backup file to begin at the
current tape position.
The Rewind and Norewind qualifiers are applicable only to tape
devices. RMU Restore returns an error message if you use these
qualifiers and the target device is not a tape device.
28.4.34 – Root
Root=root-file-spec
Specifies the database root (.rdb) file specification of the
restored database. See the Usage Notes for information on how
this qualifier interacts with the Directory, File, and Snapshot
qualifiers and for warnings regarding restoring database files
into a directory owned by a resource identifier.
The Root qualifier is only meaningful when used with a multifile
database.
28.4.35 – Transaction Mode
Transaction_Mode=(mode-list)
Sets the allowable transaction modes for the database root
file restored by the restore operation. The primary use of
this qualifier is when you restore a backup file (of a master
database) to create a Hot Standby database. Because only read-
only transactions are allowed on a standby database, you should
use the Transaction_Mode=Read_Only qualifier setting. This
setting prevents modifications to the standby database at all
times, even when replication operations are not active. For more
information on Hot Standby see the Oracle Rdb7 and Oracle CODASYL
DBMS: Guide to Hot Standby Databases. The mode-list can include
one or more of the following transaction modes:
o All - Enables all transaction modes
o Current - Enables all transaction modes that are set for the
source database. This is the default transaction mode.
o None - Disables all transaction modes
o [No]Batch_Update
o [No]Read_Only
o [No]Exclusive
o [No]Exclusive_Read
o [No]Exclusive_Write
o [No]Protected
o [No]Protected_Read
o [No]Protected_Write
o [No]Read_Write
o [No]Shared
o [No]Shared_Read
o [No]Shared_Write
Your restore operation must include the database root file.
Otherwise, RMU Restore returns the CONFLSWIT error when you issue
an RMU Restore command with the Transaction_Mode qualifier.
If you specify more than one transaction mode in the mode-list,
enclose the list in parentheses and separate the transaction
modes from one another with a comma. Note the following:
o When you specify a negated transaction mode, it indicates
that a mode is not an allowable access mode. For example, if
you specify the Noexclusive_Write access mode, it indicates
that exclusive write is not an allowable access mode for the
restored database.
o If you specify the Shared, Exclusive, or Protected transaction
mode, Oracle RMU assumes you are referring to both reading and
writing in that transaction mode.
o No mode is enabled unless you add that mode to the list, or
you use the All option to enable all transaction modes.
o You can list one transaction mode that enables or disables a
particular mode followed by another that does the opposite.
For example, Transaction_Mode=(Noshared_Write, Shared) is
ambiguous because the first value disables Shared_Write access
and the second value enables Shared_Write access. Oracle
RMU resolves the ambiguity by first enabling the modes as
specified in the mode-list and then disabling the modes as
specified in the mode-list. The order of items in the list is
irrelevant. In the example presented previously, Shared_Read
is enabled and Shared_Write is disabled.
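For example, the following sketch restores a backup file to
create a standby database that permits only read-only
transactions:
$ RMU/RESTORE/TRANSACTION_MODE=READ_ONLY MF_PERSONNEL.RBF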
28.4.36 – Users Max
Users_Max=number-users
Specifies a new upper limit on the number of users that can
simultaneously access the restored database. The valid range is
between 1 and 2032 users. The default value is the value defined
for the database before it was backed up.
You cannot specify the Users_Max qualifier if you use the
Incremental qualifier or the Area qualifier.
28.4.37 – Volumes
Volumes=n
Allows you to specify that concurrent tape access is to be used
to accelerate the restore operation.
The Volumes qualifier indicates concurrent tape access and
specifies the number of tape volumes in the backup file. The
number of volumes must be specified accurately for the restore
operation to complete.
If you are restoring from a multidisk backup file, the value of
"n" indicates the number of disk devices containing backup files
needed for the restore operation.
If you do not specify the Volumes qualifier, the restore
operation does not use concurrent tape access.
28.4.38 – Blocks Per Page
Blocks_Per_Page=integer
Lets you restore a database with larger mixed page sizes than
existed in the original database. This creates new free space on
each page in the storage area file and does not interfere with
record clustering. RMU Restore ignores this qualifier if the
integer specified is less than or equal to the current page size
of the area.
You might want to increase the page size in storage areas
containing hash indexes that are close to full. By increasing
the page size in such a situation, you prevent the storage area
from extending.
28.4.39 – Extension
Extension=Disable
Extension=Enable
Allows you to change the automatic file extension attribute
when you restore a database. These qualifiers are positional
qualifiers.
Use the Extension=Disable qualifier to disable automatic file
extension for a storage area.
Use the Extension=Enable qualifier to enable automatic file
extension for a storage area.
If you do not specify the Extension=Disable or Extension=Enable
qualifier, the storage areas are restored with the automatic file
extension attributes that were in effect when the database was
backed up.
28.4.40 – File
File=file-spec
Requests that the storage area to which this qualifier is applied
be restored in the specified location.
This qualifier is not valid for single-file databases. This is a
positional qualifier.
See the Usage Notes for information on how this qualifier
interacts with the Root, Directory, and Snapshot qualifiers and
for warnings regarding restoring database files into a directory
owned by a resource identifier.
28.4.41 – Just Corrupt
Just_Corrupt
This qualifier replaces the Just_Pages qualifier beginning in
Oracle Rdb V7.0.
Allows you to restore the corrupt pages and areas in the
database as recorded in the corrupt page table (CPT). The CPT
is maintained in the .rdb file. (Note that if the corrupt page
table becomes full, the area with the highest number of corrupt
pages is marked corrupt and the individual pages for that area
are removed from the CPT.)
Often, only one or a few pages in the database are corrupted
due to hardware or software faults. The Just_Corrupt qualifier
allows you to recover that database in minimal time with minimal
interference; availability of the uncorrupted data is unaffected.
It allows you to restrict the restoration to the pages (or areas)
logged as corrupt in the corrupt page table.
The Just_Corrupt qualifier is a positional qualifier. If you use
it in the global position, RMU Restore restores all the corrupt
pages and all the corrupt areas as logged in the corrupt page
table. If you use it in the local position, RMU Restore restores
only the corrupt pages (or the entire area) of the area name it
modifies.
It is possible to mix restoration of complete areas and just
corrupt pages in the same command. The following example restores
all of AREA_1 (regardless of whether or not it is corrupt), but
just the corrupt pages (logged to the CPT) in AREA_2.
$ RMU/RESTORE/AREA backup_file AREA_1, AREA_2/JUST_CORRUPT
Note that when the Just_Corrupt qualifier is used globally, all
the corrupt pages logged to the CPT for the areas specified
are restored. For example, the following command restores all
the corrupt pages logged to the CPT for AREA_1 and AREA_2.
(However, if one of the areas specified contains no corruptions,
an informational message is displayed and that area is not
restored.)
$ RMU/RESTORE/JUST_CORRUPT backup_file /AREA AREA_1, AREA_2
Restoration of corrupt pages and area can be performed on line.
Online operations lock only the corrupt pages or areas for the
duration of the restore operation. The remainder of the storage
area can be read or updated by an application. When an entire
area is restored on line, applications are locked out of the
entire area for the duration of the restore operation.
There are some restrictions on the use of the Just_Corrupt
qualifier:
o The backup file must be a full backup file that contains the
selected area.
o When space area management (SPAM) pages are restored, RMU
Restore rebuilds the SPAM page using information from the
range of data pages that the SPAM page manages.
o Area bit map (ABM) pages can be restored, but their content
is not reconstructed. If ABM pages have been corrupted,
regenerate them with the RMU Repair command.
o A by-page restore operation is like a by-area restore
operation in that after-image journal (AIJ) recovery is
required to make the restored data consistent with the rest
of the database.
Once the pages are restored, access to these restored pages is
prohibited until they are made consistent. Inconsistent pages
are stored in the corrupt page table (CPT) and have their
timestamp field flagged by Oracle Rdb.
o You can also use the Just_Corrupt qualifier in a restore
options file. However, you cannot use any of the following
qualifiers with the Just_Corrupt qualifier (neither within an
options file nor on the command line):
- Blocks_Per_Page
- Extension
- File
- Incremental
- Read_Only
- Read_Write
- Snapshot
- Spams
- Thresholds
You can use the Just_Corrupt qualifier in conjunction with
the Journal=file qualifier to greatly speed up processing of
a large tape backup file. When you use the Journal qualifier,
only those tapes containing corrupt pages, areas, or both, are
mounted and processed.
28.4.42 – Just Pages
Just_Pages[=(p1,p2,...)]
This qualifier has been replaced by the Just_Corrupt qualifier
beginning in Oracle Rdb V7.0. See the description of the Just_
Corrupt qualifier.
28.4.43 – Read Only
Use the Read_Only qualifier to change a read/write storage area
or a write-once storage area to a read-only storage area.
If you do not specify the Read_Only or the Read_Write qualifier,
the storage areas are restored with the read/write attributes
that were in effect when the database was backed up.
This is a positional qualifier.
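For example, the following command (the backup file and area
names are illustrative) restores the EMPIDS_LOW storage area and
changes it to a read-only storage area:
$ RMU/RESTORE/AREA MF_PERS_BU.RBF EMPIDS_LOW/READ_ONLY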
28.4.44 – Read Write
Use the Read_Write qualifier to change a read-only storage area
or a write-once storage area to a read/write storage area.
If you do not specify the Read_Only or the Read_Write qualifier,
the storage areas are restored with the read/write attributes
that were in effect when the database was backed up.
This is a positional qualifier.
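For example, the following command (the backup file and area
names are illustrative) restores the EMPIDS_LOW storage area and
changes it to a read/write storage area:
$ RMU/RESTORE/AREA MF_PERS_BU.RBF EMPIDS_LOW/READ_WRITE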
28.4.45 – Snapshot
Snapshot=(Allocation=n,File=file-spec)
The Allocation parameter specifies the snapshot file allocation
size, in pages (n), for a restored area. The File parameter
specifies a new snapshot file location for the restored storage
area to which the qualifier is applied. You can specify the
Allocation parameter only, the File parameter only, or both
parameters; however, if you specify the Snapshot qualifier, you
must specify at least one parameter.
This is one of the qualifiers used to alter the parameters of the
restored database from those defined at the time of the database
backup. Others are the Directory, Root, and File qualifiers.
See the Usage Notes for information on how this qualifier
interacts with the Root, File, and Directory qualifiers.
The Snapshot qualifier is a positional qualifier. It can be used
locally or globally, depending on where the qualifier is placed
on the command line. See Examples 22 and 23.
To save read/write disk space, you can specify that less space be
allocated for the storage area's .snp file when it remains as a
read/write file on a read/write disk. If the keyword Allocation
is omitted, the original allocation is used. This qualifier is
not valid for single-file databases.
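For example, the following command (the backup file, area name,
and allocation value are illustrative) restores the EMPIDS_LOW
storage area and reduces its snapshot file allocation to 50
pages:
$ RMU/RESTORE/AREA MF_PERS_BU.RBF EMPIDS_LOW -
_$ /SNAPSHOT=(ALLOCATION=50)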
You cannot specify an .snp file name for a single-file database.
When you create an .snp file for a single-file database, Oracle
Rdb does not store the file specification of the .snp file.
Instead, it uses the file specification of the database root
(.rdb) file to determine the file specification of the .snp file.
If you want to place the .snp file on a different device or
directory, Oracle Corporation recommends that you create a
multifile database. However, you can work around the restriction
by defining a search list for a concealed logical name. (Do
not use a nonconcealed rooted logical name to define database
files; a database created with a nonconcealed rooted logical
name can be backed up, but may not restore correctly when you
attempt to restore the files to a new directory.)
To create a database with an .snp file on a different device
or directory, define a search list by using a concealed logical
name. Specify the location of the root file as the first item in
the search list. When you create the database, use the logical
name for the directory specification. Then, copy the .snp file
to the second device. The following example demonstrates the
workaround:
$ ! Define a concealed logical name.
$ DEFINE /TRANS=CONCEALED/SYSTEM TESTDB USER$DISK1:[DATABASE], -
_$ USER$DISK2:[SNAPSHOT]
$
$ SQL
SQL> -- Create the database.
SQL> --
SQL> CREATE DATABASE FILENAME TESTDB:TEST;
SQL> EXIT
$ !
$ ! Copy the snapshot (.snp) file to the second disk.
$ COPY USER$DISK1:[DATABASE]TEST.SNP -
_$ USER$DISK2:[SNAPSHOT]TEST.SNP
$ !
$ ! Delete the snapshot (.snp) file from the original disk.
$ DELETE USER$DISK1:[DATABASE]TEST.SNP;
28.4.46 – Spams
Spams
Nospams
The Spams qualifier enables the space area management (SPAM)
pages for the specified area; the Nospams qualifier disables
them. The default is to leave the attribute unchanged.
The Spams and Nospams qualifiers are not allowed for a storage
area that has a uniform page format. This is a positional
qualifier.
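For example, the following command (the backup file and area
names are illustrative) restores the mixed-format EMPIDS_MID
storage area with its SPAM pages disabled:
$ RMU/RESTORE/AREA MF_PERS_BU.RBF EMPIDS_MID/NOSPAMS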
28.4.47 – Thresholds
Thresholds=(val1[,val2[,val3]])
Specifies a storage area's fullness percentage thresholds. You
can adjust SPAM thresholds to improve future space utilization in
the storage area. Each threshold value represents a percentage of
fullness on a data page. When a data page reaches the percentage
of fullness defined by a given threshold value, the space
management entry for the data page is updated to contain that
threshold value.
The Thresholds qualifier applies only to storage areas with a
mixed page format.
If you do not use the Thresholds qualifier with the RMU Restore
command, Oracle Rdb uses the storage area's original thresholds.
This is a positional qualifier.
See the Oracle Rdb7 Guide to Database Performance and Tuning for
more information on setting SPAM thresholds.
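For example, the following command (the backup file, area name,
and threshold values are illustrative) restores the mixed-format
EMPIDS_LOW storage area and sets its SPAM thresholds to 40%, 70%,
and 85%:
$ RMU/RESTORE/AREA MF_PERS_BU.RBF EMPIDS_LOW/THRESHOLDS=(40,70,85)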
28.5 – Usage Notes
o To use the RMU Restore command for a database, you must have
the RMU$RESTORE privilege in the root file access control
list (ACL) for the database or the OpenVMS SYSPRV or BYPASS
privilege.
o The RMU Restore command provides four qualifiers, Directory,
Root, File, and Snapshots, that allow you to specify the
target for the restored files. The target can be just a
directory, just a file name, or a directory and file name.
If you use all or some of these four qualifiers, apply them as
follows:
- Use the Root qualifier to indicate the target for the
restored database root file.
- Use local application of the File qualifier to specify the
target for the restored storage area or areas.
- Use local application of the Snapshots qualifier to specify
the target for the restored snapshot file or files.
- Use the Directory qualifier to specify a default target
directory. The default target directory is the directory
to which all files not qualified with the Root, File, or
Snapshots qualifier are restored. It is also the default
directory for files qualified with the Root, File, or
Snapshots qualifier if the target for these qualifiers does
not include a directory specification.
Note the following when using these qualifiers:
- Global application of the File qualifier when the target
specification includes a file name causes RMU Restore to
restore all of the storage areas to different versions
of the same file name. This creates a database that is
difficult to manage.
- Global application of the Snapshots qualifier when the
target specification includes a file name causes RMU
Restore to restore all of the snapshot files to different
versions of the same file name. This creates a database
that is difficult to manage.
- Specifying a file name or extension with the Directory
qualifier is permitted, but causes RMU Restore to restore
all of the files (except those specified with the File
or Root qualifier) to different versions of the same file
name. Again, this creates a database that is difficult to
manage.
See Example 17.
o When you restore a database into a directory owned by a
resource identifier, the ACE for the directory is applied
to the database root file ACL first, and then the Oracle RMU
ACE is added. This method is employed to prevent database
users from overriding OpenVMS file security. However, this can
result in a database that you consider yours but that you
lack the Oracle RMU privileges to access. See the Oracle
Rdb Guide to Database Maintenance for details.
o If a backup file to tape is created using a single tape
device, it must be restored using a single tape device; it
cannot be restored using multiple tape devices.
NOTE
An incremental backup file created for a database running
under one version of Oracle Rdb cannot be applied if
that database has been restored under another version of
Oracle Rdb. For example, if you do the following, step 6
fails with the error message, "XVERREST, Cross version
RESTORE is not possible for by-area or incremental
functions":
1. Apply a full backup operation to a Version 7.1
database.
2. Apply updates to the database.
3. Perform an incremental backup operation on the
database.
4. Move backup files to a system running Oracle Rdb
Version 7.2.
5. Restore the database by using the full backup file.
6. Attempt to apply the incremental backup file created
in step 3.
o If you apply an incremental backup file, you must specify the
Norecovery qualifier when you issue a full RMU Restore command
for the corresponding full backup file.
o If you mistakenly attempt to restore a backup file in a
version of Oracle Rdb that is earlier than the version for
which the backup file was created, you might receive INVRECTYP
errors and your operation will probably terminate with an
access violation (ACCVIO) exception. If you receive this
error, check the version of the backup file and the version
of Oracle Rdb you are running. Be sure the environment version
matches, or is greater than, the version under which the
backup file was created.
o RMU Restore might create an .rdb file and .rda files when it
starts up. If you specify the Log qualifier, these files will
be noted in the log file. These are not database files until
the end of the operation when they have been populated with
the backed-up contents. Therefore, if the restore operation
aborts or is stopped using Ctrl/Y, you must delete these
unpopulated files by using the DCL DELETE command. You can
determine which files to delete from the contents of the backup
file and the form of the command issued, or by examining the output
the form of the command issued, or by examining the output
in the log file if you specified the Log qualifier. Deleting
the files usually requires OpenVMS privileges. Until they are
restored, these files are not a database, and Oracle RMU or
SQL operations do not function with them.
o RMU Restore preserves any area reservations and after-image
journal (.aij) file reservations that exist in the backed-up
database.
o If you restore a database without its root file ACL (using the
Noacl qualifier with the RMU Restore command, for example),
a user who wants to create ACL entries for the database must
have the OpenVMS SECURITY or BYPASS privilege.
o The RMU Restore command with the Area and Online qualifiers
requires exclusive access to the area files being restored.
The RMU Restore command with the Area, Online, and Just_
Corrupt qualifiers requires exclusive access to only the pages
being restored.
o There are no restrictions on the use of the Nospams qualifier
with storage areas that have a mixed page format, but the use
of the Nospams qualifier typically causes severe performance
degradation. The Nospams qualifier is useful only where
updates are rare and batched, and access is primarily by
database key (dbkey).
o The RMU Restore command automatically uses the RMU Convert
command when restoring the database to a system with a
more recent version of Oracle Rdb software. When this is
done, the metadata in the Oracle Rdb database changes and
invalidates incremental backup files from the previous
version. By default, no areas are reserved and one .aij file
is reserved. (You can override the after-image journal default
reservation by using the Aij_Options qualifier.) See Convert
for information on the versions of Oracle Rdb that the Convert
command supports.
o Always back up your Oracle Rdb databases as recommended in the
Oracle Rdb Installation and Configuration Guide just prior to
installing a newer version of Oracle Rdb software. The last
backup file made prior to converting to a more recent version
of Oracle Rdb should be a full and complete backup file.
o See the Oracle Rdb Guide to Database Maintenance for
information on the steps RMU Restore follows in tape label
checking when you restore a database from tape.
o RMU Restore might initialize the SPAM thresholds for some data
pages of some storage areas that have a uniform page format
to values that are not acceptable to the RMU Verify command.
This occurs when some of the data pages in a logical area are
restored before the logical area definition (Area Inventory).
This is not a frequent occurrence, and when it does happen,
the consequences are usually cosmetic (the RMU Verify command
issues a warning message for each page affected). However, if
many pages are affected, the volume of warnings can cause you
to overlook a real problem. Moreover, in some cases, this can
result in additional I/O operations when new data is stored in
an affected table.
As a workaround, you can use the RMU Repair command to
reconstruct the SPAM pages in one or more storage areas. The
RMU Repair command corrects the condition caused by the RMU
Restore command as well as other SPAM page corruptions. See
the help entry for the RMU Repair command for more information
on the RMU Repair command.
28.6 – Examples
Example 1
The following example restores the mf_personnel database from
the backup file pers_bu.rbf and requests a new version of the
database file. Because the After_Journal qualifier has been
specified, automatic recovery will not be attempted.
$ RMU/RESTORE/NEW_VERSION/AFTER_JOURNAL=AIJ_DISK:[AIJS]PERSAIJ -
_$ /NOCDD_INTEGRATE/LOG PERS_BU -
_$ EMP_INFO /THRESHOLDS=(65,75,80)/BLOCKS_PER_PAGE=3
The command changes the .aij file location and name to
AIJ_DISK:[AIJS]PERSAIJ.AIJ, prevents integration with the data
dictionary, and displays the progress of the restore operation.
For the storage area, EMP_INFO, the command changes the SPAM
threshold values to 65%, 75%, and 80%, and increases the number
of blocks per page to 3 blocks.
Example 2
Assume that at 10 A.M., Wednesday, October 25, 2005, a disk
device hardware failure corrupted all the files on the device,
including the mf_personnel.rdb file. The following command
restores the full database backup file (pers_full_oct22.rbf)
created on the previous Sunday and then restores the incremental
backup file made on Tuesday. Note that an incremental database
backup file was created on Monday, but each new incremental
backup file made since the latest full backup file replaces
previous incremental backup files made since the last full backup
operation.
$ RMU/RESTORE/LOG/NORECOVERY MUA1:PERS_FULL_OCT22.RBF
$ RMU/RESTORE/INCREMENTAL/CONFIRM/LOG/NORECOVERY -
_$ PERS_INCR_OCT24.RBF
At this point, the database is current up until 11:30 P.M.,
Tuesday, when the last incremental backup file was made of mf_
personnel. Because after-image journaling is enabled for this
database, automatic recovery of the .aij file could have been
employed. However, if the recovery process should fail for
some reason or, as in this case, the Norecovery qualifier is
specified, you can still use the RMU Recover command to apply
the .aij file that contains changes made to the database from
11:30 P.M., Tuesday, until just before the hardware failure to
the restored mf_personnel.rdb file and its storage area files.
For example:
$ RMU/RECOVER/UNTIL = "25-OCT-2005 09:55:00.00" -
_$ AIJ_DISK:[AIJS]PERSAIJ.AIJ;1
Example 3
If a storage area is on a disk that fails, you might want to
move that storage area to another disk by using the RMU Restore
command. The following RMU Restore command restores only the
EMPIDS_OVER storage area from the full backup file of mf_
personnel, and moves the EMPIDS_OVER storage area and snapshot
(.snp) file to a new location on the 333$DUA11 disk. The recovery
operation is required only if the needed .aij file has been
backed up and is no longer the current .aij file.
$ RMU/RESTORE/AREA 222$DUA20:[BACKUPS]MF_PERS_BU.RBF -
_$ EMPIDS_OVER /FILE=333$DUA11:[DBS]EMPIDS_OVER.RDA -
_$ /SNAPSHOT=(FILE=333$DUA11:[DBS]EMPIDS_OVER.SNP)
$ !
$ ! Recovery from the after-image journal is automatic. If
$ ! automatic recovery is not possible, or if the Norecovery
$ ! qualifier had been specified, perform the following:
$ !
$ RMU/RECOVER/AREA AIJ_DISK:PERS.AIJ
Example 4
The following example demonstrates how you can use by-area backup
and restore operations for a single storage area in the mf_
personnel database. In addition, it demonstrates the use of the
automatic recovery feature of the RMU Restore command.
$ !
$ ! Create .aij files for the database. Because three
$ ! .aij files are created, fixed-size .aij
$ ! journaling will be used.
$ !
$ RMU/SET AFTER_JOURNAL/ENABLE/RESERVE=4 -
_$ /ADD=(name=AIJ1, FILE=DISK2:[CORP]AIJ_ONE) -
_$ /ADD=(name=AIJ2, FILE=DISK2:[CORP]AIJ_TWO) -
_$ /ADD=(NAME=AIJ3, FILE=DISK2:[CORP]AIJ_THREE) -
_$ MF_PERSONNEL.RDB
%RMU-W-DOFULLBCK, full database backup should be done to
ensure future recovery
$ RMU/BACKUP MF_PERSONNEL DISK3:[BACKUP]MF_PERS.RBF
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> --
SQL> -- On Monday, define a new row in the DEPARTMENTS table. The
SQL> -- new row is stored in the DEPARTMENTS storage area.
SQL> --
SQL> INSERT INTO DEPARTMENTS
cont> (DEPARTMENT_CODE, DEPARTMENT_NAME, MANAGER_ID,
cont> BUDGET_PROJECTED, BUDGET_ACTUAL)
cont> VALUES ('WLNS', 'Wellness Center', '00188', 0, 0);
1 row inserted
SQL>
SQL> COMMIT;
SQL> EXIT;
$ !
$ ! Assume that you know that the only storage area ever updated in
$ ! the mf_personnel database on Tuesdays is the SALARY_HISTORY
$ ! storage area, and you decide that you will create an incremental
$ ! backup file of just the SALARY_HISTORY storage area on Tuesday.
$ ! Before you perform the by-area backup operation on the
$ ! SALARY_HISTORY storage area on Tuesday, you must perform a full
$ ! and complete backup operation on the mf_personnel database when
$ ! it is in a known and working state.
$ !
$ RMU/BACKUP MF_PERSONNEL.RDB -
_$ DISK3:[BACKUP]MF_MONDAY_FULL.RBF
$ !
SQL> --
SQL> -- On Tuesday, two rows are updated in
SQL> -- the SALARY_HISTORY storage area.
SQL> --
SQL> UPDATE SALARY_HISTORY
cont> SET SALARY_END ='20-JUL-2003 00:00:00.00'
cont> WHERE SALARY_START='14-JAN-1993 00:00:00.00'
cont> AND EMPLOYEE_ID = '00164';
1 row updated
SQL> UPDATE SALARY_HISTORY
cont> SET SALARY_START ='5-JUL-2000 00:00:00.00'
cont> WHERE SALARY_START='5-JUL-1990 00:00:00.00'
cont> AND EMPLOYEE_ID = '00164';
1 row updated
SQL> COMMIT;
SQL> EXIT;
$ !
$ ! On Tuesday, you create an incremental backup file of the
$ ! SALARY_HISTORY storage area only. Only the SALARY_HISTORY
$ ! storage area is included in the by-area backup file.
$ ! Oracle RMU provides an informational message telling
$ ! you that not all storage areas in the database are included
$ ! in the mf_tuesday_partial.rbf backup file.
$ RMU/BACKUP/INCLUDE=(SALARY_HISTORY) -
_$ /INCREMENTAL/LOG DISK1:[USER]MF_PERSONNEL.RDB -
_$ DISK3:[BACKUPS]MF_TUESDAY_PARTIAL.RBF
%RMU-I-NOTALLARE, Not all areas will be included in
this backup file
%RMU-I-LOGLASCOM, Last full and complete backup was dated
18-JAN-2006 11:19:46.31
%RMU-I-BCKTXT_00, Backed up root file
DISK1:[DB]MF_PERSONNEL.RDB;1
%RMU-I-BCKTXT_03, Starting incremental backup of
storage area DISK3:[SA]SALARY_HISTORY.RDA;1 at
18-JAN-2006 11:20:49.29
%RMU-I-BCKTXT_13, Completed incremental backup of
storage area DISK3:[SA]SALARY_HISTORY.RDA;1 at
18-JAN-2006 11:20:49.40
%RMU-I-COMPLETED, BACKUP operation completed at
18-JAN-2006 11:20:49.59
.
.
.
$ !
SQL> -- Update another row in the SALARY_HISTORY table:
SQL> UPDATE SALARY_HISTORY
cont> SET SALARY_START ='23-SEP-1991 00:00:00.00'
cont> WHERE SALARY_START='21-SEP-1981 00:00:00.00'
cont> AND EMPLOYEE_ID = '00164';
1 row updated
SQL> COMMIT;
SQL> EXIT;
$ ! Assume that a disk device hardware error occurs here
$ ! and only the SALARY_HISTORY storage area and snapshot
$ ! file are lost. Also assume that the database root (.rdb)
$ ! file and other storage areas in the database are still
$ ! fine and do not need to be restored or recovered.
$ ! Therefore, you do not need to restore the .rdb file or
$ ! other storage areas from the full and complete backup
$ ! file. Because only the SALARY_HISTORY storage area was
$ ! lost, you must do the following:
$ ! 1) Restore the SALARY_HISTORY storage area and snapshot
$ ! file from the last full and complete backup file. Note
$ ! this operation can be done on line. Specify the Norecovery
$ ! qualifier because you still have an incremental restore
$ ! operation to perform.
$ ! 2) Restore the SALARY_HISTORY storage area from the last
$ ! incremental backup file. Note this operation can be
$ ! done on line. This time do not specify the Norecovery
$ ! qualifier so that the automatic recovery provided by
$ ! Oracle RMU will be implemented.
$ !
$ RMU/RESTORE/NOCDD_INTEGRATE/ONLINE/LOG/NORECOVERY -
_$ /AREA DISK3:[BACKUP]MF_MONDAY_FULL.RBF SALARY_HISTORY
%RMU-I-RESTXT_21, Starting full restore of storage area
DISK1:[USER]SALARY_HISTORY.RDA;1 at 18-JAN-2006 11:25:13.17
%RMU-I-RESTXT_24, Completed full restore of storage area
DISK1:[USER]SALARY_HISTORY.RDA;1 at 18-JAN-2006 11:25:13.86
%RMU-I-RESTXT_01, Initialized snapshot file
DISK1:[USER]SALARY_HISTORY.SNP;1
%RMU-I-LOGINIFIL, contains 100 pages, each page is 2
blocks long
%RMU-I-AIJWASON, AIJ journaling was active when the database
was backed up
%RMU-I-AIJRECFUL, Recovery of the entire database starts with
AIJ file sequence 0
%RMU-I-AIJRECARE, Recovery of area SALARY_HISTORY starts with
AIJ file sequence 0
%RMU-I-COMPLETED, RESTORE operation completed at 18-JAN-2006 11:25:14.51
$ RMU/RESTORE/NOCDD_INTEGRATE/INCREMENTAL/ONLINE/LOG -
_$ /AREA DISK3:[BACKUPS]MF_TUESDAY_PARTIAL.RBF SALARY_HISTORY
DISK1:[USER]MF_PERSONNEL.RDB;1, restore incrementally? [N]:Y
%RMU-I-RESTXT_22, Starting incremental restore of storage area
DISK1:[USER]SALARY_HISTORY.RDA;1 at 18-JAN-2006 11:29:35.54
%RMU-I-RESTXT_25, Completed incremental restore of storage area
DISK1:[USER]SALARY_HISTORY.RDA;1 at 18-JAN-2006 11:29:35.64
%RMU-I-RESTXT_01, Initialized snapshot file
DISK1:[USER]SALARY_HISTORY.SNP;1
%RMU-I-LOGINIFIL, contains 100 pages, each page is 2
blocks long
%RMU-I-AIJWASON, AIJ journaling was active when the database
was backed up
%RMU-I-AIJRECFUL, Recovery of the entire database starts with
AIJ file sequence 0
%RMU-I-AIJRECARE, Recovery of area SALARY_HISTORY starts with
AIJ file sequence 0
%RMU-I-AIJBADAREA, inconsistent storage area
DISK1:[USER]SALARY_HISTORY.RDA;1 needs AIJ sequence number 0
%RMU-I-LOGRECDB, recovering database file
DISK1:[USER]MF_PERSONNEL.RDB;1
%RMU-I-AIJAUTOREC, starting automatic after-image journal recovery
%RMU-I-LOGOPNAIJ, opened journal file DISK2:[CORP]AIJ_ONE.AIJ;17
%RMU-I-AIJONEDONE, AIJ file sequence 0 roll-forward operations completed
%RMU-I-LOGRECOVR, 1 transaction committed
%RMU-I-LOGRECOVR, 0 transactions rolled back
%RMU-I-LOGRECOVR, 3 transactions ignored
%RMU-I-AIJNOACTIVE, there are no active transactions
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJALLDONE, after-image journal roll-forward operations completed
%RMU-I-LOGSUMMARY, total 1 transaction committed
%RMU-I-LOGSUMMARY, total 0 transactions rolled back
%RMU-I-LOGSUMMARY, total 3 transactions ignored
%RMU-I-AIJSUCCES, database recovery completed successfully
Example 5
In the following example, the options file specifies that the
storage area (.rda) files are to be restored to different disks.
Note that storage area snapshot (.snp) files are restored to
different disks from one another and from their associated
storage area (.rda) files; this is recommended for optimal
performance. (This example assumes that the disks specified for
each storage area file in options_file.opt are different from
those where the storage area files currently reside.)
$ RMU/RESTORE/NOCDD_INTEGRATE/OPTIONS=OPTIONS_FILE.OPT -
_$ MF_PERS_BCK.RBF
$ TYPE OPTIONS_FILE.OPT
EMPIDS_LOW /FILE=DISK1:[CORPORATE.PERSONNEL]EMPIDS_LOW.RDA -
/SNAPSHOT=(FILE=DISK2:[CORPORATE.PERSONNEL]EMPIDS_LOW.SNP )
EMPIDS_MID /FILE=DISK3:[CORPORATE.PERSONNEL]EMPIDS_MID.RDA -
/SNAPSHOT=(FILE=DISK4:[CORPORATE.PERSONNEL]EMPIDS_MID.SNP )
EMPIDS_OVER /FILE=DISK5:[CORPORATE.PERSONNEL]EMPIDS_OVER.RDA -
/SNAPSHOT=(FILE=DISK6:[CORPORATE.PERSONNEL]EMPIDS_OVER.SNP )
DEPARTMENTS /FILE=DISK7:[CORPORATE.PERSONNEL]DEPARTMENTS.RDA -
/SNAPSHOT=(FILE=DISK8:[CORPORATE.PERSONNEL]DEPARTMENTS.SNP )
SALARY_HISTORY /FILE=DISK9:[CORPORATE.PERSONNEL]SALARY_HISTORY.RDA -
/SNAPSHOT=(FILE=DISK10:[CORPORATE.PERSONNEL]SALARY_HISTORY.SNP )
JOBS /FILE=DISK7:[CORPORATE.PERSONNEL]JOBS.RDA -
/SNAPSHOT=(FILE=DISK8:[CORPORATE.PERSONNEL]JOBS.SNP )
EMP_INFO /FILE=DISK9:[CORPORATE.PERSONNEL]EMP_INFO.RDA -
/SNAPSHOT=(FILE=DISK10:[CORPORATE.PERSONNEL]EMP_INFO.SNP )
RESUME_LISTS /FILE=DISK11:[CORPORATE.PERSONNEL]RESUME_LISTS.RDA -
/SNAPSHOT=(FILE=DISK12:[CORPORATE.PERSONNEL]RESUME_LISTS.SNP )
RESUMES /FILE=DISK9:[CORPORATE.PERSONNEL]RESUMES.RDA -
/SNAPSHOT=(FILE=DISK10:[CORPORATE.PERSONNEL]RESUMES.SNP )
Example 6
The following example shows what .aij file sequence to use
following an RMU Restore command with the Area qualifier if
automatic recovery fails:
$ RMU/RESTORE/AREA MFPERS_62691.RBF -
DEPARTMENTS, JOBS
.
.
.
%RMU-I-AIJWASON, AIJ journaling was active when the
database was backed up
%RMU-I-AIJRECFUL, Recovery of the entire database
starts with AIJ file sequence 0
Example 7
The following example shows how to move a single-file database to
a new directory, using the RMU Backup and RMU Restore commands:
$ RMU/BACKUP PERSONNEL PERS
$!
$ RMU/RESTORE/NOCDD/NOAFTER_JOURNAL -
_$ /DIRECTORY=DISK4:[USER2] PERS
Example 8
The following example shows how to rename a single-file database
when you move the database by using the RMU Backup and RMU
Restore commands:
$ RMU/BACKUP PERSONNEL PERS
$!
$ RMU/RESTORE/NOCDD/NOAFTER_JOURNAL -
_$ /DIRECTORY=DISK4:[USER2]TEST_PERSONNEL PERS
Example 9
The following example causes the database being restored from
the mf_pers_bck.rbf backup file to have 60 global buffers, with
a limit of 2 buffers for each database user. Because the Enabled
option is used, global buffering is in effect for the database
immediately after it is restored:
$ RMU/RESTORE/NOCDD/GLOBAL_BUFFERS=(ENABLED,TOTAL=60,USER_LIMIT=2) -
_$ MF_PERS_BCK.RBF
Example 10
The following command causes the SALARY_HISTORY storage area
from the database being restored from the mf_pers_bu.rbf backup
file to be restored as a read-only storage area. None of the
other database storage areas are modified as part of this restore
operation.
$ RMU/RESTORE/NOCDD MF_PERS_BU.RBF SALARY_HISTORY /READ_ONLY
Example 11
The following example assumes that you are using multiple tape
drives to perform a large restore operation. By specifying the
Loader_Synchronization and Volumes qualifiers, this command does
not require you to load tapes as each completes. Instead, you
can load tapes on a loader or stacker and the RMU restore process
will wait until all concurrent tape operations have concluded
for one set of tape volumes before assigning the next set of tape
volumes. This example assumes that the backup operation used two
tape output threads and each thread wrote four tapes.
This example uses Master qualifiers to indicate that you want the
$111$MUA0: and $333$MUA2: drives to be master drives.
Using this example, you would:
1. Allocate each tape drive.
2. Manually place tapes BACK01 and BACK05 on the $111$MUA0:
master drive.
3. Manually place tapes BACK02 and BACK06 on the $333$MUA2:
master drive.
4. Manually place tapes BACK03 and BACK07 on the $222$MUA1: slave
drive.
5. Manually place tapes BACK04 and BACK08 on the $444$MUA3: slave
drive.
6. Mount the first volume (BACK01).
7. Perform the restore operation.
8. Dismount the last tape mounted.
9. Deallocate each tape drive.
$ ALLOCATE $111$MUA0:
$ ALLOCATE $222$MUA1:
$ ALLOCATE $333$MUA2:
$ ALLOCATE $444$MUA3:
$
$ MOUNT/FOREIGN $111$MUA0:
$
$ RMU/RESTORE/LOG/REWIND/LOADER_SYNCHRONIZATION -
_$ /LABEL=(BACK01, BACK02, BACK03, BACK04, BACK05, -
_$ BACK06, BACK07, BACK08) -
_$ /VOLUMES=8 -
_$ $111$MUA0:PERS_FULL_MAR30.RBF/MASTER, $222$MUA1: -
_$ $333$MUA2:/MASTER, $444$MUA3:
$
$ DISMOUNT $444$MUA3:
$
$ DEALLOCATE $111$MUA0:
$ DEALLOCATE $222$MUA1:
$ DEALLOCATE $333$MUA2:
$ DEALLOCATE $444$MUA3:
Example 12
The following example demonstrates the automatic .aij recovery
mechanism in the RMU Restore command. The example does the
following:
o Uses the RMU Set After_Journal command to reserve space for
four .aij files, adds three .aij files, and enables after-
image journaling
o Performs a backup operation on the database
o Performs database update activity, which will be written to an
.aij file
o Determines the database root file is lost
o Restores and recovers the database in one RMU Restore command
$ SET DEFAULT DISK1:[USER]
$ !
$ RMU/SET AFTER_JOURNAL/ENABLE/RESERVE=4 -
_$ /ADD=(name=AIJ1, FILE=DISK2:[CORP]AIJ_ONE) -
_$ /ADD=(name=AIJ2, FILE=DISK2:[CORP]AIJ_TWO) -
_$ /ADD=(NAME=AIJ3, FILE=DISK2:[CORP]AIJ_THREE) -
_$ MF_PERSONNEL
%RMU-W-DOFULLBCK, full database backup should be done
to ensure future recovery
$ !
$ ! Back up database, as instructed.
$ !
$ RMU/BACKUP MF_PERSONNEL DISK3:[BACKUPS]MF_PERS.RBF
$ !
$ ! Database update activity occurs.
$ !
$!
$! Database is lost. Issue the RMU Restore command to
$! restore and recover the database. Because the Norecovery
$! qualifier is not specified, Oracle RMU will
$! automatically attempt to recover the database.
$!
$ RMU/RESTORE DISK3:[BACKUPS]MF_PERS.RBF/NOCDD_INTEGRATE
%RMU-I-AIJRSTAVL, 3 after-image journals available for use
%RMU-I-AIJRSTMOD, 1 after-image journal marked as "modified"
%RMU-I-AIJISON, after-image journaling has been enabled
%RMU-W-DOFULLBCK, full database backup should be done
to ensure future recovery
%RMU-I-LOGRECDB, recovering database file
DISK1:[USER]MF_PERSONNEL.RDB;1
%RMU-I-AIJAUTOREC, starting automatic after-image
journal recovery
%RMU-I-AIJONEDONE, AIJ file sequence 0 roll-forward
operations completed
%RMU-I-AIJONEDONE, AIJ file sequence 1 roll-forward
operations completed
%RMU-W-NOTRANAPP, no transactions in this journal
were applied
%RMU-I-AIJALLDONE, after-image journal roll-forward
operations completed
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJFNLSEQ, to start another AIJ file recovery,
the sequence number needed will be 1
Example 13
The following example demonstrates how to restore and recover all
the corrupt pages and areas in the mf_personnel database. Assume
that the RMU Show Corrupt_Pages command shows that the JOBS
storage area is corrupt and that only page 3 in the DEPARTMENTS
storage area is corrupt. All the other storage areas are neither
corrupt nor inconsistent. Because the Just_Corrupt qualifier is
specified in the global position, and mf_personnel.rbf is a full
backup file, the RMU restore process restores all of the JOBS
storage area and just page 3 in the DEPARTMENTS storage area.
If after-image journaling is enabled, automatic recovery will be
attempted.
$ RMU/RESTORE/AREA/JUST_CORRUPT MF_PERSONNEL.RBF
Example 14
The following example demonstrates how to restore and recover
specific corruptions in the mf_personnel database. Like example
13, assume that the RMU Show Corrupt_Pages command shows that
the JOBS storage area is corrupt and that only page 3 in the
DEPARTMENTS storage area is corrupt. All the other storage
areas are neither corrupt nor inconsistent. The backup file,
mf_partial.rbf, is a by-area backup file containing backups of
the JOBS, DEPARTMENTS, and SALARY_HISTORY storage areas. In this
example, the JOBS, DEPARTMENTS, and SALARY_HISTORY areas are
specified for restoring. Because the SALARY_HISTORY area contains
no corruptions, an informational message is returned. The RMU
restore process restores all of the JOBS storage area and just
page 3 in the DEPARTMENTS storage area. If after-image journaling
is enabled, automatic recovery will be attempted.
$ RMU/RESTORE/JUST_CORRUPT/AREA MF_PARTIAL.RBF JOBS, -
_$ DEPARTMENTS,SALARY_HISTORY
%RMU-I-RESTXT_20, Storage area DISK1:[AREA]SALARY_HISTORY.RDA;1 is not
corrupt and will not be restored
Example 15
The following example demonstrates how to restore and recover
specific corruptions in the mf_personnel database along with
restoring an area that is not corrupt. Like example 13, assume
that the RMU Show Corrupt_Pages command shows that the JOBS
storage area is corrupt and that only page 3 in the DEPARTMENTS
storage area is corrupt. All the other storage areas are neither
corrupt nor inconsistent. The backup file, mf_personnel.rbf, is a
full backup file. In this example, the Just_Corrupt qualifier is
used locally with the DEPARTMENTS storage area.
The JOBS, DEPARTMENTS, and SALARY_HISTORY areas are specified
for restoring. Although the SALARY_HISTORY area contains no
corruptions, an informational message is not returned in this
case because by specifying the Just_Corrupt qualifier locally
with DEPARTMENTS, the Restore command is requesting that the RMU
restore process restore the JOBS and SALARY_HISTORY storage areas
regardless of corruptions, and the DEPARTMENTS storage area be
restored to fix corruptions. The RMU restore process restores
all of the JOBS and SALARY_HISTORY storage areas and just page
3 in the DEPARTMENTS storage area. If after-image journaling is
enabled, automatic recovery will be attempted.
$ RMU/RESTORE/AREA MF_PERSONNEL.RBF JOBS, SALARY_HISTORY, -
_$ DEPARTMENTS/JUST_CORRUPT
Example 16
The following example is the same as example 15, except the Just_
Corrupt qualifier is specified locally with the SALARY_HISTORY
storage area. Because the SALARY_HISTORY storage area contains no
corruptions, an informational message is returned:
$ RMU/RESTORE/AREA MF_PERSONNEL.RBF JOBS,SALARY_HISTORY/JUST_CORRUPT, -
_$ DEPARTMENTS/JUST_CORRUPT
%RMU-I-RESTXT_20, Storage area DISK1:[AREA]SALARY_HISTORY.RDA;1 is
not corrupt and will not be restored
Example 17
The following example demonstrates the behavior of the RMU
Restore command when the Just_Corrupt qualifier is used both
globally and locally. The global use of the Just_Corrupt
qualifier overrides any local use of the qualifier. In this case,
the RMU restore process restores the JOBS, SALARY_HISTORY, and
DEPARTMENTS storage areas only if they contain corruptions;
otherwise an informational message is returned. Assume, like the
examples, that only the JOBS and DEPARTMENTS storage areas
contain corruptions:
$ RMU/RESTORE/JUST_CORRUPT/AREA MF_PERSONNEL.RBF SALARY_HISTORY, -
_$ JOBS/JUST_CORRUPT, DEPARTMENTS/JUST_CORRUPT
%RMU-I-RESTXT_20, Storage area DISK1:[AREA]SALARY_HISTORY.RDA;1 is
not corrupt and will not be restored
28.7 – Examples (Cont.)
Example 18
The following example demonstrates the use of the Directory,
File, and Root qualifiers. In this example:
o The default directory is specified as DISK2:[DIR].
o The target directory and file name for the database root file
is specified with the Root qualifier. The target directory
specified with the Root qualifier overrides the default
directory specified with the Directory qualifier. Thus, the
RMU restore process restores the database root in DISK3:[ROOT]
and names it COPYRDB.RDB.
o The target directory for the EMPIDS_MID storage area is
DISK4:[FILE]. The RMU restore process restores EMPIDS_MID
in DISK4:[FILE].
o The target file name for the EMPIDS_LOW storage area is
EMPIDS. Thus, the RMU restore process restores the EMPIDS_LOW
storage area to the DISK2:[DIR] default directory (specified
with the Directory qualifier), and names the file EMPIDS.RDA.
o The target for the EMPIDS_LOW snapshot file is
DISK5:[SNAP]EMPIDS.SNP. Thus, the RMU restore process restores
the EMPIDS_LOW snapshot file to DISK5:[SNAP]EMPIDS.SNP.
o All the other storage area files and snapshot files in the mf_
personnel database are restored in DISK2:[DIR]; the file names
for these storage areas and snapshot files remain unchanged.
$ RMU/RESTORE MF_PERSONNEL.RBF -
_$ /DIRECTORY=DISK2:[DIR] -
_$ /ROOT=DISK3:[ROOT]MF_PERSONNEL.RDB -
_$ EMPIDS_MID/FILE=DISK4:[FILE], -
_$ EMPIDS_LOW/FILE=EMPIDS -
_$ /SNAPSHOT=(FILE=DISK5:[SNAP]EMPIDS.SNP)
Example 19
The following example demonstrates how to restore a database
such that the newly restored database will allow read-only
transactions only. After the RMU restore process executes the
command, the database is ready for you to start Hot Standby
replication operations. See the Oracle Rdb7 and Oracle CODASYL
DBMS: Guide to Hot Standby Databases for details on starting Hot
Standby replication operations.
$ RMU/RESTORE/TRANSACTION_MODE=READ_ONLY MF_PERSONNEL.RBF
Example 20
The following example uses the Nocommit qualifier while restoring
a backup file of a database that has a structure level of V7.1 in
a V7.2 environment.
$ RMU/SHOW VERSION
Executing RMU for Oracle Rdb V7.2-00
$ RMU/RESTORE MFP71.RBF /NOCOMMIT/NOCDD/NORECOVER
%RMU-I-AIJRSTAVL, 0 after-image journals available for use
%RMU-I-AIJISOFF, after-image journaling has been disabled
%RMU-I-LOGCONVRT, database root converted to current structure level
%RMU-S-CVTDBSUC, database USER1:[80]MF_PERSONNEL.RDB;1 successfully
converted from version V7.1 to V7.2
%RMU-W-USERECCOM, Use the RMU Recover command. The journals are not
available.
$ RMU/SHOW VERSION
Executing RMU for Oracle Rdb V7.2-00
$ RMU/CONVERT/ROLLBACK MF_PERSONNEL.RDB
%RMU-I-RMUTXT_000, Executing RMU for Oracle Rdb V7.2-00
Are you satisfied with your backup of RDBVMS_USER1:[V71]MF_PERSONNEL.RDB;1
and your backup of any associated .aij files [N]? Y
%RMU-I-LOGCONVRT, database root converted to current structure level
%RMU-I-CVTROLSUC, CONVERT rolled-back for RDBVMS_USER1:[V71]MF_PERSONNEL.
RDB;1 to version V7.1
Example 21
The following example uses the Close_Wait qualifier to set the
database close mode to TIMED AUTOMATIC, specifying that the
database will be closed automatically in 10 minutes.
$ RMU/RESTORE/OPEN_MODE=AUTOMATIC/CLOSE_WAIT=10/DIR=DISK:[DIR] TEST_DB.RBF
$ RMU/DUMP/HEADER=PARAMETERS TEST_DB.RDB
Example 22
The following example demonstrates that /SNAPSHOT=(ALLOCATION=N)
is a positional qualifier. The behavior is different (local
or global) depending on the placement of the qualifier on the
command line. In the following example, it is used both globally
and locally.
MALIBU-> RMU/RESTORE/NOCDD -
/DIR=SYS$DISK:[]/SNAP=ALLO=12345 [JONES.RDB]MF_PERSONNEL_V71.RDF -
DEPARTMENTS/SNAP=ALLO=2
MALIBU-> DIR/SIZE *.SNP
Directory DBMS_USER3:[JONES.WORK]
DEPARTMENTS.SNP;1            6
EMPIDS_LOW.SNP;1         24692
EMPIDS_MID.SNP;1         24692
EMPIDS_OVER.SNP;1        24692
EMP_INFO.SNP;1           24692
JOBS.SNP;1               24692
MF_PERS_DEFAULT.SNP;1    24692
MF_PERS_SEGSTR.SNP;1     24692
SALARY_HISTORY.SNP;1     24692
Total of 9 files, 197542 blocks.
Example 23
The following example demonstrates how /SNAPSHOT=(ALLOCATION=N)
can be used to alter the parameters of the restored database from
those defined at the time of the database backup. /SNAPSHOT is
often used with /FILE: /FILE for the storage area RDA file and
/SNAPSHOT for the storage area snapshot file.
$ RMU/RESTORE MFP.RBF -
/DIRECTORY=DISK1:[DIRECTORY] -
/ROOT=DISK2:[DIRECTORY]MF_PERSONNEL.RDB -
EMPIDS_MID /FILE=DISK3:[DIRECTORY] /SNAPSHOT=(ALLOCATION=2000), -
EMPIDS_LOW /FILE=DISK3:[DIRECTORY]NEWNAME -
/SNAPSHOT=(FILE=DISK4:[DIR]NEWNAME, ALLOCATION=3000)
In this example, the root would go to one disk, EMPIDS_MID
would go to another, EMPIDS_LOW to another disk and the snap
to another disk and both snaps would be allocated the specified
number of pages. All the other snaps and RDA files would go to
where /DIRECTORY points (and the snaps would keep their original
allocation).
28.8 – Only Root
Permits you to recover more quickly from the loss of a database
root (.rdb) file by restoring only the root file. This command is
not valid for single-file databases.
28.8.1 – Description
The RMU Restore Only_Root command rebuilds only the database root
(.rdb) file from a backup file, produced earlier by an RMU Backup
command, to the condition the .rdb file was in when the backup
operation was performed. Use the command qualifiers to update
the .rdb file. The area qualifiers alter only the .rdb file, not
the storage areas themselves. Use the area qualifiers to correct
the restored backup root file so that it contains storage area
information that was updated since the last backup operation was
performed on the database. This is useful when you need to match
the root from an older backup file of your database with the area
information in the more recent backup file of your database in
order to have a usable database.
When the .rdb file is restored by itself, be sure that you
correctly set the transaction state of the database with the
Initialize_Tsns qualifier or the Set_Tsn qualifier. If the
database transaction sequence number (TSN) and commit sequence
number (CSN) are not set to the same values as those that were
in the lost .rdb file, there will be an inconsistency in the
journaling if after-image journaling is enabled. Therefore, you
cannot recover the database by using journal files created before
you used either the Initialize_Tsns qualifier or the Set_Tsn
qualifier in a restore-only-root operation.
You should set the TSN to a value equal to or greater than the
value that was in the lost .rdb file. If the TSN is set to a
lower value than the value stored in the lost database root file,
the database is corrupted, and it might return incorrect data or
result in application failures. If the number you have selected
is less than the Next CSN and Next TSN values, you will receive a
fatal error message as follows:
%RMU-F-VALLSSMIN, value (0:40) is less than minimum allowed
value (0:74) for Set_Tsn=tsn
After the set TSN and reinitialize TSN operations
complete, and after you have verified the .rdb
file, enabled after-image journaling, and the
new .aij file is created, all .aij records are based on the new
starting TSN and CSN numbers in the .rdb file.
Although Oracle Corporation recommends that your backup strategy
ensures that you maintain a current full and complete database
backup file, it is possible to restore the database from
current full by-area backup files only. This is accomplished by
restoring the root and specifying the Noupdate_Files and Noset_
Tsn qualifiers. When you specify the Noset_Tsn qualifier, the
TSN and CSN values on the restored database will be the same as
those recorded in the backup file. When you specify the Noupdate_
Files qualifier, the database root is restored but RMU Restore
Only_Root will not link that restored root to any of the area
files, nor will it create or update the snapshot (.snp) files. By
specifying the Noupdate_Files and Noset_Tsn qualifiers with the
RMU Restore Only_Root command, you can use the following strategy
to restore your database:
1. Restore the root from the most recent full by-area backup
file.
2. Restore the storage areas by applying the by-area backup files
in reverse order to their creation date.
Apply the most recent by-area backup file first and the oldest
by-area backup file last. (Be sure you do not restore any area
more than once.)
3. Recover the database by applying the after-image journal
(.aij) files.
You can recover the .aij files manually by using the RMU
Recover command. Or, if the state of your .aij files permits
it, you can allow RMU Restore Only_Root to automatically
recover the .aij files by not specifying the Norecovery
qualifier with the last RMU Restore command you issue. For
details on the automatic recovery feature of the RMU Restore
command, see the help entry for the RMU Restore command.
(The automatic recovery feature is not available for the RMU
Restore Only_Root command.)
When you use this strategy, be sure that the first RMU Restore
command after the RMU Restore Only_Root command includes the
most recent RDB$SYSTEM storage area. The RDB$SYSTEM storage area
contains the structures needed to restore the other database
storage areas. For this reason, Oracle Corporation suggests that
you back up the RDB$SYSTEM storage area in every by-area backup
operation you perform.
See Example 6 in the Examples help entry under this command for a
demonstration of this method.
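As a sketch only, the three steps above might look like the
following command sequence. The backup file, storage area, and
journal names are illustrative, not taken from this manual:
$ ! Step 1: restore only the root from the most recent full
$ ! by-area backup, without linking areas or setting the TSN.
$ RMU/RESTORE/ONLY_ROOT/NOUPDATE_FILES/NOSET_TSN BYAREA_3.RBF
$ ! Step 2: restore areas from the by-area backups, newest
$ ! first, restoring each area exactly once. The first restore
$ ! must include the most recent RDB$SYSTEM area.
$ RMU/RESTORE/AREA/NORECOVERY BYAREA_3.RBF RDB$SYSTEM,SALES
$ RMU/RESTORE/AREA/NORECOVERY BYAREA_2.RBF ACCOUNTS
$ RMU/RESTORE/AREA BYAREA_1.RBF HISTORY
$ ! Step 3: recover by applying the .aij files manually.
$ RMU/RECOVER MFP.AIJ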
Note that the database backup file must be recent: the differences
between the database and the backup file must be known, and the
number of storage areas must be unchanged since the backup file
was created. If you have moved a storage area, use the File
qualifier to show its new location and the Snapshot qualifier
to indicate the current version of the area's .snp file.
NOTE
You must perform a full and complete backup operation
on your database when the RMU Restore Only_Root command
completes. Oracle Corporation recommends that you define a
new after-image journal configuration with the RMU Restore
Only_Root command by using either the After_Journal or the
Aij_Options qualifier. This action ensures that the new
.aij file can be rolled forward in the event that another
database restore operation becomes necessary.
28.8.2 – Format
RMU/Restore/Only_Root backup-file-spec [storage-area-list]

Command Qualifiers                          Defaults

/Active_IO=max-reads                        /Active_IO=3
/[No]After_Journal=file-spec                See description
/[No]Aij_Options=journal-opts               See description
/Directory=directory-spec                   See description
/[No]Initialize_Tsns                        /Noinitialize_Tsns
/Label=(label-name-list)                    See description
/Librarian[=options]                        None
/[No]Log                                    Current DCL verify value
/[No]Media_Loader                           See description
/[No]New_Snapshots                          /Nonew_Snapshots
/Nodes_Max=number-cluster-nodes             Existing value
/Options=file-spec                          None
/[No]Rewind                                 /Norewind
/Root=root-file-spec                        Existing value
/[No]Set_Tsn=(Tsn=n,Csn=m)                  See description
/Transaction_Mode=(modes-list)              /Transaction_Mode=Current
/[No]Update_Files                           /Update_Files
/Users_Max=number-users                     Existing value

File or Area Qualifiers                     Defaults

/[No]Blocks_Per_Page=integer                /Noblocks_Per_Page
/File=file-spec                             See description
/Read_Only                                  Current value
/Read_Write                                 Current value
/Snapshot=(Allocation=n,File=file-spec)     See description
/[No]Spams                                  Current value
/Thresholds=(val1[,val2[,val3]])            Existing area file value
28.8.3 – Parameters
28.8.3.1 – backup-file-spec
A file specification for the backup file produced by a previous
RMU Backup command. The default file extension is .rbf.
Note that you cannot perform a remote restore operation on an
.rbf file that has been backed up to tape and then copied to
disk. When copying .rbf files to disk from tape, be sure to copy
them onto the system on which you will be restoring them.
Depending on whether you are performing a restore operation
from magnetic tape, disk, or multiple disks, the backup file
specification should be specified as follows:
o Restoring from magnetic tape
If you used multiple tape drives to create the backup file,
the backup-file-spec parameter must be provided with (and only
with) the first tape drive name. Additional tape drive names
must be separated from the first and subsequent tape drive
names with commas, as shown in the following example:
$ RMU/RESTORE /REWIND $111$MUA0:PERS_FULL_NOV30.RBF,$112$MUA1:
o Restoring from multiple or single disk files
If you used multiple disk files to create the backup file,
the backup-file-spec parameter must be provided with (and only
with) the first disk device name. Additional disk device names
must be separated from the first and subsequent disk device
names with commas. You must include the Disk_file qualifier.
For example:
$ RMU/RESTORE/ONLY_ROOT/DISK_FILE DISK1:[DIR1]MFP.RBF,DISK2:[DIR2], -
_$ DISK3:[DIR3]
As an alternative to listing the disk device names on the
command line (which can exceed the length limit for a
command line if you use several devices), you can specify an
options file in place of the backup-file-spec. For example:
$ RMU/RESTORE/ONLY_ROOT/DISK_FILE "@DEVICES.OPT"
The contents of devices.opt might appear as follows:
DISK1:[DIR1]MFP.RBF
DISK2:[DIR2]
DISK3:[DIR3]
The backup files referenced from such an options file are:
DISK1:[DIR1]MFP.RBF
DISK2:[DIR2]MFP01.RBF
DISK3:[DIR3]MFP02.RBF
28.8.3.2 – storage-area-list
This option is a list of storage area names from the database.
Use it in the following situations:
o When you need to change the values for thresholds with the
Thresholds qualifier or blocks per page with the Blocks_Per_
Page qualifier
o When you need to change the names or version numbers specified
with the Snapshot or the File qualifier for the restored
database
To use the storage-area-list option, specify the storage area
name, not the system file name for the storage area. By restoring
the database root only, you save the additional time normally
needed to restore all the storage areas. Place commas between
each storage area name in the list.
If the storage area parameters have changed since the file was
last backed up, the storage-area-list option updates the .rdb
file parameters so they agree with the current storage area
parameters in terms of location and file version.
28.8.4 – Command Qualifiers
28.8.4.1 – Active IO
Active_IO=max-reads
Specifies the maximum number of read operations to the backup
file that the RMU Restore Only_Root command will attempt
simultaneously. The value of the Active_IO qualifier can range
from 1 to 5. The default value is 3.
28.8.4.2 – After Journal
After_Journal=file-spec
Noafter_Journal
NOTE
This qualifier is maintained for compatibility with versions
of Oracle Rdb prior to Version 6.0. You might find it more
useful to specify the Aij_Options qualifier, unless you are
only interested in creating extensible .aij files.
Specifies how RMU Restore Only_Root is to handle after-image
journaling and .aij file creation, using the following rules:
o If you specify the After_Journal qualifier and provide a file
specification, RMU Restore Only_Root creates a new extensible
.aij file and enables journaling.
o If you specify the After_Journal qualifier but you do not
provide a file specification, RMU Restore Only_Root creates
a new extensible .aij file with the same name as the journal
that was active at the time of the backup operation.
o If you specify the Noafter_Journal qualifier, RMU Restore
Only_Root disables after-image journaling and does not create
a new .aij file. Note that if you specify the Noafter_Journal
qualifier, there will be a gap in the sequence of .aij files.
For example, suppose your database has .aij file sequence
number 1 when you back it up. If you issue an RMU Restore
Only_Root command with the Noafter qualifier, the .aij file
sequence number will be changed to 2. This means that you
cannot (and do not want to) apply the original .aij file to
the restored database (doing so would result in a sequence
mismatch).
o If you do not specify an After_Journal, Noafter_Journal, Aij_
Options, or Noaij_Options qualifier, RMU Restore Only_Root
recovers the journal state (enabled or disabled) and tries to
reuse the .aij file or files.
If you choose this option, take great care to either set the
database root TSN and CSN correctly, or create a full and
complete backup file of the database. Failure to do so might
make it impossible for you to recover your database from the
.aij file should it become necessary.
However, if the .aij file or files are not available (for
example, they have been backed up), after-image journaling is
disabled.
You cannot use the After_Journal qualifier to create fixed-size
.aij files; use the Aij_Options qualifier.
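For instance, a command of the following form (the journal file
specification is illustrative) creates a new extensible .aij file
and enables journaling for the restored root:
$ RMU/RESTORE/ONLY_ROOT/AFTER_JOURNAL=DISK1:[JNL]MFP.AIJ MF_PERSONNEL.RBF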
28.8.4.3 – Aij Options
Aij_Options=journal-opts
Noaij_Options
Specifies how RMU Restore Only_Root is to handle after-image
journaling and .aij file creation, using the following rules:
o If you specify the Aij_Options qualifier and provide a
journal-opts file, RMU Restore Only_Root enables journaling
and creates the .aij file or files you specify for the
restored database. If only one .aij file is created for the
restored database, it will be an extensible .aij file. If two
or more .aij files are created for the database copy, they
will be fixed-size .aij files (as long as at least two .aij
files are always available).
o If you specify the Aij_Options qualifier, but do not provide a
journal-opts file, RMU Restore Only_Root disables journaling
and does not create any new .aij files.
o If you specify the Noaij_Options qualifier, RMU Restore Only_
Root disables journaling and does not create any new .aij
files.
o If you do not specify an After_Journal, Noafter_Journal, Aij_
Options, or Noaij_Options qualifier, RMU Restore Only_Root
recovers the journaling state (enabled or disabled) and tries
to reuse the .aij file or files.
If you choose this option, take great care to either set the
database root TSN and CSN correctly, or create a full and
complete backup file of the database. Failure to do so might
make it impossible for you to recover your database from the
.aij file should it become necessary.
However, if the .aij file or files are not available (for
example, they have been backed up), after-image journaling is
disabled.
See Show After_Journal for information on the format of a
journal-opts-file.
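For instance, assuming a journal options file named AIJ_CONFIG.OPT
has already been prepared (the file name is illustrative), a
command of the following form enables journaling with the journals
that file defines:
$ RMU/RESTORE/ONLY_ROOT/AIJ_OPTIONS=AIJ_CONFIG.OPT MF_PERSONNEL.RBF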
28.8.4.4 – Directory
Directory=directory-spec
Specifies the default directory for the database root and the
default directory for where the root can expect to find the
database storage areas and snapshot files.
See the Usage Notes for information on how this qualifier
interacts with the Root, File, and Snapshot qualifiers and for
warnings regarding restoring database files into a directory
owned by a resource identifier.
28.8.4.5 – Initialize Tsns
Initialize_Tsns
Noinitialize_Tsns
Initializes all transaction sequence number (TSN) values for
the entire database by setting the values to zero. Each time a
transaction is initiated against a database, a TSN is issued.
The numbers are incremented sequentially over the life of the
database.
TSN and CSN values are each contained in a quadword with the
following decimal format:
high longword : low longword
The high longword can hold a maximum user value of 32,768
(2^15) and the low longword can hold a maximum user value of
4,294,967,295 (2^32). A portion of the high longword is used by
Oracle Rdb for overhead.
When you specify a TSN or CSN, you can omit the high longword and
the colon if the TSN or CSN fits in the low longword. For example,
0:444 and 444 are both valid input values.
As your next TSN value approaches the maximum value allowed,
you should initialize the TSNs. You can determine the next TSN
and next commit sequence number (CSN) values by dumping the
database root file, using the RMU Dump command with the Header
and Option=Debug qualifiers.
The Initialize_Tsns qualifier takes much more time to execute
because all TSN values in the database are set to zero, which
requires writing to every page in the database. When the database
TSNs are reset, using the Initialize_Tsns qualifier, you should
use the After_Journal qualifier or the Aij_Options qualifier and
immediately perform a full database backup operation and create
a new .aij file. This ensures continuity of journaling and the
ability to recover the database.
The default Noinitialize_Tsns qualifier does not initialize the
database TSNs.
Note that you cannot use the Initialize_Tsns qualifier with the Set_Tsn
or Noset_Tsn qualifier in the same command. This restriction is
required because Initialize_Tsns directs RMU Restore Only_Root to
reset the TSN value to zero, while Set_Tsn directs RMU Restore
Only_Root to reset the TSN to the value you have indicated,
and Noset_Tsn leaves the TSN value unchanged. Never use the
Initialize_Tsns qualifier if Replication Option for Rdb transfers
have been defined for the database. The Initialize_Tsns qualifier
does not reset the Replication Option for Rdb transfers.
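The sequence recommended above might be sketched as follows; the
journal options file and backup file names are illustrative.
The TSNs are initialized and a new journal configuration defined
in one command, then a full backup is taken immediately:
$ RMU/RESTORE/ONLY_ROOT/INITIALIZE_TSNS/AIJ_OPTIONS=AIJ_CONFIG.OPT -
_$ MF_PERSONNEL.RBF
$ RMU/BACKUP MF_PERSONNEL.RDB MFP_FULL.RBF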
28.8.4.6 – Label
Label=(label-name-list)
Specifies the 1- to 6-character string with which the volumes
of the backup file have been labeled. The Label qualifier is
applicable only to tape volumes. You must specify one or more
label names when you use the Label qualifier.
You can specify a list of tape labels for multiple tapes. If you
list multiple tape label names, separate the names with commas,
and enclose the list of names within parentheses.
In a normal restore operation, the Label qualifier you specify
with the RMU Restore Only_Root command should be the same Label
qualifier you specified with the RMU Backup command you used to
back up your database.
The Label qualifier can be used with indirect file references.
See the Indirect-Command-Files help entry for more information.
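For example, if the database was backed up to two labeled tape
volumes, the restore-only-root command would repeat the same
labels (the label names and device are illustrative):
$ RMU/RESTORE/ONLY_ROOT/REWIND/LABEL=(TAPE01,TAPE02) $111$MUA0:MFP.RBF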
28.8.4.7 – Librarian
Librarian=options
Use the Librarian qualifier to restore files from data archiving
software applications that support the Oracle Media Management
interface. The file name specified on the command line identifies
the stream of data to be retrieved from the Librarian utility. If
you supply a device specification or a version number it will be
ignored.
Oracle RMU supports retrieval using the Librarian qualifier only
for data that has been previously stored by Oracle RMU using the
Librarian qualifier.
The Librarian qualifier accepts the following options:
o Trace_file=file-specification
The Librarian utility writes trace data to the specified file.
o Level_Trace=n
Use this option as a debugging tool to specify the level of
trace data written by the Librarian utility. You can use a
pre-determined value of 0, 1, or 2, or a higher value defined
by the Librarian utility. The pre-determined values are:
- Level 0 traces all error conditions. This is the default.
- Level 1 traces the entry and exit from each Librarian
function.
- Level 2 traces the entry and exit from each Librarian
function, the value of all function parameters, and the
first 32 bytes of each read/write buffer, in hexadecimal.
o Logical_Names=(logical_name=equivalence-value,...)
You can use this option to specify a list of process logical
names that the Librarian utility can use to specify catalogs
or archives where Oracle Rdb backup files are stored,
Librarian debug logical names, and so on. See the specific
Librarian documentation for the definition of logical names.
The list of process logical names is defined by Oracle RMU
prior to the start of any Oracle RMU command that accesses the
Librarian application.
The following OpenVMS logical names must be defined for use with
a Librarian utility before you execute an Oracle RMU backup or
restore operation. Do not use the Logical_Names option provided
with the Librarian qualifier to define these logical names.
o RMU$LIBRARIAN_PATH
This logical name must be defined so that the shareable
Librarian image can be loaded and called by Oracle RMU backup
and restore operations. The translation must include the file
type (for example, .exe), and must not include a version
number. The shareable Librarian image must be an installed
(known) image. See the Librarian implementation documentation
for the name and location of this image and how it should be
installed.
o RMU$DEBUG_SBT
This logical name is not required. If it is defined, Oracle
RMU will display debug tracing information messages from
modules that make calls to the Librarian shareable image.
You cannot use device specific qualifiers such as Rewind,
Density, or Label with the Librarian qualifier because the
Librarian utility handles the storage media, not Oracle RMU.
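As a sketch, the logical name might be defined and the qualifier
used as follows. The image file specification and backup stream
name are illustrative; see your Librarian vendor documentation for
the actual image name and installation requirements:
$ DEFINE/SYSTEM RMU$LIBRARIAN_PATH DISK1:[LIB]LIBRARIAN_SHR.EXE
$ RMU/RESTORE/ONLY_ROOT/LIBRARIAN=(TRACE_FILE=TRACE.LOG,LEVEL_TRACE=1) -
_$ MFP_BACKUP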
28.8.4.8 – Log
Log
Nolog
Specifies whether the processing of the command is reported
to SYS$OUTPUT. Specify the Log qualifier to request that the
progress of the restore operation be written to SYS$OUTPUT and
the Nolog qualifier to suppress this report. If you specify
neither, the default is the current setting of the DCL verify
switch. (The DCL SET VERIFY command controls the DCL verify
switch.)
28.8.4.9 – Media Loader
Media_Loader
Nomedia_Loader
Use the Media_Loader qualifier to specify that the tape device
from which the backup file is being read has a loader or stacker.
Use the Nomedia_Loader qualifier to specify that the tape device
does not have a loader or stacker.
By default, if a tape device has a loader or stacker, RMU Restore
Only_Root should recognize this fact. However, occasionally RMU
Restore Only_Root does not recognize that a tape device has a
loader or stacker. Therefore, when the first tape has been read,
RMU Restore Only_Root issues a request to the operator for the
next tape, instead of requesting the next tape from the loader
or stacker. Similarly, sometimes RMU Restore Only_Root behaves
as though a tape device has a loader or stacker when actually it
does not.
If you find that RMU Restore Only_Root is not recognizing that
your tape device has a loader or stacker, specify the Media_
Loader qualifier. If you find that RMU Restore Only_Root expects
a loader or stacker when it should not, specify the Nomedia_
Loader qualifier.
28.8.4.10 – New Snapshots
New_Snapshots
Nonew_Snapshots
Allows you to specify whether to create new snapshot (.snp) files
as part of a Restore Only_Root operation.
The default is the Nonew_Snapshots qualifier, which causes the
command to initialize the existing .snp files.
If you specify the New_Snapshots qualifier, the command creates
and initializes new .snp files. When you specify the New_
Snapshots qualifier, you should either delete the existing
.snp files before the restore operation or purge the .snp files
afterwards.
28.8.4.11 – Nodes Max
Nodes_Max=number-cluster-nodes
Specifies a new upper limit on the number of VMScluster nodes
from which users can access the restored database. The Nodes_Max
qualifier will accept values between 1 and 96 VMScluster nodes.
The actual maximum is the highest number of VMScluster nodes
possible in the current version of OpenVMS. The default value is
the limit defined for the database before it was backed up.
28.8.4.12 – Options
Options=file-spec
Specifies the options file that contains storage area names,
followed by the storage area qualifiers that you want applied to
that storage area.
You can direct RMU Restore Only_Root to create an options file
for use with this qualifier by specifying the Restore_Options
qualifier with the RMU Backup, RMU Dump, and RMU Dump Backup
commands. See Backup Database, Dump Database, and Dump Backup_
File for details.
If you create your own options file, do not separate the storage
area names with commas. Instead, put each storage area name on
a separate line in the file. The storage area qualifiers that
you can include in the options file are: Blocks_Per_Page, File,
Snapshot, and Thresholds. You can use the DCL line continuation
character, a hyphen (-), or the comment character (!) in the
options file. The default file extension is .opt. See Example 5
in the Examples help entry under this command.
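A hand-written options file might look like the following sketch;
the area names, locations, and threshold values are illustrative.
Each storage area is on its own line, followed by its qualifiers,
with the hyphen used for continuation and the exclamation point
for comments:
! AREAS.OPT - illustrative options file
EMPIDS_LOW /FILE=DISK3:[AREAS]EMPIDS_LOW -
           /THRESHOLDS=(65,75,80)
EMPIDS_MID /SNAPSHOT=(ALLOCATION=500)
The file would then be named on the command line:
$ RMU/RESTORE/ONLY_ROOT/OPTIONS=AREAS.OPT MF_PERSONNEL.RBF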
28.8.4.13 – Rewind
Rewind
Norewind
Specifies whether the tape that contains the backup file will
be rewound before processing begins. The Norewind qualifier, the
default, causes the search for the backup file to begin at the
current tape position.
The Rewind and Norewind qualifiers are applicable only to tape
devices. RMU Restore Only_Root returns an error message if you
use these qualifiers and the device is not a tape device.
28.8.4.14 – Root
Root=root-file-spec
Requests that the database root (.rdb) be restored to the
specified location.
See the Usage Notes for information on how this qualifier
interacts with the Directory, File, and Snapshot qualifiers and
for warnings regarding restoring database files into a directory
owned by a resource identifier.
The Root qualifier is only meaningful when used with a multifile
database.
28.8.4.15 – Set Tsn
Set_Tsn=(Tsn=n, Csn=m)
Noset_Tsn
The Set_Tsn qualifier sets the database transaction sequence
number (TSN) and commit sequence number (CSN) to the specified
values. The correct value can be extracted from the original .rdb
file if it is still accessible, or from the last .aij file if one
is available. If that fails, you can use a TSN value larger than
the maximum number of transactions applied to the database since
it was created, or since TSNs were last initialized.
The TSN and CSN values do not have to be the same value. However,
you need to choose new values that are greater than the last
values assigned to a transaction. Set_Tsn values are expected
to be multiples of eight. If you specify a value that is not a
multiple of eight, RMU Restore Only_Root assigns the next highest
value that is a multiple of eight. (For example, if you specify
Set_Tsn=(Tsn=90, Csn=90), RMU Restore Only_Root assigns the Next
TSN a value of 96.)
The default value for the Set_Tsn qualifier is the TSN and CSN
values stored in the backup file plus 1,000,000 when TSNs are not
being initialized. For most database applications, the new TSN
and CSN values should be larger than the number of transactions
committed since the database was last backed up. If the default
value is not large enough, set the TSN and CSN values higher than
this default increment plus the value in the backup file. You can
determine the next TSN and CSN values by dumping the .rdb file,
using the Option=Debug qualifier.
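Choosing a new TSN under these rules can be sketched as follows
(the helper and its arguments are illustrative, not part of RMU):

```python
DEFAULT_INCREMENT = 1_000_000  # documented default added to the backup value

def choose_new_tsn(backup_tsn, committed_since_backup):
    """Pick a new TSN: the default is the backup value plus
    1,000,000; if more transactions than that have committed since
    the backup was taken, a larger value must be chosen."""
    return max(backup_tsn + DEFAULT_INCREMENT,
               backup_tsn + committed_since_backup + 1)
```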
The TSN and CSN values are each contained in a quadword with the
following decimal format:
high longword : low longword
The high longword can hold a maximum user value of 32,768 (2^15)
and the low longword can hold a maximum user value of
4,294,967,295 (2^32 - 1). A portion of the high longword is used by
Oracle Rdb for overhead.
When you specify a TSN or CSN, you can omit the high longword and
the colon if the TSN fits in the low longword. For example, 0:444
and 444 are both valid TSN input values.
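The input format can be illustrated with a small parser sketch
(the function name is illustrative, not an RMU interface):

```python
def parse_quadword(text):
    """Parse a TSN or CSN given as 'high:low' or as a plain 'low'
    value; the high longword defaults to zero when omitted."""
    high, _, low = text.rpartition(":")
    high = int(high) if high else 0
    low = int(low)
    if low > 4_294_967_295 or high > 32_768:
        raise ValueError("longword value out of range")
    return (high << 32) | low

print(parse_quadword("0:444") == parse_quadword("444"))  # True
```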
The Noset_Tsn qualifier specifies that the root will be restored
with the same TSN state as was recorded in the backup file.
When you use the Noset_Tsn qualifier in conjunction with the
Noupdate_Files qualifier, you can use a backup strategy that uses
recent by-area full backup files in place of a recent full and
complete backup file of the entire database. See Example 6 in the
Examples help entry under this command.
Note that you cannot use the Initialize_Tsns qualifier with the
Set_Tsn or Noset_Tsn qualifier in the same command. This restriction is
required because Initialize_Tsns directs RMU Restore Only_Root
to reset the TSN value to zero, while Set_Tsn directs RMU Restore
Only_Root to reset the TSN to the value you have indicated, and
Noset_Tsn leaves the TSN value unchanged.
28.8.4.16 – Transaction Mode=(mode-list)
Transaction_Mode=(mode-list)
Sets the allowable transaction modes for the database root file
created by the restore operation. The mode-list can include one
or more of the following transaction modes:
o All - Enables all transaction modes
o Current - Enables all transaction modes that are set for the
source database. This is the default transaction mode.
o None - Disables all transaction modes
o [No]Batch_Update
o [No]Read_Only
o [No]Exclusive
o [No]Exclusive_Read
o [No]Exclusive_Write
o [No]Protected
o [No]Protected_Read
o [No]Protected_Write
o [No]Read_Write
o [No]Shared
o [No]Shared_Read
o [No]Shared_Write
If you specify more than one transaction mode in the mode-list,
enclose the list in parentheses and separate the transaction
modes from one another with commas. Note the following:
o When you specify a negated transaction mode, for example
Noexclusive_Write, it indicates that exclusive write is not
an allowable access mode for the copied database.
o If you specify the Shared, Exclusive, or Protected transaction
mode, Oracle RMU assumes you are referring to both reading and
writing in that transaction mode.
o No mode is enabled unless you add that mode to the list, or
you use the All option to enable all transaction modes.
o You can list one transaction mode that enables or disables a
particular mode followed by another that does the opposite.
For example, Transaction_Mode=(Noshared_Write, Shared) is
ambiguous because the first value disables Shared_Write access
and the second value enables Shared_Write access. Oracle
RMU resolves the ambiguity by first enabling the modes as
specified in the mode-list and then disabling the modes as
specified in the mode-list. The order of items in the list is
irrelevant. In the example presented previously, Shared_Read
is enabled and Shared_Write is disabled.
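The enable-then-disable resolution described above can be
sketched as follows; this is a simplified model covering only the
[No]mode keywords, not the All, None, or Current options:

```python
def resolve_modes(mode_list):
    """Resolve a Transaction_Mode list: expand Shared, Exclusive,
    and Protected into their _Read and _Write forms, enable all
    positive entries first, then disable all negated entries, so
    the order of items in the list is irrelevant."""
    def expand(name):
        if name in ("shared", "exclusive", "protected"):
            return [name + "_read", name + "_write"]
        return [name]

    positives, negatives = [], []
    for mode in (m.lower() for m in mode_list):
        if mode.startswith("no"):
            negatives.extend(expand(mode[2:]))
        else:
            positives.extend(expand(mode))

    enabled = set(positives)
    enabled.difference_update(negatives)
    return enabled

# The documented example: Shared_Read enabled, Shared_Write disabled
print(resolve_modes(["Noshared_Write", "Shared"]))  # {'shared_read'}
```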
28.8.4.17 – Update Files
Update_Files
Noupdate_Files
The Update_Files qualifier specifies that the root will be
restored, and RMU Restore Only_Root will attempt to link that
restored root to the area files. In addition, the snapshot (.snp)
file will be updated or created. This is the default.
The Noupdate_Files qualifier specifies that the restore operation
will restore the root, but it will not link that restored root
to any of the area files, nor will it create or update the .snp
files.
When you use the Noupdate_Files qualifier in conjunction with
the Noset_Tsn qualifier, you can use a backup strategy that uses
recent by-area full backup files in place of a recent full and
complete backup file of the entire database. See Example 6 in the
Examples help entry under this command.
28.8.4.18 – Users Max
Users_Max=number-users
Specifies a new upper limit on the number of users that can
simultaneously access the restored database. The valid range is
between 1 and 2032 users. The default value is the value defined
for the database before it was backed up.
28.8.5 – File or Area Qualifiers
NOTE
Use these qualifiers to reconcile the information in the
database root file with the storage area files on disk.
These values can get out of synchronization when changes
have been made to storage areas or snapshot files after the
backup from which you are restoring the database root file
was created.
Setting these parameters updates the data in the root file
only; it does not change the attributes of the storage areas
or snapshot files themselves.
28.8.5.1 – Blocks Per Page
Blocks_Per_Page=integer
Noblocks_Per_Page
Updates the database root file with the number of blocks per
page for the storage area. Use this qualifier to update the root
when the blocks per page for a storage area has changed since
the backup file from which you are restoring was created. This
qualifier does not change the page size of a storage area itself;
its purpose is to update the database root file with corrected
information.
If you use the default, the Noblocks_Per_Page qualifier, RMU
Restore Only_Root takes the page size for the storage area from
the page size specified for the database you backed up. This is a
positional qualifier. This qualifier conflicts with storage areas
that have a uniform page format.
28.8.5.2 – File
File=file-spec
Updates the database root file with the file specification
for the storage-area-name parameter it qualifies. Use this
qualifier to update the root when the file specification for a
storage area has changed since the backup file from which you are
restoring the root was created. (For example, if you have used
the RMU Move_Area command since the backup file was created.)
This qualifier does not change the file specification of the
storage area it qualifies; its purpose is to update the database
root file with corrected information. When you specify the File
qualifier, you must supply a file name.
See the Usage Notes for information on how this qualifier
interacts with the Root, Snapshot, and Directory qualifiers.
This qualifier is not valid for single-file databases. This is a
positional qualifier.
28.8.5.3 – Read Only
Updates the database root file to reflect the read-only attribute
for the storage area it qualifies. Use this qualifier to update
the root when the read/write or read-only attribute has changed
since the backup file from which you are restoring was created.
This qualifier does not change the attribute of the storage area
it qualifies; its purpose is to update the database root file
with corrected information.
If you do not specify the Read_Only or the Read_Write qualifier,
the storage area is restored with the read/write attribute that
was in effect when the database was backed up.
28.8.5.4 – Read Write
Updates the database root file to reflect the read/write
attribute for the storage area it qualifies. Use this qualifier
to update the root when the read/write or read-only attribute
has changed since the backup file from which you are restoring
was created. This qualifier does not change the attribute of the
storage area it qualifies; its purpose is to update the database
root file with corrected information.
If you do not specify the Read_Only or the Read_Write qualifier,
the storage area is restored with the read/write attribute that
was in effect when the database was backed up.
28.8.5.5 – Snapshot
Snapshot=(Allocation=n,File=file-spec)
Updates the database root file to reflect the snapshot allocation
or snapshot file specification (or both) for the area it
qualifies. Use this qualifier to update the root when the
snapshot attributes have changed since the backup file from which
you are restoring the database root was created. This qualifier
does not change the attributes of the snapshot file it qualifies;
its purpose is to update the database root file with corrected
information.
See the Usage Notes for information on how this qualifier
interacts with the Root, File, and Directory qualifiers.
The Snapshot qualifier is a positional qualifier.
When you do not specify the Snapshot qualifier, RMU Restore Only_
Root restores snapshot areas according to the information stored
in the backup file.
28.8.5.6 – Spams
Spams
Nospams
Updates the database root file to reflect the space area
management (SPAM) information for the storage areas in the
storage-area-list. Use this qualifier when the setting of SPAM
pages (enabled or disabled) has changed since the backup file
from which you are restoring the root was created. This qualifier
does not change the attributes of the storage area it qualifies;
its purpose is to update the database root file with corrected
information.
Use the Spams qualifier to update the root file information
to indicate that SPAM pages are enabled for the storage areas
qualified; use the Nospams qualifier to update the root file
information to indicate that SPAM pages are disabled for the
storage areas qualified. The default is to leave the attribute
unchanged from the setting recorded in the backup file. This is a
positional qualifier.
28.8.5.7 – Thresholds
Thresholds=(val1[,val2[,val3]])
Updates the database root file to reflect the threshold
information for the storage areas in the storage-area-list. Use
this qualifier when the threshold values have changed since the
backup file from which you are restoring the root was created.
This qualifier does not change the attributes of the storage area
it qualifies; its purpose is to update the database root file
with corrected information.
This is a positional qualifier.
The Thresholds qualifier applies only to storage areas with a
mixed page format.
If you do not use the Thresholds qualifier with the RMU Restore
Only_Root command, Oracle Rdb uses the storage area's thresholds
as recorded in the backup file.
See the Oracle Rdb7 Guide to Database Performance and Tuning for
more information on SPAM thresholds.
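As an illustration of how three ascending threshold percentages
partition a mixed-format page's fullness into bands (a conceptual
sketch, not RMU's implementation):

```python
def threshold_band(percent_full, thresholds):
    """Return which band a mixed-format page's fullness falls in:
    0 = below val1, 1 = at or above val1, up to 3 = at or above
    val3."""
    band = 0
    for limit in sorted(thresholds):
        if percent_full >= limit:
            band += 1
    return band

print(threshold_band(70, (65, 75, 80)))  # 1
```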
28.8.6 – Usage Notes
o To use the RMU Restore Only_Root command for a database, you
must have the RMU$RESTORE privilege in the root file access
control list (ACL) for the database or the OpenVMS SYSPRV or
BYPASS privilege.
o The RMU Restore Only_Root command provides two qualifiers,
Directory and Root, that allow you to specify the target for
the restored database root file. In addition, the Directory,
File, and Snapshot file qualifiers allow you to specify a
target for updates to the database root for the storage
area and snapshot file locations. The target can be just a
directory, just a file name, or a directory and file name.
If you use all or some of these qualifiers, apply them as
follows:
- Use the Root qualifier to indicate the target for the
restored database root file.
- Use local application of the File qualifier to specify the
current location of a storage area file if its location
has changed since the database was backed up. The storage
area is not affected by this qualifier. This qualifier
updates the location of the storage area as recorded in the
database root file.
- Use local application of the Snapshot qualifier to specify
the current location of a snapshot file if its location
has changed since the database was backed up. The snapshot
file is not affected by this qualifier. This qualifier
updates the location of the snapshot file as recorded in
the database root file.
- Use the Directory qualifier to specify a default target
directory for the root file and as a default directory
for where the storage areas and snapshot files currently
reside. The default target directory is where the database
root file is restored if a directory specification is not
specified with the Root qualifier. The default directory
for the storage area and snapshot files is the directory
specification with which the root file is updated if these
files are not qualified with the Root, File, or Snapshot
qualifier. It is also the default directory with which the
Root file is updated for files qualified with the Root,
File, or Snapshot qualifier if these qualifiers do not
include a directory specification.
Note the following when using these qualifiers:
- Global application of the File qualifier when the target
specification includes a file name causes RMU Restore Only_
Root to update the file name recorded in the database root
file for all storage areas to be the same file name.
- Global application of the Snapshot qualifier when the
target specification includes a file name causes RMU
Restore Only_Root to update the file name recorded in the
database root file for all snapshot files to be the same
file name.
- Specifying a file name or extension with the Directory
qualifier is permitted, but causes RMU Restore Only_Root to
restore the database root file to the named directory and
file and update the file name recorded in the database root
file for all the storage areas and snapshot files to be the
same directory and file specification.
o When you restore a database root into a directory owned by
a resource identifier, the ACE for the directory is applied
to the database root file ACL first, and then the Oracle RMU
ACE is added. This method is employed to prevent database
users from overriding OpenVMS file security. However, this can
result in a database that you consider yours but that you do not
have the Oracle RMU privileges needed to access. See the Oracle
Rdb Guide to Database Maintenance for details.
o Only the database parameter values and the storage area
parameter values for which there are qualifiers can be updated
in the database root (.rdb) file using the restore-only-root
operation. All other database and storage area parameter
values that have changed since the database was last backed
up must be reapplied to the .rdb file using the SQL ALTER
DATABASE statement.
o There are no restrictions on the use of the Nospams qualifier
option with storage areas that have a mixed page format,
but the use of the Nospams qualifier typically causes severe
performance degradation. The Nospams qualifier is useful only
where updates are rare and batched, and access is primarily by
database key (dbkey).
o You must set both TSN and CSN values at the same time. You
cannot set the TSN value lower than the CSN value; however,
you can set the TSN value higher than the CSN value.
o The RMU Restore Only_Root command cannot be used if any
storage area has been extended since the backup operation
was done. You can use the RMU Dump Backup command with the
Option=Root qualifier to determine if this is the case.
28.8.7 – Examples
Example 1
To prevent corruption of your databases, check your CSN and TSN
values and set them to zero when they approach the maximum.
First, enter an RMU Dump command to display the next CSN and
next TSN values:
$ RMU/DUMP/HEADER=(SEQUENCE_NUMBERS) MF_PERSONNEL
.
.
.
Sequence Numbers...
- Transaction sequence number
Next number is 0:256
Group size is 0:32
- Commit sequence number
Next number is 0:256
Group size is 0:32
If the next CSN and the next TSN values are approaching the
maximum number allowed, you must perform the following operations
to initialize all TSN and CSN values to the value zero in your
database. The operation might take some time to execute as it
writes to every page in the database.
First, create a backup file for the database. Then restore
the database and initialize the CSN and TSN values with the
Initialize_Tsns qualifier. Then, enter an RMU Dump command again
to examine the next CSN and next TSN values. This example shows
that both values have been set to zero. If you displayed the
database pages, you would also notice that all TSN and CSN values
are set to zero.
$ RMU/BACKUP MF_PERSONNEL MF_PER_124.RBF
$ RMU/RESTORE/ONLY_ROOT /INITIALIZE_TSNS MF_PER_124.RBF
$ RMU/DUMP/HEADER=(SEQUENCE_NUMBERS) MF_PERSONNEL
.
.
.
Sequence Numbers...
- Transaction sequence number
Next number is 0:0
Group size is 0:32
- Commit sequence number
Next number is 0:0
Group size is 0:32
Example 2
Perform the following to set the TSN and CSN values to a number
that you select, one that is greater than or equal to the next
CSN and next TSN values. If the number you have selected
is less than the next CSN and next TSN values recorded in the
database header, you receive an error as follows:
$ RMU/RESTORE/ONLY_ROOT/SET_TSN=(TSN=40,CSN=40)
_$ MF_PERSONNEL.RBF
%RMU-F-TSNLSSMIN, value (0:40) is less than minimum
allowed value (0:224) for /SET_TSN=TSN
%RMU-F-FTL_RSTR, Fatal error for RESTORE operation
at 18-JUN-1997 16:59:19.32
Enter a number equal to or greater than the next CSN and next TSN
values recorded in the database header:
$ RMU/RESTORE/ONLY_ROOT/SET_TSN=(TSN=274,CSN=274) -
_$ MF_PERSONNEL.RBF
Enter an RMU Dump command to see the next CSN and next TSN
values:
$ RMU/DUMP/HEADER=(SEQUENCE_NUMBERS) MF_PERSONNEL
.
.
.
Sequence Numbers...
- Transaction sequence number
Next number is 0:288
Group size is 0:32
- Commit sequence number
Next number is 0:288
Group size is 0:32
- Database bind sequence number
Next number is 0:288
Example 3
The following RMU Restore Only_Root command restores the database
root file from the database backup file (.rbf) to another device:
$ RMU/RESTORE/ONLY_ROOT/ROOT=DXXV9:[BIGLER.TESTING]MF_PERSONNEL -
_$ MF_PERSONNEL_BACKUP.RBF
The following DIRECTORY command confirms that the MF_
PERSONNEL.RDB file was restored in the specified directory:
$ DIRECTORY DXXV9:[BIGLER.TESTING]MF_PERSONNEL.RDB
Directory DXXV9:[BIGLER.TESTING]
MF_PERSONNEL.RDB;1 21-JAN-1991 14:37:36.87
Total of 1 file.
Example 4
Use the File=file-spec qualifier to update the .rdb file with a
storage area's new location. If you have moved a storage area to
a new location, use the File qualifier to show its new location
and the Snapshot qualifier to indicate the current version of
the area's snapshot (.snp) file. Enter the following RMU commands
to execute a series of operations that use the File and Snapshot
qualifiers in a restore-only-root operation to update the .rdb
file with new information since the database was last backed up.
Back up the database file:
$ RMU/BACKUP MF_PERSONNEL MFPERS_122.RBF
Move the area to another directory:
$ RMU/MOVE_AREA MF_PERSONNEL JOBS -
_$ /FILE=[BIGLER.MFTEST.TEST1]JOBS.RDA
With the RMU Restore Only_Root command, give the area name, and
specify both the storage area file specification and its new
location. Also specify the snapshot (.snp) file with its correct
version. Note that .snp file version numbers increment with the
RMU Move_Area command.
$ RMU/RESTORE/ONLY_ROOT MFPERS_122.RBF JOBS -
_$ /FILE=[BIGLER.MFTEST.TEST1]JOBS.RDA -
_$ /SNAPSHOT=(FILE=[BIGLER.V41MFTEST]JOBS.SNP;2)
Display the .rdb file header and note that the file is correctly
updated.
The dump of the database root file lists these file
specifications:
$ RMU/DUMP/HEADER MF_PERSONNEL
DXXV9:[BIGLER.MFTEST.TEST1]JOBS.RDA;1
DXXV9:[BIGLER.MFTEST]JOBS.SNP;2
Verify the .rdb file to be certain that it has been properly
and completely updated relative to the files and their version
numbers that comprise the database.
$ RMU/VERIFY/ROOT MF_PERSONNEL
Example 5
The following command achieves the same results as the RMU
Restore Only_Root command in Example 4, but uses an options file
to specify the current location of the JOBS storage area and the
associated .snp file.
$ RMU/RESTORE/ONLY_ROOT MFPERS_122.RBF -
_$ JOBS/OPTIONS=OPTIONS_FILE.OPT
$ !
$ TYPE OPTIONS_FILE.OPT
JOBS /FILE=[BIGLER.V41MFTEST.TEST1]JOBS.RDA -
/SNAPSHOT=(FILE=[BIGLER.V41MFTEST]JOBS.SNP)
Example 6
The following example demonstrates the use of the Noset_Tsn
qualifier and the Noupdate_Files qualifier to restore a database
using by-area backup files. In addition, it demonstrates the
automatic recovery feature of the RMU Restore command.
$ !
$ SET DEFAULT DISK1:[USER]
$ !
$ ! Create .aij files for the database. Because three .aij files are
$ ! created, fixed-size after-image journaling will be used.
$ !
$ RMU/SET AFTER_JOURNAL/ENABLE/RESERVE=4 -
_$ /ADD=(name=AIJ1, FILE=DISK2:[CORP]AIJ_ONE) -
_$ /ADD=(name=AIJ2, FILE=DISK2:[CORP]AIJ_TWO) -
_$ /ADD=(NAME=AIJ3, FILE=DISK2:[CORP]AIJ_THREE) -
_$ MF_PERSONNEL
%RMU-W-DOFULLBCK, full database backup should be done to
ensure future recovery
$ !
$ !
$ ! For the purposes of this example, assume the backup operation
$ ! recommended in the preceding warning message is done, but
$ ! that the time between this backup operation and the following
$ ! operations is several months so that this backup file is too
$ ! old to use in an efficient restore operation.
$ !
$ ! Update the DEPARTMENTS table.
$ !
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> --
SQL> -- On Monday, insert a new row in the DEPARTMENTS table. The
SQL> -- new row is stored in the DEPARTMENTS storage area.
SQL> --
SQL> INSERT INTO DEPARTMENTS
cont> (DEPARTMENT_CODE, DEPARTMENT_NAME, MANAGER_ID,
cont> BUDGET_PROJECTED, BUDGET_ACTUAL)
cont> VALUES ('WLNS', 'Wellness Center', '00188', 0, 0);
1 row inserted
SQL>
SQL> COMMIT;
SQL> DISCONNECT DEFAULT;
SQL> EXIT
$ !
$ ! Perform a by-area backup operation, including half of the
$ ! storage areas from the mf_personnel database.
$ !
$ RMU/BACKUP/INCLUDE=(RDB$SYSTEM, EMPIDS_LOW, EMPIDS_MID, -
_$ EMPIDS_OVER, DEPARTMENTS) MF_PERSONNEL -
_$ DISK3:[BACKUP]MONDAY_FULL.RBF
%RMU-I-NOTALLARE, Not all areas will be included in
this backup file
$ !
$ ! Update the SALARY_HISTORY table.
$ !
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> --
SQL> -- On Tuesday, one row is updated in the
SQL> -- SALARY_HISTORY storage area.
SQL> --
SQL> UPDATE SALARY_HISTORY
cont> SET SALARY_END ='20-JUL-1993 00:00:00.00'
cont> WHERE SALARY_START='14-JAN-1983 00:00:00.00'
cont> AND EMPLOYEE_ID = '00164';
1 row updated
SQL> COMMIT;
SQL> DISCONNECT DEFAULT;
SQL> EXIT
$ !
$ ! On Tuesday, back up the other half of the storage areas.
$ !
$ RMU/BACKUP/INCLUDE=(SALARY_HISTORY, JOBS, EMP_INFO, -
_$ MF_PERS_SEGSTR, RDB$SYSTEM) MF_PERSONNEL -
_$ DISK3:[BACKUP]TUESDAY_FULL.RBF
%RMU-I-NOTALLARE, Not all areas will be included in this
backup file
$ !
$ ! On Wednesday, perform additional updates.
$ !
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> --
SQL> -- Update another row in the SALARY_HISTORY table:
SQL> UPDATE SALARY_HISTORY
cont> SET SALARY_START ='23-SEP-1991 00:00:00.00'
cont> WHERE SALARY_START='21-SEP-1981 00:00:00.00'
cont> AND EMPLOYEE_ID = '00164';
1 row updated
SQL> COMMIT;
SQL> DISCONNECT DEFAULT;
SQL> EXIT
$ !
$ ! Assume the database is lost on Wednesday.
$ !
$ ! Restore the database root from the latest full-area backup file.
$ !
$ RMU/RESTORE/ONLY_ROOT/NOUPDATE_FILES/NOSET_TSN -
_$ DISK3:[BACKUP]TUESDAY_FULL.RBF/LOG
%RMU-I-AIJRSTBEG, restoring after-image journal "state" information
%RMU-I-AIJRSTJRN, restoring journal "AIJ1" information
%RMU-I-AIJRSTSEQ, journal sequence number is "0"
%RMU-I-AIJRSTSUC, journal "AIJ1" successfully restored from
file "DISK2:[CORP]AIJ_ONE.AIJ;1"
%RMU-I-AIJRSTJRN, restoring journal "AIJ2" information
%RMU-I-AIJRSTNMD, journal has not yet been modified
%RMU-I-AIJRSTSUC, journal "AIJ2" successfully restored from
file "DISK2:[CORP]AIJ_TWO.AIJ;1"
%RMU-I-AIJRSTJRN, restoring journal "AIJ3" information
%RMU-I-AIJRSTNMD, journal has not yet been modified
%RMU-I-AIJRSTSUC, journal "AIJ3" successfully restored from
file "DISK2:[CORP]AIJ_THREE.AIJ;1"
%RMU-I-AIJRSTEND, after-image journal "state" restoration complete
%RMU-I-RESTXT_00, Restored root file
DISK1:[USER]MF_PERSONNEL.RDB;1
%RMU-I-AIJRECBEG, recovering after-image journal "state" information
%RMU-I-AIJRSTAVL, 3 after-image journals available for use
%RMU-I-AIJRSTMOD, 1 after-image journal marked as "modified"
%RMU-I-LOGMODSTR, activated after-image journal "AIJ2"
%RMU-I-AIJISON, after-image journaling has been enabled
%RMU-W-DOFULLBCK, full database backup should be done to
ensure future recovery
%RMU-I-AIJRECEND, after-image journal "state" recovery complete
$ !
$ ! Restore the database areas, starting with the most recent
$ ! full-area backup file. (If the RDB$SYSTEM area is not in the
$ ! most recent full-area backup file, however, it must be restored
$ ! first.) Do not restore any area more than once.
$ !
$ ! Specify the Norecovery qualifier since there are additional
$ ! backup files to apply.
$ !
$ RMU/RESTORE/AREA/NOCDD/NORECOVER -
_$ DISK3:[BACKUP]TUESDAY_FULL.RBF -
_$ RDB$SYSTEM, SALARY_HISTORY, JOBS, -
_$ EMP_INFO, MF_PERS_SEGSTR/LOG
%RMU-I-RESTXT_21, Starting full restore of storage area
DISK1:[USER]MF_PERS_DEFAULT.RDA;1 at 18-JUN-1997 16:14:40.88
%RMU-I-RESTXT_21, Starting full restore of storage area
DISK1:[USER]SALARY_HISTORY.RDA;1 at 18-JUN-1997 16:14:41.28
%RMU-I-RESTXT_21, Starting full restore of storage area
DISK1:[USER]JOBS.RDA;1 at 18-JUN-1997 16:14:41.83
%RMU-I-RESTXT_21, Starting full restore of storage area
DISK1:[USER]EMP_INFO.RDA;1 at 18-JUN-1997 16:14:42.06
%RMU-I-RESTXT_21, Starting full restore of storage area
DISK1:[USER]MF_PERS_SEGSTR.RDA;1 at 18-JUN-1997 16:14:42.27
%RMU-I-RESTXT_24, Completed full restore of storage area
DISK1:[USER]JOBS.RDA;1 at 18-JUN-1997 16:14:42.49
%RMU-I-RESTXT_24, Completed full restore of storage area
DISK1:[USER]EMP_INFO.RDA;1 at 18-JUN-1997 16:14:42.74
.
.
.
%RMU-I-RESTXT_01, Initialized snapshot file
DISK1:[USER]MF_PERS_DEFAULT.SNP;1
%RMU-I-LOGINIFIL, contains 100 pages, each page
is 2 blocks long
%RMU-I-RESTXT_01, Initialized snapshot file
DISK1:[USER]EMP_INFO.SNP;1
%RMU-I-LOGINIFIL, contains 100 pages, each page
is 2 blocks long
.
.
.
%RMU-I-AIJWASON, AIJ journaling was active when
the database was backed up
%RMU-I-AIJRECFUL, Recovery of the entire database
starts with AIJ file sequence 0
%RMU-I-COMPLETED, RESTORE operation completed
at 18-JUN-1997 16:14:46.82
$ !
$ ! Complete restoring database areas by applying the most
$ ! recent full-area backup file. However, do not include
$ ! the RDB$SYSTEM table because that was already restored
$ ! in the previous restore operation. This restore
$ ! operation will attempt an automatic recovery of the .aij files.
$ !
$ RMU/RESTORE/AREA/NOCDD DISK3:[BACKUP]MONDAY_FULL.RBF -
_$ EMPIDS_LOW, EMPIDS_MID, EMPIDS_OVER, DEPARTMENTS/LOG
%RMU-I-RESTXT_21, Starting full restore of storage area
DISK1:[USER]EMPIDS_OVER.RDA;1 at 18-JUN-1997 16:20:05.08
%RMU-I-RESTXT_21, Starting full restore of storage area
DISK1:[USER]EMPIDS_MID.RDA;1 at 18-JUN-1997 16:20:05.40
%RMU-I-RESTXT_21, Starting full restore of storage area
DISK1:[USER]EMPIDS_LOW.RDA;1 at 18-JUN-1997 16:20:05.91
%RMU-I-RESTXT_21, Starting full restore of storage area
DISK1:[USER]DEPARTMENTS.RDA;1 at 18-JUN-1997 16:20:06.01
%RMU-I-RESTXT_24, Completed full restore of storage area
DISK1:[USER]EMPIDS_OVER.RDA;1 at 18-JUN-1997 16:20:06.24
.
.
.
%RMU-I-RESTXT_01, Initialized snapshot file
DISK1:[USER]DEPARTMENTS.SNP;1
%RMU-I-LOGINIFIL, contains 100 pages, each page
is 2 blocks long
%RMU-I-RESTXT_01, Initialized snapshot file
DISK1:[USER]EMPIDS_LOW.SNP;1
%RMU-I-LOGINIFIL, contains 100 pages, each page
is 2 blocks long
.
.
.
%RMU-I-AIJWASON, AIJ journaling was active when
the database was backed up
%RMU-I-AIJRECFUL, Recovery of the entire database
starts with AIJ file sequence 0
%RMU-I-AIJRECARE, Recovery of area DEPARTMENTS starts
with AIJ file sequence 0
%RMU-I-AIJRECARE, Recovery of area EMPIDS_LOW starts
with AIJ file sequence 0
%RMU-I-AIJRECARE, Recovery of area EMPIDS_MID starts
with AIJ file sequence 0
%RMU-I-AIJRECARE, Recovery of area EMPIDS_OVER starts
with AIJ file sequence 0
%RMU-I-AIJBADAREA, inconsistent storage area
DISK1:[USER]DEPARTMENTS.RDA;1 needs AIJ sequence number 0
%RMU-I-AIJBADAREA, inconsistent storage area
DISK1:[USER]EMPIDS_LOW.RDA;1 needs AIJ sequence number 0
.
.
.
%RMU-I-LOGRECDB, recovering database file
DISK1:[USER]MF_PERSONNEL.RDB;1
%RMU-I-AIJAUTOREC, starting automatic after-image
journal recovery
%RMU-I-LOGOPNAIJ, opened journal file DISK2:[CORP]AIJ_ONE.AIJ;1
%RMU-I-AIJONEDONE, AIJ file sequence 0 roll-forward
operations completed
%RMU-I-LOGRECOVR, 1 transaction committed
%RMU-I-LOGRECOVR, 0 transactions rolled back
%RMU-I-LOGRECOVR, 2 transactions ignored
%RMU-I-AIJNOACTIVE, there are no active transactions
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJALLDONE, after-image journal roll-forward
operations completed
%RMU-I-LOGSUMMARY, total 1 transaction committed
%RMU-I-LOGSUMMARY, total 0 transactions rolled back
%RMU-I-LOGSUMMARY, total 2 transactions ignored
%RMU-I-AIJSUCCES, database recovery completed successfully
%RMU-I-AIJGOODAREA, storage area
DISK1:[USER]DEPARTMENTS.RDA;1 is now consistent
%RMU-I-AIJGOODAREA, storage area
DISK1:[USER]EMPIDS_LOW.RDA;1 is now consistent
%RMU-I-AIJGOODAREA, storage area
DISK1:[USER]EMPIDS_MID.RDA;1 is now consistent
.
.
.
%RMU-I-AIJFNLSEQ, to start another AIJ file recovery,
the sequence number needed will be 0
%RMU-I-COMPLETED, RESTORE operation completed at
18-JUN-1997 16:20:11.45
$ !
$ ! The database is now restored and recovered. However, if
$ ! for some reason the automatic .aij file recovery was not
$ ! possible (for example, if you had backed up the .aij files),
$ ! apply the .aij files in the same order in
$ ! which they were created. That is, if .aij files were backed
$ ! up each night, apply aij_mon.aij first and aij_tues.aij second.
Example 7
The following example demonstrates the use of the Directory,
File, and Root qualifiers. First, the database is backed up, then
a couple of storage area files and a snapshot file are moved. The
restore-only-root operation does the following:
o The default directory is specified as DISK2:[DIR].
o The target directory and file name for the database root file
are specified with the Root qualifier. The target directory
specified with the Root qualifier overrides the default
directory specified with the Directory qualifier. Thus, the
RMU Restore Only_Root process restores the database root in
DISK3:[ROOT] and names it COPYRDB.RDB.
o The target directory for the EMPIDS_MID storage area is
DISK4:[FILE]. The RMU Restore Only_Root process updates the
database root file to indicate that EMPIDS_MID currently
resides in DISK4:[FILE].
o The target for the EMPIDS_MID snapshot file is
DISK5:[SNAP]EMPIDS_MID.SNP. Thus, the RMU Restore Only_
Root process updates the database root file to indicate
that the EMPIDS_MID snapshot file currently resides in
DISK5:[SNAP]EMPIDS_MID.SNP.
o The target file name for the EMPIDS_LOW storage area is
EMPIDS. Thus, the RMU Restore Only_Root process updates
the database root file to indicate that the EMPIDS_LOW
storage area currently resides in the DISK2 default directory
(specified with the Directory qualifier), and the file is
currently named EMPIDS.RDA.
o The target for the EMPIDS_LOW snapshot file is
DISK5:[SNAP]EMPIDS.SNP. Thus, the RMU Restore Only_
Root process updates the database root file to indicate
that the EMPIDS_LOW snapshot file currently resides in
DISK5:[SNAP]EMPIDS.SNP.
o Data for all the other storage area files and snapshot files
remain unchanged in the database root file.
$ ! Back up the database:
$ !
$ RMU/BACKUP MF_PERSONNEL.RDB MF_PERSONNEL.RBF
$ !
$ ! Move a couple of storage areas and a snapshot file:
$ !
$ RMU/MOVE_AREA MF_PERSONNEL.RDB -
_$ /DIRECTORY=DISK2:[DIR] -
_$ EMPIDS_MID/FILE=DISK4:[FILE] -
_$ /SNAPSHOT=(FILE=DISK3:[SNAP]EMPIDS_MID.SNP), -
_$ EMPIDS_LOW/FILE=EMPIDS -
_$ /SNAPSHOT=(FILE=DISK5:[SNAP]EMPIDS.SNP)
$ !
$ ! Database root is lost. Restore the root and update the
$ ! locations of the moved storage areas and snapshot file as
$ ! recorded in the database root file because the locations
$ ! recorded in the backup file from which the root is restored
$ ! are not up-to-date:
$ !
$ RMU/RESTORE/ONLY_ROOT MF_PERSONNEL.RBF -
_$ /ROOT=DISK3:[ROOT]MF_PERSONNEL.RDB -
_$ EMPIDS_MID/FILE=DISK4:[FILE] -
_$ /SNAPSHOT=(FILE=DISK2:[DIR]EMPIDS_MID.SNP), -
_$ EMPIDS_LOW/FILE=DISK2:[DIR]EMPIDS -
_$ /SNAPSHOT=(FILE=DISK5:[SNAP]EMPIDS.SNP)
29 – Server After Journal
There are three RMU Server After_Journal commands, as follows:
o The RMU Server After_Journal Start command starts the AIJ log
server (ALS).
o The RMU Server After_Journal Stop command stops the ALS.
o The RMU Server After_Journal Reopen_Output command allows you
to close and reopen the output file specified with the RMU
Server After_Journal Start command.
29.1 – Reopen Output
Allows you to close the current AIJ log server (ALS) output file
for the specified database and open a new one. This allows you to
see the current contents of the original ALS output file.
29.1.1 – Description
The RMU Server After_Journal Reopen_Output command allows you
to reopen an ALS output file that was previously created with an
RMU Server After_Journal Start command with the Output qualifier.
(The ALS output file is opened for exclusive access by the ALS
process.)
Reopening the output file results in the current output file
being closed and a new output file being created. The new output
file has the same file name as the original output file, but its
version number is incremented by one.
The ALS is an optional process that flushes log data to the
after-image journal (.aij) file. All database servers deposit
transaction log data in a cache located in the database global
section. If the ALS is active, it continuously flushes the log
data to disk. Otherwise, server processes might block temporarily
if the cache in the global section is full.
29.1.2 – Format
RMU/Server After_Journal Reopen_Output root-file-spec
29.1.3 – Parameters
29.1.3.1 – root-file-spec
Specifies the database root file for which you want to reopen the
ALS output file.
29.1.4 – Usage Notes
o To use the RMU Server After_Journal Reopen_Output command for
a database, you must have RMU$OPEN privilege in the root file
access control list (ACL) for the database or the OpenVMS
WORLD privilege.
o To issue the RMU Server After_Journal Reopen_Output command
successfully, the database must be open. Other users can be
attached to the database when this command is issued.
o To determine whether the ALS is running, use the RMU Show
Users command.
29.1.5 – Examples
Example 1
In the following example the first Oracle RMU command starts the
log server and specifies an output file. The second Oracle RMU
command reopens the ALS output file, so you can view the data
that is contained in the ALS output file so far.
$ RMU/SERVER AFTER_JOURNAL START MF_PERSONNEL/OUT=ALS
$ ! Database updates occur
$ RMU/SERVER AFTER_JOURNAL REOPEN_OUTPUT MF_PERSONNEL
$ ! View the ALS.OUT;-1 file:
$ TYPE ALS.OUT;-1
--------------------------------------------------------------------
16-OCT-1995 13:02:05.21 - Oracle Rdb V7.0-00 database utility started
---------------------------------------------------------------------
.
.
.
29.2 – Start
Allows you to manually start the AIJ log server (ALS) for the
specified database and specify a file for the AIJ log server
output.
29.2.1 – Description
The ALS is an optional process that flushes log data to the
after-image journal (.aij) file. All database servers deposit
transaction log data in a cache located in the database global
section. If the ALS is active, it continuously flushes the log
data to disk. Otherwise, server processes might block temporarily
if the cache in the global section is full. The ALS should be
started only when AIJ processing is a bottleneck. Typically,
multiuser databases with medium to high update activity can
benefit from using the ALS.
You can start the ALS either manually, using the RMU Server
After_Journal Start command, or automatically when the database
is opened (by specifying LOG SERVER IS AUTOMATIC in the SQL ALTER
DATABASE command). By default, the ALS startup is set to manual.
29.2.2 – Format
RMU/Server After_Journal Start root-file-spec

Command Qualifier                      Default

/Output=file-spec                      See description
29.2.3 – Parameters
29.2.3.1 – root-file-spec
Specifies the database root file for which you want to start the
ALS.
29.2.4 – Command Qualifiers
29.2.4.1 – Output
Output=file-spec
Specifies the file for the ALS output file. Use this qualifier
in anticipation of issuing an RMU Server After_Journal Reopen_
Output command. By specifying the output file, you will know the
location of, and therefore can view, the ALS output file.
By default, the ALS output file is not available to the user.
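For example, a command such as the following starts the ALS and
directs its output to a known location (the disk, directory, and
file names here are illustrative, not required values):
$ RMU/SERVER AFTER_JOURNAL START MF_PERSONNEL /OUTPUT=DISK1:[LOGS]ALS.OUT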
29.2.5 – Usage Notes
o To use the RMU Server After_Journal Start command for a
database, you must have RMU$OPEN privilege in the root file
access control list (ACL) for the database or the OpenVMS
WORLD privilege.
o The ALS can be started only if the database is open and if
after-image journaling is enabled.
o The RMU Server After_Journal Start command can be issued while
users are attached to the database.
o If the ALS process stops abnormally, regardless of whether the
current setting of the ALS is automatic or manual, the only
way to restart it is to use the RMU Server After_Journal Start
command.
o To determine whether the ALS is running, use the RMU Show
Users command.
o Any errors encountered when you try to start the ALS are
logged in the monitor log file. Use the RMU Show System
command to find the location of the monitor log file.
29.2.6 – Examples
Example 1
The following Oracle RMU command starts the log server.
$ RMU/SERVER AFTER_JOURNAL START MF_PERSONNEL
29.3 – Stop
Allows you to manually stop the AIJ log server (ALS) for the
specified database.
29.3.1 – Description
The ALS is an optional process that flushes log data to the
after-image journal (.aij) file. All database servers deposit
transaction log data in a cache located in the database global
section. If the ALS is active, it continuously flushes the log
data to disk. Otherwise, server processes might block temporarily
if the cache in the global section is full.
29.3.2 – Format
RMU/Server After_Journal Stop root-file-spec

Command Qualifier                      Default

/Output=file-name                      See description
29.3.3 – Parameters
29.3.3.1 – root-file-spec
Specifies the database root file for which you want to stop the
ALS.
29.3.4 – Command Qualifiers
29.3.4.1 – Output
Output=file-name
Allows you to specify the file where the operational log is to be
created. The operational log records the transmission and receipt
of network messages.
If you do not include a directory specification with the
file name, the log file is created in the database root file
directory. It is invalid to include a node name as part of the
file name specification.
Note that all Hot Standby bugcheck dumps are written to the
corresponding bugcheck dump file; bugcheck dumps are not written
to the file you specify with the Output qualifier.
29.3.5 – Usage Notes
o To use the RMU Server After_Journal Stop command for a
database, you must have RMU$OPEN privilege in the root file
access control list (ACL) for the database or the OpenVMS
WORLD privilege.
o To issue the RMU Server After_Journal Stop command
successfully, the database must be open. Other users can be
attached to the database.
o If the ALS process stops abnormally, regardless of whether the
current setting of the ALS is automatic or manual, the only
way to restart it is to use the RMU Server After_Journal Start
command.
o To determine whether the ALS is running, use the RMU Show
Users command.
o If database replication is active and you attempt to stop
the database AIJ log server, Oracle Rdb returns an error. You
must stop database replication before attempting to stop the
server.
29.3.6 – Examples
Example 1
The following example stops the log server.
$ RMU/SERVER AFTER_JOURNAL STOP MF_PERSONNEL
30 – Server Backup Journal
There are two RMU Server Backup_Journal commands, as follows:
o The RMU Server Backup_Journal Suspend command suspends .aij
backup operations.
o The RMU Server Backup_Journal Resume command allows .aij
backup operations to resume after they have been suspended.
30.1 – Resume
Allows you to reinstate the ability to perform AIJ backup
operations after they have been manually suspended with the RMU
Server Backup_Journal Suspend command.
30.1.1 – Description
When you issue the RMU Server Backup_Journal Suspend command,
after-image journal (AIJ) backup operations are temporarily
suspended. Use the RMU Server Backup_Journal Resume command to
reinstate the ability to back up .aij files.
The RMU Server Backup_Journal Resume command must be issued from
the same node from which AIJ backup operations were originally
suspended. If you attempt to resume AIJ backup operations from
another database node, the following errors are returned:
%RDMS-F-CANTRESUMEABS, error resuming AIJ backup operations
-RDMS-F-ABSNSUSPENDED, AIJ backup operations not suspended
%RMU-F-FATALRDB, Fatal error while accessing Oracle Rdb.
30.1.2 – Format
RMU/Server Backup_Journal Resume root-file-spec

Command Qualifier                      Default

/[No]Log                               Current DCL verify value
30.1.3 – Parameters
30.1.3.1 – root-file-spec
Specifies the database root file for which you want to resume AIJ
backup operations.
30.1.4 – Command Qualifiers
30.1.4.1 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request log output and
the Nolog qualifier to prevent it. If you specify neither, the
default is the current setting of the DCL verify switch. (The DCL
SET VERIFY command controls the DCL verify switch.)
30.1.5 – Usage Notes
o To use the RMU Server Backup_Journal Resume command for a
database, you must have RMU$OPEN privilege in the root file
access control list (ACL) for the database or the OpenVMS
WORLD privilege.
o To determine whether AIJ backup operations have been
suspended, use the RMU Show Users command.
30.1.6 – Examples
Example 1
The following example demonstrates how to reinstate the ability
to perform backup operations.
$ RMU/SERVER BACKUP_JOURNAL RESUME MF_PERSONNEL.RDB
30.2 – Suspend
Allows you to temporarily suspend .aij backup operations on all
database nodes. While suspended, you cannot back up .aij files
manually (with the RMU Backup After_Journal command), nor will the
AIJ backup server (ABS) perform .aij backup operations.
30.2.1 – Description
When you issue the RMU Server Backup_Journal Suspend command,
after-image journal (AIJ) backup operations are temporarily
suspended. However, the suspended state is not stored in the
database root file. Thus, if the node from which the AIJ backup
operations were suspended fails, then AIJ backup operations by
the AIJ Backup Server (ABS) are automatically resumed (assuming
the ABS was running prior to the suspension).
The purpose of the RMU Server Backup_Journal Suspend command is to
temporarily suspend AIJ backup operations during a period of
time when backing up .aij files would prevent subsequent commands
from operating properly. For example, if you have a Hot Standby
database, the time from when the master database is backed up
to the time that database replication could commence might be
long. During this period, .aij backup operations would prevent
the replication from starting. (See the Oracle Rdb7 and Oracle
CODASYL DBMS: Guide to Hot Standby Databases for information on
Hot Standby databases.)
The solution to this problem is to use the RMU Server Backup_
Journal Suspend command to suspend AIJ backups from the time
just prior to beginning the database backup until after database
replication commences.
AIJ backup operations are suspended until any of the following
events occur:
o The database is closed on the node from which AIJ backup
operations were suspended.
o The node from which AIJ backup operations were suspended
fails.
o Database replication is started on the node from which AIJ
backup operations were suspended, as a master database.
o AIJ backup operations are explicitly resumed on the node from
which AIJ backup operations were suspended. (This occurs when
you issue the RMU Server Backup_Journal Resume command. See
the Server_Backup_Journal help entry for details.)
30.2.2 – Format
RMU/Server Backup_Journal Suspend root-file-spec

Command Qualifier                      Default

/[No]Log                               Current DCL verify value
30.2.3 – Parameters
30.2.3.1 – root-file-spec
Specifies the database root file for which you want to suspend
AIJ backup operations.
30.2.4 – Command Qualifiers
30.2.4.1 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request log output and
the Nolog qualifier to prevent it. If you specify neither, the
default is the current setting of the DCL verify switch. (The DCL
SET VERIFY command controls the DCL verify switch.)
30.2.5 – Usage Notes
o To use the RMU Server Backup_Journal Suspend command for a
database, you must have RMU$OPEN privilege in the root file
access control list (ACL) for the database or the OpenVMS
WORLD privilege.
o To determine whether AIJ backup operations have been
suspended, use the RMU Show Users command.
30.2.6 – Examples
Example 1
The following example first suspends .aij backup operations, then
issues the RMU Show Users command to confirm that suspension has
occurred. If you attempt an .aij backup operation, you receive
the %RMU-F-LCKCNFLCT error message.
$ RMU/SERVER BACKUP_JOURNAL SUSPEND MF_PERSONNEL.RDB
$ RMU/SHOW USERS MF_PERSONNEL.RDB
. . .
* After-image backup operations temporarily suspended
from this node
. . .
$ RMU/BACKUP/AFTER_JOURNAL MF_PERSONNEL.RDB AIJ_BACKUP.AIJ
%RMU-F-LCKCNFLCT, lock conflict on AIJ backup
31 – Server Record Cache
Server Record_Cache Checkpoint
Allows the database administrator to force the Record Cache
Server (RCS) process to checkpoint all modified rows from cache
back to the database.
31.1 – Description
When you use row caches, it is possible for a large number
of database records to be modified in row cache areas. These
modified records must be written to the physical database files
on disk at various times, such as backing up or verifying
the database, or when closing the database. The RMU Server
Record_Cache Checkpoint command causes the RCS process to
immediately write all modified records from all row cache areas
back to the physical database files on disk.
If there are a large number of modified records to be written
back to the database, this operation can take a long time.
31.2 – Format
RMU/Server Record_Cache Checkpoint root-file-spec

Command Qualifiers                     Defaults

/[No]Log                               Current DCL verify value
/[No]Wait                              /NoWait
31.3 – Parameters
31.3.1 – root-file-spec
Specifies the database root file for which you want to checkpoint
all modified rows.
31.4 – Command Qualifiers
31.4.1 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request log output and
the Nolog qualifier to prevent it. If you specify neither, the
default is the current setting of the DCL verify switch. (The DCL
SET VERIFY command controls the DCL verify switch.)
31.4.2 – Wait
Wait
Nowait
Specifies whether the Oracle RMU operation returns immediately
(Nowait) or waits for the record cache server to complete the
checkpoint before returning control to the user. The default is
Nowait.
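For example, the following command forces an RCS checkpoint for
the sample database, waits until the checkpoint completes, and
logs the operation to SYS$OUTPUT:
$ RMU/SERVER RECORD_CACHE CHECKPOINT MF_PERSONNEL /WAIT/LOG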
32 – Set
32.1 – After Journal
Allows you to do any of the following with respect to after-image
journal (.aij) files:
o Enable or disable after-image journaling.
o Alter an .aij file (the change takes effect only when the .aij
file is re-created).
o Add, drop, modify, or reserve .aij files.
o Suppress the use of an .aij file.
o Add AIJ caches.
o Set the initial .aij file allocation.
o Set the .aij file extent (for extensible journals).
o Enable or disable .aij file overwriting.
o Send OpenVMS operator communication manager (OPCOM) messages
when specific after-image journal events occur.
o Set the shutdown timeout period.
NOTE
Prior to Oracle Rdb Version 6.0, the ability to alter an
.aij file name was provided through the RdbALTER DEPOSIT
ROOT command. Beginning with Oracle Rdb Version 6.0, the
RdbALTER DEPOSIT ROOT command no longer provides this
capability; use the Alter qualifier with the RMU Set After_
Journal command instead.
32.1.1 – Description
Many of the RMU Set After_Journal functions are also available
through the use of the following SQL ALTER DATABASE clauses:
ADD JOURNAL clause
DROP JOURNAL clause
ALTER JOURNAL clause
32.1.2 – Format
RMU/Set After_Journal root-file-spec

Command Qualifiers                     Defaults

/Add=(keyword[,...])                   No journals added
/Aij_Options=OptionsFile               None
/Allocation=number-blocks              See description
/Alter=(keyword[,...])                 No journals altered
/Backups=(keyword_list)                See description
/[No]Cache=file                        See description
/Disable                               None
/Drop=(Name=name)                      No journals deleted
/Enable                                None
/Extent=number-blocks                  See description
/[No]Log                               Current DCL verify value
/[No]Notify=(operator-class-list)      See description
/[No]Overwrite                         None
/Reserve=number-journals               None
/Shutdown_Timeout=minutes              60 minutes
/Suppress=(Name=name)                  No journals suppressed
/Switch_Journal                        None
32.1.3 – Parameters
32.1.3.1 – root-file-spec
Specifies the database root file for which you want to enable
journaling or set .aij file characteristics.
32.1.4 – Command Qualifiers
32.1.4.1 – Add
Add=(keyword, ...)
Adds an .aij file to the after-image journal file configuration.
You can add an .aij file while users are attached to the
database. If you specify the Suppress, Drop, or Alter qualifiers
in the same RMU Set After_Journal command, they are processed
before the Add qualifier. The Add qualifier can appear several
times in the same command.
Specify an .aij file to add by using the following keywords:
o Name=name
Specifies a unique name for the after-image journal object
to be added. An after-image journal object is the .aij file
specification plus all of its attributes, such as allocation,
extent, and backup file name.
This keyword is required.
o File=file
Specifies the file for the journal to be added. This keyword
is required. If you provide only a file name rather than a full
file specification, the file is placed in your current directory.
If more than one journal resides in the same
directory, each journal must have a unique file name. However,
each fixed-size journal file should be located on a separate
device. This minimizes risks associated with journal loss or
unavailability should a device fail or be brought off line.
For example, if two or more journal files reside on the same
failed device, the loss of information or its unavailability
is far greater than that of a single journal file.
o Backup_File=file
Specifies the file to be used for automatic backup operations.
This keyword is optional. If you specify a file name, but
not a file extension, the .aij file extension is used by
default. If you supply only a file name (not a complete file
specification), the backed up .aij file is placed in the
database root file directory.
o Edit_Filename=(option)
Specifies an edit string to apply to the backup file
when an .aij is backed up automatically. This keyword is
optional. However, if it is specified, the Backup_File=file
keyword must be specified also. When you specify the Edit_
Filename=(options) keyword, the .aij backup file name is
modified by appending the options you specify.
See the description of the Edit_Filename keyword for the
Backups qualifier for a list of the available keyword options.
This keyword and the options you specify affect the backup
file name of the .aij file specified with the associated Name
keyword only. If you want the same edit string applied to all
backed up .aij files, you might find it more efficient to use
the Backups qualifier with the Edit_Filename keyword instead
of the Add qualifier with the Edit_Filename keyword.
If you use a combination of the Edit_Filename keyword with the
Add qualifier and the Edit_Filename keyword with the Backups
qualifier, the Add qualifier keyword takes precedence over the
Backups qualifier keyword for the named .aij file. In other
words, the options you specify with the Edit_Filename keyword
on the Backups qualifier are applied to all backed up .aij
files except those for which you explicitly specify the Edit_
Filename keyword with the Add qualifier. See Example 6.
This keyword is useful for creating meaningful file names for
your backup files and makes file management easier.
o Allocation=number-blocks
Sets the initial size, in disk blocks, of the .aij file. If
this keyword is omitted, the default allocation is used.
The minimum valid value is 512; the maximum value is eight
million. The default is 512.
See the Oracle Rdb Guide to Database Maintenance for guidance
on setting the allocation size.
o Extent=number-blocks
Specifies the maximum size, in blocks, by which to extend an
.aij file if it is, or becomes, an extensible .aij file. (If the
number of available after-image journal files falls to one,
extensible journaling is employed.)
If there is insufficient free space on the .aij file device,
the journal is extended using a smaller extension value than
specified. However, the minimum, and default, extension size
is 512 blocks.
See the Oracle Rdb Guide to Database Maintenance for guidance
on setting the extent size.
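For example, a command such as the following adds a new journal
with an explicit allocation and backup file (the journal name,
device names, and allocation value here are illustrative):
$ RMU/SET AFTER_JOURNAL MF_PERSONNEL -
_$ /ADD=(NAME=AIJ3, FILE=DISK3:[AIJ]AIJ3, -
_$ ALLOCATION=1024, BACKUP_FILE=DISK6:[BCK]AIJ3_BACKUP)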
32.1.4.2 – AIJ Options
AIJ_Options=OptionsFile
Specifies an options file name. The default extension is .opt.
The OptionsFile is the same as that generated by an RMU Show
After_Journal command and is also used by the RMU Copy_Database,
Move_Area, Restore, and Restore Only_Root commands. The AIJ_
Options qualifier may be used alone or in combination with other
RMU Set After_Journal command qualifiers.
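For example, assuming an options file named AIJ_CONFIG.OPT was
generated previously with an RMU Show After_Journal command (the
file name is illustrative):
$ RMU/SET AFTER_JOURNAL MF_PERSONNEL /AIJ_OPTIONS=AIJ_CONFIG.OPT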
32.1.4.3 – Allocation
Allocation=number-blocks
Sets the default .aij file allocation. You can change the
allocation while users are attached to the database. If the
Allocation qualifier is omitted, the default allocation is
unchanged.
The minimum value you can specify is 512. The default is also
512.
See the Oracle Rdb Guide to Database Maintenance for guidance on
setting the allocation size.
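For example, the following command sets the default journal
allocation to 2048 blocks (the block count is illustrative):
$ RMU/SET AFTER_JOURNAL MF_PERSONNEL /ALLOCATION=2048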
32.1.4.4 – Alter
Alter=(keyword,...)
Specifies that an after-image journal object be altered.
You can alter an after-image journal object while users are
attached to the database. The Alter qualifier can be used
several times within the same RMU Set After_Journal command.
If you specify a previously suppressed .aij file with the
Alter qualifier, that named .aij file is unsuppressed. Oracle
RMU performs this unsuppress action as soon as the command is
processed.
The changes specified by the Alter qualifier are stored in the
database root file (and thus are visible in the dump file if you
issue an RMU Dump command), but the changes are not applied to
the .aij file until it is re-created (or backed up, in the case
of the Backup_File= file keyword). A new extensible .aij file is
re-created, for example, when the following are true:
o Fast commit is enabled.
o Extensible after-image journaling is being used.
o Users are actively updating the database.
o You issue an RMU Backup After_Journal command with the
Noquiet_Point qualifier.
Backing up an extensible .aij file does not ensure that a new
.aij file will be created. In most cases, the existing .aij file
is truncated and reused.
Specify an after-image journal object to alter by using the
following keywords:
o Name=name
Specifies the name of the after-image journal object. This
is a required keyword that must match the name of an existing
after-image journal object.
o File=file
This option only takes effect if a journal is, or becomes,
an extensible .aij file and only when that journal is re-
created. This option allows you to supply a new .aij file
specification to be used for the extensible .aij file if and
when it is re-created. This can be used to move the re-created
.aij file to a new location. If you provide only a file name
rather than a full file specification, the file is placed in
your current directory. See the general description of the
Alter qualifier for an example of when an extensible .aij file
is re-created.
This option cannot be used to move a fixed-size .aij file. To
move a fixed-size .aij file, you must first create a new .aij
file and then drop the existing .aij file.
This keyword is optional.
o Backup_File=file
Specifies a new file to be used for automatic backup
operations.
This keyword is optional.
o Edit_Filename=(options)
Specifies a new edit string to apply to the backup file
name of the named .aij file when the .aij is backed up
automatically. This keyword is optional. See the description
of the Edit_Filename keyword for the Backups qualifier for a
list of the available keyword options.
o Allocation=number-blocks
Specifies the initial size of the .aij file that is re-created
if that file is, or becomes, a fixed-size .aij file.
o Extent=number-blocks
Specifies the extent size of the .aij file that is re-created
if it is, or becomes, extensible.
See the Oracle Rdb Guide to Database Maintenance for guidance
on setting the extent size.
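For example, a command such as the following changes the
automatic backup file recorded for an existing journal (the
journal name and backup file specification are illustrative):
$ RMU/SET AFTER_JOURNAL MF_PERSONNEL -
_$ /ALTER=(NAME=AIJ1, BACKUP_FILE=DISK6:[BCK]AIJ1_BACKUP)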
32.1.4.5 – Backups
Backups=(keyword_list)
Specifies options to control the AIJ backup server. You can
select one or more of the following keywords:
o Automatic
Specifies that the AIJ backup server will run automatically,
as required. You cannot specify both the Automatic and Manual
keywords. If neither the Automatic nor the Manual keyword is
specified, the backup server state is unchanged.
o Manual
Specifies that the RMU Backup After_Journal command will be
used to back up the .aij files. The AIJ backup server will
not run automatically. You cannot specify both Automatic
and Manual keywords. If neither the Automatic nor the Manual
keyword is specified, the backup server state is unchanged.
o Backup_File=file
Specifies a default file specification for the AIJ backup
server to use as the backup file name if no backup file name
is associated with the .aij file to be backed up.
o Nobackup_File
Specifies that there is no default backup file specification.
Omission of this keyword retains the current default backup
file specification.
o Edit_Filename=(options)
The Edit_Filename keyword specifies an edit string to apply
to .aij files when they are backed up automatically. When
the Edit_Filename=(options) keyword is used, the .aij backup
file names are edited by appending any or all of the values
specified by the following options to the backup file name:
- Day_Of_Year
The current day of the year expressed as a 3-digit integer
(001 to 366).
- Day_Of_Month
The current day of the month expressed as a 2-digit integer
(01 to 31).
- Hour
The current hour of the day expressed as a 2-digit integer
(00 to 23).
- Julian_Date
The number of days elapsed since 17-Nov-1858.
- Minute
The current minute of the hour expressed as a 2-digit
integer (00 to 59).
- Month
The current month expressed as a 2-digit integer (01 to
12).
- Sequence
The journal sequence number of the first journal in the
backup operation.
- Vno
Synonymous with the Sequence option. See the description of
the Sequence option.
- Year
The current year (A.D.) expressed as a 4-digit integer.
If you specify more than one option, place a comma between
each option.
The edit is performed in the order specified. For example, the
file backup.aij and the keyword EDIT_FILENAME=(HOUR, MINUTE,
MONTH, DAY_OF_MONTH, SEQUENCE) creates a file with the name
backup_160504233.aij when journal 3 is backed up at 4:05 P.M.
on April 23rd.
You can make the name more readable by inserting quoted
strings between the Edit_Filename options. For example, the
following qualifier adds the string "$30_0155-2" to the .aij
file name if the day of the month is the 30th, the time is
01:55, and the sequence number is 2:
/EDIT_FILENAME=("$",DAY_OF_MONTH,"_",HOUR,MINUTE,"-",SEQUENCE)
This keyword is useful for creating meaningful file names for
your backup files and makes file management easier.
If you use a combination of the Edit_Filename keyword with
the Add qualifier and the Edit_Filename keyword with the
Backups qualifier, the Add qualifier keyword takes precedence
over the Backups qualifier keyword for the named .aij file.
In other words, the options you specify with the Edit_Filename
keyword on the Backups qualifier are applied to all backed up
.aij files except those for which you explicitly specify the
Edit_Filename keyword with the Add qualifier. See Example 6.
o Quiet_Point
Specifies that the after-image journal backup operation is
to acquire the quiet-point lock prior to performing an .aij
backup operation for the specified database. This option
(as with all the other Backup options) affects only the
database specified in the RMU Set After_Journal command line.
For information on specifying that the quiet-point lock be
acquired before any .aij backup operation is performed on a
system, see the Usage Notes.
o Noquiet_Point
Specifies that the after-image journal backup operation will
not acquire the quiet-point lock prior to performing an .aij
backup operation for the specified database. This option (as
with all the other Backup options) affects only the database
specified in the RMU Set After_Journal command line. For
information on specifying that the quiet-point lock will not
be acquired prior to any .aij backup operations performed on a
system, see the Usage Notes.
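For example, the following command enables automatic backups
with a default backup file and a sequence-number edit string
(the backup file specification is illustrative):
$ RMU/SET AFTER_JOURNAL MF_PERSONNEL -
_$ /BACKUPS=(AUTOMATIC, BACKUP_FILE=DISK6:[BCK]AIJ_BACKUP, -
_$ EDIT_FILENAME=(SEQUENCE))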
32.1.4.6 – Cache
Cache=file
Nocache
Specifies an after-image journal cache file specification on a
solid-state disk. If the Cache qualifier is specified, after-
image journal caches are enabled. If you specify a file name, but
not a file extension, the file extension .aij is used by default.
If the Nocache qualifier is specified, AIJ caches are disabled.
You can use this qualifier only when users are detached from the
database.
This file must be written to a solid-state disk. If a solid-state
disk is not available, after-image journal caching should not be
used. Unless you are involved in a high performance, high-volume
environment, you probably do not need the features provided by
this qualifier.
You can determine whether the cache file is accessible by
executing the RMU Dump command with the Header qualifier. If
caching is enabled, but the cache file is unavailable, the cache
file is marked inaccessible and after-image journaling continues
as if caching were disabled. Once the cache file has been marked
inaccessible, it will remain so marked until either the existing
cache file is dropped from the database, or a new cache file is
added to the database (even if this is the same cache file as was
previously used).
If this qualifier is omitted, the AIJ cache state remains
unchanged.
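For example, assuming a solid-state disk device is available (the
device and file names here are illustrative):
$ RMU/SET AFTER_JOURNAL MF_PERSONNEL /CACHE=SSD1:[AIJ]AIJ_CACHE.AIJ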
32.1.4.7 – Disable
Disable
Disables after-image journaling if it has already been enabled.
If after-image journaling has already been disabled, this
qualifier has no effect. You can specify the Disable qualifier
only when users are detached from the database.
When the Disable qualifier and other qualifiers are specified
with the RMU Set After_Journal command, after-image journaling is
disabled before other requested operations.
There is no default for the Disable qualifier. If you do not
specify either the Disable or Enable qualifier, the after-image
journaling state remains unchanged.
32.1.4.8 – Drop
Drop=(Name=name)
Specifies that the named after-image journal object be deleted.
You can drop an after-image journal object while users are
attached to the database, but the named after-image journal
object must not be the current .aij file or be waiting to be
backed up. When the Drop qualifier is specified with the Alter
or Add qualifiers on the RMU Set After_Journal command, the named
after-image journal object is dropped before any after-image
journal objects are altered or added.
Each after-image journal object to be deleted is specified by
the required keyword, Name=name. This specifies the name of the
after-image journal object to be dropped, which must match the
name of an existing after-image journal object.
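For example, the following command drops the after-image journal
object named AIJ3 (an illustrative name) from the sample
mf_personnel database; AIJ3 must not be the current .aij file or be
waiting to be backed up:
$ RMU/SET AFTER_JOURNAL/DROP=(NAME=AIJ3) MF_PERSONNEL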
32.1.4.9 – Enable
Enable
Enables after-image journaling if it has been disabled. You can
specify the Enable qualifier only when users are detached from
the database and at least one unmodified .aij file is available
(unless you also specify the Overwrite qualifier). After-image
journaling is enabled after other specified qualifiers have been
processed.
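For example, the following command re-enables after-image
journaling for the sample mf_personnel database, assuming users are
detached and at least one unmodified .aij file is available:
$ RMU/SET AFTER_JOURNAL/ENABLE MF_PERSONNEL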
32.1.4.10 – Extent
Extent=number-blocks
Sets the size, in blocks, of the default .aij file extension.
This qualifier has no effect on fixed-length .aij files. This
qualifier can be used while users are attached to the database.
The minimum valid number-blocks value is 512. The default is also
512.
If the Extent qualifier is omitted, the default extension remains
unchanged.
See the Oracle Rdb Guide to Database Maintenance for guidance on
setting the extent size.
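For example, the following command sets the default .aij file
extension for the sample mf_personnel database to 1024 blocks (the
value shown is illustrative):
$ RMU/SET AFTER_JOURNAL/EXTENT=1024 MF_PERSONNEL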
32.1.4.11 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request log output and
the Nolog qualifier to prevent it. If you specify neither, the
default is the current setting of the DCL verify switch. (The DCL
SET VERIFY command controls the DCL verify switch.)
32.1.4.12 – Notify
Notify=(operator-class-list)
Nonotify
Sets the operator notification state for after-image journaling
and selects the operators to be notified when the journaling
state changes. Oracle RMU uses the OpenVMS operator communication
manager (OPCOM). The following events trigger operator
notification:
o An error writing to an .aij file.
o No .aij file is available for write operations.
o The .aij file has been overwritten.
o The RMU Backup After_Journal command fails.
You can use this qualifier while users are attached to the
database. If you specify the Nonotify qualifier, operator
notification is disabled. If the qualifier is omitted, the
operator notification state is unchanged.
The operator classes follow:
o [No]All
The All operator class broadcasts a message to all terminals
that are attached to the system or cluster. These terminals
must be turned on and have broadcast-message reception
enabled. The Noall operator class inhibits the display of
messages to the entire system or cluster.
o [No]Central
The Central operator class broadcasts messages to the central
system operator. The Nocentral operator class inhibits the
display of messages to the central system operator.
o [No]Disks
The Disks operator class broadcasts messages pertaining to
mounting and dismounting disk volumes. The Nodisks operator
class inhibits the display of messages pertaining to mounting
and dismounting disk volumes.
o [No]Cluster
The Cluster operator class broadcasts messages from the
connection manager pertaining to cluster state changes. The
Nocluster operator class inhibits the display of messages from
the connection manager pertaining to cluster state changes.
o [No]Security
The Security operator class displays messages pertaining to
security events. The Nosecurity operator class inhibits the
display of messages pertaining to security events.
o [No]Oper1 through [No]Oper12
The Oper1 through Oper12 operator classes display messages
to operators identified as OPER1 through OPER12. The Nooper1
through Nooper12 operator classes inhibit messages from being
sent to the specified operator.
NOTE
Use the Notify qualifier conservatively. Be sure that
messages regarding a private database are not broadcast
to an entire system or cluster of users who may not be
interested in the broadcast information. Similarly, be
conservative regarding even a clusterwide database. You
do not want to overload the operators with insignificant
messages.
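For example, the following command (the class choices are
illustrative) limits notification to the central operator and the
OPER1 class rather than broadcasting to the entire system or
cluster:
$ RMU/SET AFTER_JOURNAL/NOTIFY=(CENTRAL,OPER1) MF_PERSONNEL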
32.1.4.13 – Overwrite
Overwrite
Nooverwrite
The Overwrite qualifier specifies that .aij files can be
overwritten without first being backed up. The Nooverwrite
qualifier specifies that only an .aij file that has been backed
up can be overwritten. You can specify the Nooverwrite qualifier
only when users are detached from the database. If you do
not specify either the Overwrite qualifier or the Nooverwrite
qualifier, the Overwrite characteristic remains unchanged.
This qualifier is ignored if only one .aij file is available.
When you specify the Overwrite qualifier, it is only activated
when two or more .aij files are, or become, available.
Note that if you use the Overwrite qualifier, you will be unable
to perform a rollforward from a restored backup file. Most users
will not want to use the Overwrite qualifier; it is provided for
layered applications that might want to take advantage of some
performance features provided by Oracle Rdb that require after-
image journaling, but where the use of after-image journaling is
not required for the application to run reliably.
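For example, the following command restores the requirement that
.aij files be backed up before they are overwritten; users must be
detached from the sample mf_personnel database when it is issued:
$ RMU/SET AFTER_JOURNAL/NOOVERWRITE MF_PERSONNEL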
32.1.4.14 – Reserve
Reserve=number-journals
Reserves additional space in the after-image journal
configuration for the specified number of .aij files. You can
specify the Reserve qualifier only when users are detached from
the database. If you do not specify the Reserve qualifier, no
space is reserved for additional .aij files.
Note that you cannot reserve space in a single-file database for
.aij files by using this qualifier with the RMU Set After_Journal
command. After-image journal file reservations for a single-
file database can be made only when you use the RMU Convert, RMU
Restore, or RMU Copy_Database commands.
Note that once you reserve space in the journal configuration
(using the Reserve=n qualifier), the reservations are permanent.
There is no way to unreserve this space unless you back up
and restore the database, specifying fewer reservations with
the After_Journal qualifier of the RMU Restore command.
Each reservation uses two blocks of space in the root file and
the run-time global sections.
When you reserve journal slots to create additional journals
for your journal system, the reserve operation is not journaled.
Therefore, you should perform a full database backup operation to
ensure database consistency.
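For example, the following command reserves two additional journal
slots for the sample mf_personnel database (the count is
illustrative); because the reserve operation is not journaled,
follow it with a full database backup:
$ RMU/SET AFTER_JOURNAL/RESERVE=2 MF_PERSONNEL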
32.1.4.15 – Shutdown Timeout
Shutdown_Timeout=minutes
Modifies the after-image journal shutdown time in the event that
after-image journaling becomes unavailable. The after-image
journaling shutdown time is the period, in minutes, between
the point when after-image journaling becomes unavailable and
the point when the database is shut down. During the after-
image journaling shutdown period, all database update activity
is stalled.
If operator notification has been enabled, operator messages are
broadcast to all enabled operator classes and to the RMU Show
Statistics screen at 1-minute intervals.
To recover from the after-image journaling shutdown state
and to resume normal database operations, you must make an
.aij file available for use. You can do this by backing up an
existing modified journal, or, if you have a journal reservation
available, by adding a new journal to the after-image journaling
configuration. If you do not make a journal available before the
after-image journal shutdown time expires, the database is shut
down and all active database attaches are terminated.
The after-image journaling shutdown period is only in effect when
fixed-size AIJ journaling is used. When a single extensible .aij
file is used, the default action is to shut down all database
operations when the .aij file becomes unavailable.
If you do not specify the Shutdown_Timeout qualifier, the
database shuts down 60 minutes after the after-image journaling
configuration becomes unavailable. The maximum value you can
specify for the Shutdown_Timeout qualifier is 4320 minutes (3
days).
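For example, the following command extends the shutdown period for
the sample mf_personnel database to 120 minutes (the value is
illustrative):
$ RMU/SET AFTER_JOURNAL/SHUTDOWN_TIMEOUT=120 MF_PERSONNEL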
32.1.4.16 – Suppress
Suppress=(Name=name)
Prevents further use of the named after-image journal object. The
named after-image journal object must be an existing after-image
journal object.
This qualifier is useful when you want to temporarily disallow
the use of an .aij file. For example, suppose the disk containing
the next .aij file to use goes off line. You do not want the
database to attempt to access that file until the disk is back on
line. Use the Suppress qualifier so the database does not attempt
to access the specified .aij file. When the disk is back on line,
use the RMU Set After_Journal command with the Alter qualifier
to unsuppress the after-image journal object that references this
.aij file.
You can specify the Suppress qualifier while users are attached
to the database, but the .aij file referenced by the after-image
journal object must not be the current journal or be waiting
to be backed up. You must back up the referenced .aij file
before the after-image journal object that references it can
be suppressed.
The Suppress qualifier is processed prior to any Drop, Add, or
Alter qualifiers specified with the same command.
32.1.4.17 – Switch Journal
Switch_Journal
Changes the currently active .aij file to the next available .aij
file in a fixed-size after-image journaling configuration.
In an extensible journal file configuration, the Switch_Journal
qualifier has no effect and is ignored if specified.
The Switch_Journal qualifier is useful for forcing a switch to an
.aij file on another disk when you want to perform maintenance on
the disk containing the currently active journal file.
You cannot specify the Switch_Journal qualifier and the Enable
or the Disable qualifier on the same command line. In addition,
after-image journaling must be enabled when you issue the Switch_
Journal qualifier.
It is seldom necessary to specify this option because normally a
switch occurs automatically.
32.1.5 – Usage Notes
o You must have the RMU$ALTER, RMU$BACKUP, or RMU$RESTORE
privilege in the root file access control list (ACL) for the
database or the OpenVMS SYSPRV or BYPASS privilege to use the
RMU Set After_Journal command.
o Use the RMU Dump command with the Header qualifier to see if
after-image journaling additions or changes you have made have
been recorded as you expect. However, note that although the
AIJ attributes change as you specify, the changed .aij file
might be flagged as unmodified in the dump of the header. This
occurs because the transaction containing your changes to the
.aij file is captured in the current .aij file, not the .aij
file for which you specified modifications.
o When you use RMU Set After_Journal to specify a fixed-size
journal configuration, specify a different disk for each
.aij file, if possible. Using this method, you can suppress
a journal on a given disk if that disk should start to fail.
o If the disk on which the current .aij file resides fails,
Oracle Rdb immediately starts using a new .aij file if your
journal configuration contains more than one journal. For
example, if AIJ_ONE, the current .aij file, resides on
AIJ_DISK1 and AIJ_TWO resides on AIJ_DISK2, then when
AIJ_DISK1 fails, Oracle Rdb immediately starts using AIJ_TWO.
o Execute a full database backup operation after issuing an RMU
Set After_Journal command that displays the RMU-W-DOFULLBCK
warning message (such as a command that includes the Reserve
or the Enable qualifier).
o Use the Alter qualifier to unsuppress an .aij file that has
been suppressed with the Suppress qualifier.
o Use the Backup=(Quiet_Point) qualifier to specify that the
quiet-point lock must be acquired prior to performing an
.aij backup operation for the specified database. (Use the
Backup=(Noquiet_Point) qualifier to specify that the quiet-
point lock will not be acquired prior to an .aij backup
operation for the specified database.)
o Use the RDM$BIND_ABS_QUIET_POINT logical to specify whether or
not the quiet-point lock must be acquired prior to performing
any .aij backup operation on any database on a cluster.
Define the value for the logical to be 1 to specify that the
quiet-point lock must be acquired prior to performing .aij
backup operations; define the value to be 0 to specify that
the quiet-point lock need not be acquired prior to .aij backup
operations. You must define this logical in the system table
on all nodes in the cluster as shown in the following example:
$ DEFINE/SYSTEM RDM$BIND_ABS_QUIET_POINT 1
o The selection of which journal in a set of fixed-size journal
files is used by Oracle RMU is unpredictable and depends on
availability. For example, while a journal is temporarily
unavailable, it cannot be selected as the next journal file.
Thus, a journal file might be reused before all journals in
the set have been used once.
32.1.6 – Examples
Example 1
The following command reserves space for three .aij files, adds
two .aij files to the mf_personnel database, and then enables
after-image journaling:
$ RMU/SET AFTER_JOURNAL/ENABLE/RESERVE=3 -
_$ /ADD=(NAME=AIJ2, FILE=DISK1:[JOURNAL]AIJ_TWO) -
_$ /ADD=(NAME=AIJ3, FILE=DISK2:[JOURNAL]AIJ_THREE) -
_$ MF_PERSONNEL
%RMU-W-DOFULLBCK, full database backup should be done to
ensure future recovery
Example 2
The following example demonstrates how to switch the current .aij
file from DISK1:[DB]AIJ1 to the next available journal file in a
fixed-size journal configuration, and then suppress the original
journal in anticipation of maintenance on the disk that contains
it. The last Oracle RMU command moves AIJ1 to a new disk and
implicitly unsuppresses it.
$ RMU/DUMP/HEADER=(JOURNAL) MF_PERSONNEL
.
.
.
AIJ Journaling...
- After-image journaling is enabled
- Database is configured for 5 journals
- Reserved journal count is 5
- Available journal count is 3
- Journal switches to next available when full
- 1 journal has been modified with transaction data
- 2 journals can be created while database is active
- Journal "AIJ1" is current
- All journals are accessible
.
.
.
$ RMU/SET AFTER_JOURNAL/SWITCH_JOURNAL MF_PERSONNEL/LOG
%RMU-I-OPERNOTIFY, system operator notification: Oracle Rdb Database
USER1:[DB]MF_PERSONNEL.RDB;1 Event Notification
After-image journal 0 switch-over in progress (to 1)
%RMU-I-OPERNOTIFY, system operator notification: Oracle Rdb Database
USER1:[DB]MF_PERSONNEL.RDB;1 Event Notification
After-image journal switch-over complete
%RMU-I-LOGMODSTR, switching to after-image journal "AIJ2"
.
.
.
$ RMU/BACKUP/AFTER_JOURNAL MF_PERSONNEL DISK1:[DB]AIJ1_BCK/LOG
%RMU-I-AIJBCKBEG, beginning after-image journal backup operation
%RMU-I-OPERNOTIFY, system operator notification: Oracle Rdb Database
USER1:[DB]MF_PERSONNEL.RDB;1 Event Notification
AIJ backup operation started
%RMU-I-AIJBCKSEQ, backing up after-image journal sequence number 2
%RMU-I-LOGBCKAIJ, backing up after-image journal AIJ1 at 10:59:58.83
%RMU-I-LOGCREBCK, created backup file DISK1:[DB]AIJ1_BCK.AIJ;1
%RMU-I-OPERNOTIFY, system operator notification: Oracle Rdb Database
USER1:[DB]MF_PERSONNEL.RDB;1 Event Notification
AIJ backup operation completed
%RMU-I-AIJBCKEND, after-image journal backup operation completed
successfully
%RMU-I-LOGAIJJRN, backed up 1 after-image journal at 11:00:02.59
%RMU-I-LOGAIJBLK, backed up 254 after-image journal blocks
at 11:00:02.59
$ RMU/SET AFTER_JOURNAL/SUPPRESS=(NAME=AIJ1) MF_PERSONNEL/LOG
%RMU-I-LOGMODSTR, suppressed after-image journal "AIJ1"
$ RMU/SET AFTER_JOURNAL MF_PERSONNEL -
_$ /ALTER=(NAME=AIJ1,FILE=DISK2:[DB]AIJ1)/LOG
%RMU-I-LOGMODSTR, unsuppressed after-image journal "AIJ1"
Example 3
The following example turns on the automatic backup server for
.aij files and defines a default backup file name:
$ RMU/SET AFTER_JOURNAL /BACKUPS=(AUTOMATIC, -
_$ BACKUP_FILE=DISK:[AIJ_BACKUPS]AIJ_BACKUP.AIJ) -
_$ DB$DISK:[DIRECTORY]MF_PERSONNEL.RDB
Example 4
The following example turns off the automatic backup server for
.aij files and removes the default backup file name:
$ RMU/SET AFTER_JOURNAL /BACKUPS=(MANUAL,NOBACKUP_FILE) -
_$ DB$DISK:[DIRECTORY]MF_PERSONNEL.RDB
Example 5
The following example changes the .aij backup file name without
changing the setting of the AIJ backup server:
$ RMU/SET AFTER_JOURNAL /BACKUPS= -
_$ (BACKUP_FILE=NEW_DISK:[AIJ_BACKUPS]BETTER_BACKUP_NAME.AIJ) -
_$ DB$DISK:[DIRECTORY]MF_PERSONNEL.RDB
Example 6
The following example sets a local and a global edit string for
.aij backup files. When AIJ_ONE is backed up, it is appended with
the string _LOCAL. When AIJ_TWO or AIJ_THREE are backed up, they
are appended with the string _GLOBAL. Although it is unlikely
that you would select these edit strings, they demonstrate the
behavior of the Edit_Filename keyword when it is used with the
Backup qualifier (global effect) versus the behavior of the Edit_
Filename keyword when it is used with the Add qualifier (local
effect).
$ RMU/SET AFTER_JOURNAL/ENABLE/RESERVE=5 -
_$ /BACKUP=EDIT_FILENAME=("_GLOBAL")/ADD=(NAME=AIJ1, -
_$ FILE=DISK1:[AIJS]AIJ_ONE, -
_$ BACKUP_FILE=AIJ1BCK, -
_$ EDIT_FILENAME=("_LOCAL")) -
_$ /ADD=(NAME=AIJ2, -
_$ FILE=DISK1:[AIJS]AIJ_TWO, -
_$ BACKUP_FILE=AIJ2BCK) -
_$ /ADD=(NAME=AIJ3, -
_$ FILE=DISK1:[AIJS]AIJ_THREE, -
_$ BACKUP_FILE=AIJ3BCK) -
_$ MF_PERSONNEL
$ !
$ ! After these .aij files are backed up:
$ !
$ DIR .AIJ
AIJ1BCK_LOCAL.AIJ;1
AIJ2BCK_GLOBAL.AIJ;1
AIJ3BCK_GLOBAL.AIJ;1
AIJ_ONE.AIJ;1
AIJ_THREE.AIJ;1
AIJ_TWO.AIJ;1
32.2 – AIP
Allows the user to modify the contents of the AIP (Area Inventory
Pages) structure. The AIP structure maps logical areas to
physical areas and describes each of those logical areas.
Information such as the logical area name, the length of the
stored record, and the storage thresholds can be modified using
this simple command interface.
32.2.1 – Description
This RMU command is used to modify some attributes of an existing
logical area. It cannot be used to add or delete a logical area.
This command can be used to correct the record length, thresholds
and name of a logical area described by an AIP entry. It can also
be used to rebuild the SPAM pages for a logical area stored in
UNIFORM page format areas so that threshold settings for a page
correctly reflect the definition of the table.
See also the RMU Repair Spam command for information on
rebuilding SPAM pages for MIXED areas.
32.2.2 – Format
RMU/Set AIP root-file-spec [larea-name]
Command Qualifiers                     Defaults
/Larea=(n [,...])                      See description
/Length[=n]                            See description
/Log                                   See description
/Rebuild_Spams                         See description
/Rename_To=new-name                    See description
/Threshold=(p,q,r)                     See description
32.2.3 – Parameters
32.2.3.1 – root-file-spec
The file specification for the database root file to be
processed. The default file extension is .rdb.
32.2.3.2 – larea-name
An optional parameter that selects logical areas by name. Only
the AIP entries that match this name are processed.
Any partitioned index or table will create multiple logical areas
all sharing the same name. This string may contain standard
OpenVMS wildcard characters (% and *) so that different names
can be matched. Therefore, it is possible for many logical areas
to match this name.
The value of larea-name may be delimited so that mixed case
characters, punctuation and various character sets can be used.
32.2.4 – Command Qualifiers
32.2.4.1 – Larea
Larea=(n [,...])
Specifies a list of logical area identifiers. The LAREA qualifier
and larea-name parameter are mutually exclusive.
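For example, the following command recalculates the nominal record
length for the logical areas with the identifiers shown (the
identifiers are illustrative):
$ RMU/SET AIP/LAREA=(80,81,82)/LENGTH/LOG MF_PERSONNEL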
32.2.4.2 – Length
Length[=value]
Sets the length of the logical area. If no value is provided on
the RMU Set AIP command, then Oracle Rdb will find the matching
table and calculate a revised AIP nominal record length and apply
it to the AIP.
32.2.4.3 – Log
Log
Logs the names and identifiers of logical areas modified by this
command.
32.2.4.4 – Rebuild Spams
Rebuild_Spams
Locates each logical area with the "rebuild-spam" flag set and
rebuilds its SPAM pages.
32.2.4.5 – Rename To
Rename_To=new-name
Used to change the logical area name. This qualifier should be
used with caution as some RMU commands assume a strict mapping
between table/index names and names of the logical area. This
command can be used to repair names that were created in older
versions of Oracle Rdb where the rename table command did not
propagate the change to the AIP. The value of new-name may be
delimited so that mixed case, punctuation and various character
sets can be used.
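For example, the following command renames a logical area (both
names are hypothetical); note that wildcard names are not permitted
with this qualifier:
$ RMU/SET AIP MF_PERSONNEL EMPLOYEES_OLD/RENAME_TO=EMPLOYEES/LOG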
32.2.4.6 – Threshold
Threshold=(t1 [,t2 [, t3]])
Changes the threshold on all logical areas specified using
the Larea qualifier or the larea-name parameter. RMU accepts
THRESHOLD=(0,0,0) as a valid setting to disable logical area
thresholds. Values must be in the range 0 through 100. Any
missing values default to 100.
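For example, the following command sets illustrative threshold
values for the EMPLOYEES logical areas, assuming they are stored in
UNIFORM page format areas; rebuild the SPAM pages afterward so the
new values take effect:
$ RMU/SET AIP MF_PERSONNEL EMPLOYEES/THRESHOLD=(70,80,90)/LOG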
32.2.5 – Usage Notes
o The database administrator requires RMU$ALTER privilege to run
the command and the Rdb server also requires SELECT and ALTER
privilege on the database.
o This command supersedes the RMU Repair Initialize=Larea_
Parameters command that can also change the Thresholds and
Length for a logical area. This command can be executed
online, whereas the RMU Repair command must be run offline.
o Wildcard names are not permitted with the following
qualifiers, to prevent accidental propagation of values to
the wrong database objects:
- the Length qualifier with a value specified
- the Rename_To qualifier
- the Threshold qualifier
o RMU Set AIP may be used on a master database configured for
HOT STANDBY. All AIP changes and SPAM rebuild actions are
written to the after image journal and will be applied to the
standby database. This command cannot be applied to a STANDBY
database.
o THRESHOLDS for MIXED format areas are physical area attributes
and are not supported at the logical area (that is, AIP)
level. Therefore, THRESHOLDS cannot be applied to MIXED areas,
and specifying such logical areas causes an exception to be
raised.
o The REBUILD_SPAMS qualifier is only applied to logical areas
stored in UNIFORM page format storage areas.
o This command will implicitly commit any changes with no
opportunity to undo them using rollback. Access to the
functionality is controlled by privileges at the RMU and Rdb
database level. We suggest that RMU Show AIP be used prior to
any change so that you can compare the results and repeat the
RMU Set AIP command with corrections if necessary.
Some wildcard operations are restricted to prevent accidental
damage to the database. For instance, a wildcard that matches
more than one type of object (such as both tables and indexes)
causes the command to be rejected.
o This command is an online command. Each logical area is
processed within a single transaction and can interact with
other online users.
o When the AIP entry is changed online, any existing users of
the table or index will start to use the new values if the
logical areas are reloaded.
o Various SQL alter commands will register changes for the AIP
and these are applied at COMMIT time. RMU Verify and RMU Show
AIP Option=REBUILD_SPAMS will report any logical areas that
require SPAM rebuilding. The database administrator can also
examine the output from the RMU Dump Larea=RDB$AIP command.
o How long can the SPAM rebuild be delayed? The fullness of
some pages will have been calculated using the old AIP length
or THRESHOLD values. Therefore, it might appear that a page
is full when in fact the revised length will fit on the
page, or the page may appear to have sufficient free space
to store a row but once accessed the space is not available.
By rebuilding SPAM pages, you may reduce I/O during insert
operations. However, delaying the rebuild to a convenient time
will not affect the integrity of the database.
o The amount of I/O required for Rebuild_Spams depends upon
the number of pages allocated to the table or index involved.
Assuming just one logical area is selected then Oracle Rdb
will read the ABM (Area Bitmap) to locate all SPAM pages in
that area that reference this logical area. Rdb will then
read each page in the SPAM interval for that SPAM page and
recalculate the fullness based on the rows stored on each
page.
32.2.6 – Examples
Example 1
RMU will call Rdb for each logical area that requires rebuilding.
$ RMU/SET AIP/REBUILD_SPAMS MF_PERSONNEL
%RMU-I-AIPSELMOD, Logical area id 86, name ACCOUNT_AUDIT selected for
modification
%RMU-I-AIPSELMOD, Logical area id 94, name DEPARTMENTS_INDEX selected for
modification
Example 2
RMU will request that the EMPLOYEES table length be updated
in the AIP. Oracle Rdb will use the latest table layout to
calculate the length in the AIP and write this back to the AIP.
The EMPLOYEES table is partitioned across three storage areas and
therefore the Log qualifier shows these three logical areas being
updated.
$ RMU/SET AIP MF_PERSONNEL EMPLOYEES/LENGTH/LOG
%RMU-I-AIPSELMOD, Logical area id 80, name EMPLOYEES selected for modification
%RMU-I-AIPSELMOD, Logical area id 81, name EMPLOYEES selected for modification
%RMU-I-AIPSELMOD, Logical area id 82, name EMPLOYEES selected for modification
Example 3
RMU will request that the EMPLOYEES table length be updated
in the AIP and then the SPAM pages will be rebuilt. This is an
ONLINE operation. Note: there is an implied relationship between
the logical area name and the name of the object. This example
assumes that the EMPLOYEES object is mapped to a UNIFORM page
format area.
$ RMU/SET AIP MF_PERSONNEL EMPLOYEES/LENGTH/REBUILD_SPAMS
Example 4
When Thresholds for an index are modified they will not be
effective until the SPAM pages are updated (rebuilt) to use these
new values. The following example shows index maintenance
performed by SQL. The SET FLAGS command is used to display
information about the change. Note that the change is applied at
COMMIT time and that the SPAM rebuild is deferred until a later
time. RMU Set AIP is then used to rebuild the SPAM pages.
$ SQL$
SQL> set flags 'index_stats';
SQL> alter index candidates_sorted store in rdb$system (thresholds are (32,56,
77));
~Ai alter index "CANDIDATES_SORTED" (hashed=0, ordered=0)
~Ai larea length is 215
~As locking table "CANDIDATES" (PR -> PU)
~Ai: reads: async 0 synch 58, writes: async 8 synch 0
SQL> commit;
%RDMS-I-LOGMODVAL, modified space management thresholds to (32%, 56%, 77%)
%RDMS-W-REBUILDSPAMS, SPAM pages should be rebuilt for logical area
CANDIDATES_SORTED
$
$ RMU/SET AIP MF_PERSONNEL CANDIDATES_SORTED/REBUILD_SPAMS/LOG
%RMU-I-AIPSELMOD, Logical area id 74, name CANDIDATES_SORTED selected for
modification
32.3 – Audit
Enables Oracle Rdb security auditing. When security auditing is
enabled, Oracle Rdb sends security alarm messages to terminals
that have been enabled as security operators and makes entries
in the database's security audit journal whenever specified audit
events are detected.
32.3.1 – Description
The RMU Set Audit command is the Oracle Rdb equivalent to the
DCL SET AUDIT command. Because Oracle Rdb security auditing uses
many OpenVMS system-level auditing mechanisms, certain auditing
characteristics (such as /FAILURE_MODE) can only be set and
modified by using the DCL SET AUDIT command, which requires the
OpenVMS SECURITY privilege.
32.3.2 – Format
RMU/Set Audit root-file-spec
Command Qualifiers                     Defaults
/Disable=enable-disable-options        See description
/Enable=enable-disable-options         See description
/[No]Every                             /Every
/First                                 Synonym for /Noevery
/[No]Flush                             /Noflush
/Start                                 See description
/Stop                                  See description
/Type={Alarm|Audit}                    Alarm and Audit
32.3.3 – Parameters
32.3.3.1 – root-file-spec
The file specification of the database root for which auditing
information will be modified.
32.3.4 – Command Qualifiers
32.3.4.1 – Disable
Disable=enable-disable-options
Disables security auditing for the specified audit event classes.
To disable alarms and audits for all classes, specify the All
option. You can also selectively disable alarms and audits for
one or more classes that are currently enabled. You must specify
at least one class when you specify the Disable qualifier. See
the Enable qualifier description for a list of the classes you
can specify with the Disable qualifier.
When you specify audit classes with the Disable qualifier, the
events you specify are immediately disabled. For other audit
events that have not been explicitly disabled with the Disable
qualifier, records continue to be recorded in the security
audit journal and alarms continue to be sent to security-enabled
terminals, as specified.
When processing the RMU Set Audit command, Oracle Rdb processes
the Disable qualifier last. If you accidentally specify both
Enable and Disable for the same event type in the same command,
the Disable qualifier prevails.
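For example, the following command disables both alarms and audits
for all event classes on the sample mf_personnel database:
$ RMU/SET AUDIT/DISABLE=ALL MF_PERSONNEL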
32.3.4.2 – Enable
Enable=enable-disable-options
Enables security auditing for the specified audit event classes.
To enable alarms and audits for all events, specify the All
option. You can also selectively enable alarms and audits for
one or more classes that are currently disabled. You must specify
at least one class when you specify the Enable qualifier.
When you specify audit classes with the Enable qualifier, the
audit events you specify are immediately enabled, so that audit
events of currently attached users are recorded in the security
audit journal and alarms are sent to security-enabled terminals,
as specified.
With the Enable and Disable qualifiers, you can specify one or
more of the following six valid class options: All, Daccess,
Daccess=object-type, Identifier=(identifier-list), Protection,
and Rmu. If you specify more than one class, separate the classes
with commas, and enclose the list of classes within parentheses.
The following list provides a description of each option:
o All
Enables or disables all possible audit event classes.
o Daccess
Enables or disables DACCESS (discretionary access) audit
events.
A DACCESS audit event occurs whenever a user issues a command
that causes a check to be made for the existence of the
appropriate privilege in an access privilege set (APS). To
monitor access to a particular database object or group of
objects, use the Daccess=object-type option to specify that a
DACCESS audit record be produced whenever an attempt is made
to access the object.
Specifying the general Daccess option enables or disables the
general DACCESS audit event type. If DACCESS event auditing is
enabled and started for specific objects, auditing takes place
immediately after you issue the RMU Set Audit command with
the Enable=Daccess qualifier. Auditing starts for any users
specified in the Identifier=(identifier-list) option who are
attached to the database when the command is issued.
o Daccess=object-type[=(object name)]/Privileges=(privilege-
list)
Allows you to audit access to database objects by users in the
Identifier=(identifier-list) option with the privileges you
specify.
A DACCESS type event record indicates the command issued, the
privilege used by the process issuing the command, and whether
the attempt to access the object was successful.
The object-type option enables or disables DACCESS auditing
for the specified object type. You can specify one or more
object types in an RMU Set Audit command. The three valid
object types are:
- DATABASE
When you specify the DATABASE object type, you must use the
Privileges qualifier to specify one or more privileges to
be audited for the database. Do not specify an object name
with the DATABASE object type.
- TABLE
Specify the TABLE option for both tables and views. When
you specify the TABLE object type, you must specify one or
more table names with the object name parameter. You must
also use the Privileges qualifier to specify one or more
privileges to be audited for the specified tables.
- COLUMN
When you specify the COLUMN object type, you must specify
one or more column names with the object name parameter.
Specify the table name that contains the column by using
the following syntax:
table-name.column-name
If you specify more than one column, separate the list
of table-name.column-names with commas, and enclose the
list within parentheses. You must also use the Privileges
qualifier to specify one or more privileges to be audited
for the specified columns.
The object name parameter enables or disables DACCESS auditing
for the specified object or objects. If you specify more than
one object name, separate the object names with commas, and
enclose the list of object names within parentheses.
If you specify one or more object names, you must select one
or more privileges to audit. Use the Privileges=privilege-list
qualifier to select the privileges that are to be audited for
each of the objects in the object name list when the selected
objects are accessed. The privileges that can be specified
with the Privileges qualifier are listed in DACCESS Privileges
for Database Objects.
The privilege names SUCCESS and FAILURE can be used as a
convenient way to specify that all successful or failed
accesses to that object, for all privileges, should be audited.
The privilege name ALL can be used with the Enable or Disable
qualifier to turn auditing on or off for all privileges
applicable to the object.
If you specify a privilege that does not apply to an object,
Oracle Rdb allows it, but will not produce any auditing for
that privilege. You can specify only SQL privileges with the
Privileges=(privilege-list) qualifier. The privileges that
can be specified for each Oracle Rdb object type are shown
in DACCESS Privileges for Database Objects. The Relational
Database Operator (RDO) privileges that correspond to
the SQL privileges are included in DACCESS Privileges for
Database Objects to help RDO users select the appropriate SQL
privileges for auditing.
Table 13 DACCESS Privileges for Database Objects
SQL          RDO
Privilege    Privilege      Database  Table/View  Column
ALTER CHANGE Y Y N
CREATE DEFINE Y Y N
DBADM ADMINISTRATOR Y N N
DBCTRL CONTROL Y Y N
DELETE ERASE N Y N
DISTRIBTRAN DISTRIBTRAN Y N N
DROP DELETE Y Y N
INSERT WRITE N Y N
REFERENCES REFERENCES N Y Y
SECURITY SECURITY Y N N
SELECT READ Y Y N
UPDATE MODIFY N Y Y
SUCCESS SUCCESS Y Y Y
FAILURE FAILURE Y Y Y
ALL ALL Y Y Y
o Identifier=(identifier-list)
Enables or disables auditing of user access to objects listed
in the Enable=Daccess=object-type qualifier. If you do not
specify this option, no users are audited for the DACCESS
event. Any user whose identifier you specify is audited for
accessing the database objects with the privileges specified.
You can specify wildcard characters within the identifiers
to identify groups of users. The [*,*] identifier indicates
public, and causes all users to be audited. If you specify a
nonexistent identifier, you receive an error message.
The order of identifiers in the identifier list is not
significant. A user is audited if he or she holds any of the
identifiers specified in the identifier list.
You can specify user identification code (UIC) identifiers,
general identifiers, and system-defined identifiers in the
identifier list. For more information on identifiers, see the
Oracle Rdb Guide to Database Design and Definition.
If you specify more than one identifier, separate the
identifiers with commas, and enclose the identifier list
within parentheses. UIC identifiers with commas such as
[RDB,JONES] must be enclosed within quotation marks as
follows:
IDENTIFIER=(INTERACTIVE,"[RDB,JONES]",SECRETARIES)
When you use Identifier=(identifier-list) to specify one or
more identifiers to be audited, those identifiers are audited
whenever they access any object for which auditing has been
enabled.
o Protection
Allows you to audit changes made to access privilege sets
for database objects by means of the SQL GRANT and REVOKE
statements.
o Rmu
Audits the use of Oracle RMU commands by users with the
privilege to use them.
32.3.4.3 – Every
Noevery
Sets the granularity of DACCESS event auditing for the database.
When you specify the Every qualifier, every access check
for the specified objects using the specified privilege or
privileges during a database attachment is audited. When you
specify the Noevery qualifier, each user's first access check
for the specified audit objects using the specified privilege
or privileges during a database attachment is audited. The
First qualifier is a synonym for the Noevery qualifier; the two
qualifiers can be used interchangeably.
The default is the Every qualifier.
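For example, to audit only each user's first access check for
each privilege during a database attachment, you could enter a
command such as the following (shown against the sample
MF_PERSONNEL database):
$ RMU/SET AUDIT/NOEVERY MF_PERSONNEL
To return to the default behavior of auditing every access
check:
$ RMU/SET AUDIT/EVERY MF_PERSONNEL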
32.3.4.4 – First
Specifies that when DACCESS event auditing is enabled, each
user's first access check for the specified audit objects
using the specified privilege or privileges during a database
attachment is audited. The First qualifier is a synonym
for the Noevery qualifier; the two qualifiers can be used
interchangeably.
32.3.4.5 – Flush
Noflush
Specifies whether forced writes of audit journal records are
enabled for the database. Forced writes cause Oracle Rdb to
write (flush) each audit journal record to disk immediately
when the record is produced, rather than waiting for the audit
server to flush audit records at a specified interval.
The default is the Noflush qualifier, which flushes audit
records at the specified interval. To specify the interval, use
the DCL command SET AUDIT/INTERVAL=JOURNAL_FLUSH=time.
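For example, the following commands (a sketch against the
sample MF_PERSONNEL database) enable forced writes of audit
records, and then restore the default interval-based flushing:
$ RMU/SET AUDIT/FLUSH MF_PERSONNEL
$ RMU/SET AUDIT/NOFLUSH MF_PERSONNEL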
32.3.4.6 – Start
Starts Oracle Rdb security auditing for the database. The Start
qualifier by itself starts both security alarms and security
audit journal records. Also, you can supply the Type=Alarm
qualifier or the Type=Audit qualifier to start security alarms
only or security audit journaling only.
When you specify the Start qualifier, auditing starts immediately
for all audit event classes that are currently enabled. Any
subsequent audit events of currently attached users are recorded
in the security audit journal, or alarms are sent to security-
enabled terminals, or both, depending on what you have specified
for your database.
32.3.4.7 – Stop
Stops Oracle Rdb security auditing for the database. The Stop
qualifier by itself stops both security alarms and security audit
journal records. Also, you can supply the Type=Alarm qualifier or
the Type=Audit qualifier to stop security alarms only or security
audit journaling only.
When you specify the Stop qualifier, the alarms or audits
(or both) of all audit event classes are immediately stopped
(depending on whether you specified the Type=Alarm qualifier,
the Type=Audit qualifier, or neither). The audit event classes
previously specified with the Enable qualifier remain enabled,
and you can start them again by using the Start qualifier.
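For example, to stop security alarms only, leaving audit
journaling and all enabled event classes unchanged (a sketch
against the sample MF_PERSONNEL database):
$ RMU/SET AUDIT/STOP/TYPE=ALARM MF_PERSONNEL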
32.3.4.8 – Type
Type=option
Specifies that security alarms or security audit journal records
(or both) be enabled or disabled. The following options are
available with the Type qualifier:
o Alarm
Causes subsequent qualifiers in the command line (Start, Stop,
Enable, and Disable) to generate or affect security alarm
messages that are sent to all terminals enabled as security
operator terminals.
o Audit
Causes subsequent qualifiers in the command line (Start,
Stop, Enable, and Disable) to generate or affect security
audit journal records that are recorded in the security audit
journal file.
If you do not specify the Type qualifier with the RMU Set
Audit command, Oracle RMU enables or disables both security
alarms and security audit journal records.
32.3.5 – Usage Notes
o To use the RMU Set Audit command for a database, you must
have the RMU$SECURITY privilege in the root file ACL for the
database or the OpenVMS SECURITY or BYPASS privilege.
o Audit journal records collected on a database can be stored
only in the database from which they were collected. The
database name specified with the RMU Load command with the
Audit qualifier identifies to Oracle Rdb both the audit
records to be loaded and the database into which they are
to be loaded.
o There is very little overhead associated with security
auditing; no extra disk I/O is involved. Therefore, you need
not be concerned about the impact to database performance
should you decide to enable security auditing.
o You can use the Daccess=object-type option to enable DACCESS
checking for specific objects, but the general DACCESS class
is not enabled until you explicitly enable it by using the
Enable=Daccess qualifier with the RMU Set Audit command.
Also, you need to use the Start qualifier with the RMU Set
Audit command to start the auditing and alarms that have been
enabled.
o Alarms are useful for real-time tracking of auditing
information. At the moment an alarm occurs, text messages
regarding the alarm are displayed on security-enabled
terminals.
To enable a terminal to receive Oracle Rdb security alarms,
enter the DCL REPLY/ENABLE=SECURITY command. You must have
both the OpenVMS SECURITY and OpenVMS OPER privileges to use
the REPLY/ENABLE=SECURITY command.
o Audit records are useful for periodic reviews of security
events. Audit records are stored in a security audit journal
file, and can be reviewed after they have been loaded into
a database table with the RMU Load command with the Audit
qualifier. Use the DCL SHOW AUDIT/JOURNAL command to determine
the security audit journal file being used by your database.
o The AUDIT class is always enabled for both alarms and audit
records, but does not produce any alarms or audit records until
auditing is started. The AUDIT class cannot be disabled.
o When you specify the Daccess=object-type option and
one or more other options in an options list, the
Privileges=(privilege-list) qualifier must begin after the
closing parenthesis for the options list.
o To display the results of an RMU Set Audit command, enter the
RMU Show Audit command.
o You can use the Disable and Enable qualifiers with indirect
file references. See the Indirect-Command-Files help entry for
more information.
o When the RMU Set Audit command is issued for a closed
database, the command executes without other users being able
to attach to the database.
32.3.6 – Examples
Example 1
In the following example, the first command enables alarms
for the RMU and PROTECTION classes. The second command shows
that alarms for the RMU and PROTECTION classes are enabled but
not yet started. The AUDIT class is always enabled and cannot
be disabled. The third command starts alarms for the RMU and
PROTECTION classes. The fourth command shows that alarms for the
RMU and PROTECTION classes are enabled and started.
$ ! Enable alarms for RMU and PROTECTION classes:
$ RMU/SET AUDIT/TYPE=ALARM/ENABLE=(RMU,PROTECTION) MF_PERSONNEL
$ !
$ ! Show that alarms are enabled, but not yet started:
$ RMU/SHOW AUDIT/ALL MF_PERSONNEL
Security auditing STOPPED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
DACCESS (disabled)
Security alarms STOPPED for:
PROTECTION (enabled)
RMU (enabled)
AUDIT (enabled)
DACCESS (disabled)
Audit flush is disabled
Audit every access
Enabled identifiers:
None
$ ! Start alarms for the enabled RMU and PROTECTION classes:
$ RMU/SET AUDIT/START/TYPE=ALARM MF_PERSONNEL
$ !
$ ! Show that alarms are started for the RMU and PROTECTION classes:
$ RMU/SHOW AUDIT/ALL MF_PERSONNEL
Security auditing STOPPED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
DACCESS (disabled)
Security alarms STARTED for:
PROTECTION (enabled)
RMU (enabled)
AUDIT (enabled)
DACCESS (disabled)
Audit flush is disabled
Audit every access
Enabled identifiers:
None
Example 2
In this example, the first command shows that alarms are started
and enabled for the RMU class. The second command disables alarms
for the RMU class. The third command shows that alarms for RMU
class are disabled.
$ ! Show that alarms are enabled and started for the RMU class:
$ RMU/SHOW AUDIT/ALL MF_PERSONNEL
Security auditing STOPPED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
DACCESS (disabled)
Security alarms STARTED for:
PROTECTION (disabled)
RMU (enabled)
AUDIT (enabled)
DACCESS (disabled)
Audit flush is disabled
Audit every access
Enabled identifiers:
None
$ ! Disable alarms for the RMU class:
$ RMU/SET AUDIT/TYPE=ALARM/DISABLE=RMU MF_PERSONNEL
$ !
$ ! Show that alarms are disabled for the RMU class:
$ RMU/SHOW AUDIT/ALL MF_PERSONNEL
Security auditing STOPPED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
DACCESS (disabled)
Security alarms STARTED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
DACCESS (disabled)
Audit flush is disabled
Audit every access
Enabled identifiers:
None
Example 3
In this example, the first command enables auditing for users
with the [SQL,USER1] and [RDB,USER2] identifiers. The second
command shows the enabled identifiers. The third command enables
DACCESS checks requiring SELECT and INSERT privileges for the
EMPLOYEES and COLLEGES tables. The fourth command displays the
DACCESS checks that have been specified for the COLLEGES and
EMPLOYEES tables. Note that because the general DACCESS type has
not been enabled, DACCESS for the EMPLOYEES and COLLEGES tables
is displayed as disabled.
$ ! Enable auditing for users with the [SQL,USER1] and
$ ! [RDB,USER2] identifiers:
$ RMU/SET AUDIT/ENABLE=IDENTIFIER=("[SQL,USER1]","[RDB,USER2]") -
_$ MF_PERSONNEL
$ !
$ ! Show that [SQL,USER1] and [RDB,USER2] are enabled identifiers:
$ RMU/SHOW AUDIT/ALL MF_PERSONNEL
Security auditing STOPPED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
DACCESS (disabled)
Security alarms STOPPED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
DACCESS (disabled)
Audit flush is disabled
Audit every access
Enabled identifiers:
(IDENTIFIER=[SQL,USER1])
(IDENTIFIER=[RDB,USER2])
$ ! Enable and start DACCESS checks for the SELECT and INSERT
$ ! privileges for the COLLEGES and EMPLOYEES tables:
$ RMU/SET AUDIT/ENABLE=DACCESS=TABLE=(COLLEGES,EMPLOYEES) -
_$ /PRIVILEGES=(SELECT,INSERT)/START MF_PERSONNEL
$ !
$ ! Display the DACCESS checks that are enabled and
$ ! started for the COLLEGES and EMPLOYEES tables:
$ RMU/SHOW AUDIT/DACCESS=TABLE MF_PERSONNEL
Security auditing STARTED for:
DACCESS (disabled)
TABLE : EMPLOYEES
(SELECT,INSERT)
TABLE : COLLEGES
(SELECT,INSERT)
Security alarms STARTED for:
DACCESS (disabled)
TABLE : EMPLOYEES
(SELECT,INSERT)
TABLE : COLLEGES
(SELECT,INSERT)
Example 4
In this example, the first command enables auditing of the JOBS
and EMPLOYEES tables for DACCESS checks for users with the
[SQL,USER1] or BATCH identifier. The Privileges=All qualifier
specifies that auditing will be produced for every privilege.
The second command shows that auditing is enabled for users
with the [SQL,USER1] or BATCH identifier. The third command
shows that DACCESS checking for the JOBS and EMPLOYEES tables
for all privileges is specified. The fourth command enables the
general DACCESS class. The fifth command's output shows that the
general DACCESS class is now enabled. The sixth command starts
the auditing that is enabled, and the seventh command shows that
the enabled auditing is started.
$ ! Enable DACCESS checks for users with the [SQL,USER1] or
$ ! BATCH identifier for the JOBS and EMPLOYEES tables:
$ RMU/SET AUDIT/TYPE=AUDIT -
_$ /ENABLE=(IDENTIFIER=("[SQL,USER1]",BATCH), -
_$ DACCESS=TABLE=(JOBS,EMPLOYEES)) /PRIVILEGES=ALL MF_PERSONNEL
$ !
$ ! Show that auditing is enabled for users with the [SQL,USER1]
$ ! or BATCH identifiers:
$ RMU/SHOW AUDIT/ALL MF_PERSONNEL
Security auditing STOPPED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
DACCESS (disabled)
Security alarms STOPPED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
DACCESS (disabled)
Audit flush is disabled
Audit every access
Enabled identifiers:
(IDENTIFIER=[SQL,USER1])
(IDENTIFIER=BATCH)
$ ! Show that DACCESS checking for all privileges for the
$ ! JOBS and EMPLOYEES tables is enabled:
$ RMU/SHOW AUDIT/DACCESS=TABLE MF_PERSONNEL
Security auditing STOPPED for:
DACCESS (disabled)
TABLE : EMPLOYEES
(ALL)
TABLE : JOBS
(ALL)
Security alarms STOPPED for:
DACCESS (disabled)
$ ! Enable the general DACCESS class:
$ RMU/SET AUDIT/ENABLE=DACCESS MF_PERSONNEL
$ !
$ ! Show that the general DACCESS class is enabled:
$ RMU/SHOW AUDIT/DACCESS=TABLE MF_PERSONNEL
Security auditing STOPPED for:
DACCESS (enabled)
TABLE : EMPLOYEES
(ALL)
TABLE : JOBS
(ALL)
Security alarms STOPPED for:
DACCESS (enabled)
$ ! Start the auditing that is enabled:
$ RMU/SET AUDIT/START MF_PERSONNEL
$ !
$ ! Show that the enabled auditing is started:
$ RMU/SHOW AUDIT/ALL MF_PERSONNEL
Security auditing STARTED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
DACCESS (enabled)
Security alarms STARTED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
DACCESS (enabled)
Audit flush is disabled
Audit every access
Enabled identifiers:
(IDENTIFIER=[SQL,USER1])
(IDENTIFIER=BATCH)
Example 5
In this example, the first command enables DACCESS checks
requiring the INSERT privilege for the mf_personnel database,
for the EMPLOYEES table, and for the EMPLOYEE_ID column of the
EMPLOYEES table. The second command shows that the DACCESS check
for the INSERT privilege is enabled for the specified objects.
$ ! Enable a DACCESS check for the INSERT privilege for the
$ ! MF_PERSONNEL database, EMPLOYEES table, and EMPLOYEE_ID
$ ! column of the EMPLOYEES table:
$ RMU/SET AUDIT -
_$ /ENABLE=DACCESS=(DATABASE,TABLE=EMPLOYEES, -
_$ COLUMN=EMPLOYEES.EMPLOYEE_ID) -
_$ /PRIVILEGES=(INSERT) MF_PERSONNEL
$ !
$ ! Show that the DACCESS check for the INSERT privilege is
$ ! enabled for the specified objects. (The general DACCESS
$ ! class remains disabled until you issue an
$ ! RMU/SET AUDIT/ENABLE=Daccess command without specifying
$ ! any object-type parameter to the Daccess option.
$ ! See the fourth Oracle RMU command in Example 4.)
$ !
$ RMU/SHOW AUDIT/DACCESS=(DATABASE,TABLE,COLUMN) MF_PERSONNEL
Security auditing STOPPED for:
DACCESS (disabled)
DATABASE
(INSERT)
TABLE : EMPLOYEES
(INSERT)
COLUMN : EMPLOYEES.EMPLOYEE_ID
(INSERT)
Security alarms STOPPED for:
DACCESS (disabled)
DATABASE
(INSERT)
TABLE : EMPLOYEES
(INSERT)
COLUMN : EMPLOYEES.EMPLOYEE_ID
(INSERT)
Example 6
In this example, the first command enables a DACCESS check
requiring the INSERT privilege for the EMPLOYEES and COLLEGES
tables, as well as for the EMPLOYEE_ID and LAST_NAME columns of
the EMPLOYEES table and the COLLEGE_CODE column of the COLLEGES
table in the mf_personnel database. The second command shows that
the DACCESS check for the INSERT privilege is enabled for the
specified objects.
$ ! Enable a DACCESS check for the INSERT privilege for the
$ ! EMPLOYEES and COLLEGES table, the LAST_NAME and EMPLOYEE_ID
$ ! column of the EMPLOYEES table, and the COLLEGE_CODE column
$ ! of the COLLEGES table:
$ RMU/SET AUDIT -
_$ /ENABLE=DACCESS=(TABLE=(EMPLOYEES,COLLEGES), -
_$ COLUMN=(EMPLOYEES.EMPLOYEE_ID, -
_$ EMPLOYEES.LAST_NAME, -
_$ COLLEGES.COLLEGE_CODE)) -
_$ /PRIVILEGES=(INSERT) MF_PERSONNEL
$ !
$ ! Show that the DACCESS check for the INSERT privilege is
$ ! enabled for the specified objects. (The general DACCESS
$ ! class remains disabled until you issue an
$ ! RMU/SET AUDIT/ENABLE=Daccess command without specifying
$ ! any object-type parameter to the Daccess option.
$ ! See the fourth Oracle RMU command in Example 4.)
$ !
$ RMU/SHOW AUDIT/DACCESS=(DATABASE,TABLE,COLUMN) MF_PERSONNEL
Security auditing STOPPED for:
DACCESS (disabled)
DATABASE
(NONE)
TABLE : COLLEGES
(INSERT)
TABLE : EMPLOYEES
(INSERT)
COLUMN : COLLEGES.COLLEGE_CODE
(INSERT)
COLUMN : EMPLOYEES.EMPLOYEE_ID
(INSERT)
COLUMN : EMPLOYEES.LAST_NAME
(INSERT)
Security alarms STOPPED for:
DACCESS (disabled)
DATABASE
(NONE)
TABLE : COLLEGES
(INSERT)
TABLE : EMPLOYEES
(INSERT)
COLUMN : COLLEGES.COLLEGE_CODE
(INSERT)
COLUMN : EMPLOYEES.EMPLOYEE_ID
(INSERT)
COLUMN : EMPLOYEES.LAST_NAME
(INSERT)
32.4 – Buffer Object
Controls, on a per-database basis, which database objects use
the OpenVMS Fast I/O and Buffer Objects features.
32.4.1 – Description
Use the RMU Set Buffer_Object command to control, on a per-
database basis, which database objects use the OpenVMS Fast I/O
and Buffer Objects features.
32.4.2 – Format
RMU/Set Buffer_Object root-file-spec

Command Qualifiers                        Defaults

/Disable=enable-disable-options           See description
/Enable=enable-disable-options            See description
/[No]Log                                  Current DCL verify value
32.4.3 – Parameters
32.4.3.1 – root-file-spec
The root file specification of the database. The default file
extension is .rdb.
32.4.4 – Command Qualifiers
32.4.4.1 – Disable
Disable=enable-disable-options
Disables buffer objects for the specified Oracle Rdb buffers.
You can specify one or more of the following buffer objects:
Page, AIJ, RUJ, and Root. Refer to Buffer Object Control for more
information about these keywords. If you specify more than one
object, separate the objects with commas, and enclose the list of
objects within parentheses.
32.4.4.2 – Enable
Enable=enable-disable-options
Enables buffer objects for the specified Oracle Rdb buffers.
You can specify one or more of the following buffer objects:
Page, AIJ, RUJ, and Root. Refer to Buffer Object Control for more
information about these keywords. If you specify more than one
object, separate the objects with commas, and enclose the list of
objects within parentheses.
If you specify the Enable and Disable qualifiers for the same
buffer object, the Enable option prevails and the buffer object
state is enabled for the specified object type.
Table 14 Buffer Object Control
Object        Keyword   Logical Name
Data pages    PAGE      RDM$BIND_PAGE_BUFOBJ_ENABLED
AIJ output    AIJ       RDM$BIND_AIJ_BUFOBJ_ENABLED
RUJ           RUJ       RDM$BIND_RUJ_BUFOBJ_ENABLED
Root file     ROOT      RDM$BIND_ROOT_BUFOBJ_ENABLED
NOTE
If a logical name is defined as "1", then the corresponding
buffer will be created as an OpenVMS buffer object.
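For example, to request that RUJ buffers be created as OpenVMS
buffer objects by means of the corresponding logical name, you
might define it systemwide as follows (this sketch assumes you
hold sufficient privilege to define system logical names):
$ DEFINE/SYSTEM RDM$BIND_RUJ_BUFOBJ_ENABLED 1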
32.4.4.3 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request log output and
the Nolog qualifier to prevent it. If you specify neither, the
default is the current setting of the DCL verify switch.
32.4.5 – Usage Notes
o The Enable and Disable qualifiers are mutually exclusive.
o The RMU Set Buffer_Object command requires exclusive database
access; that is, the database cannot be open or be accessed by
other users.
o Buffer objects are memory resident and thus reduce the amount
of physical memory available to OpenVMS for other uses. Use
of buffer objects requires that the user be granted the
VMS$BUFFER_OBJECT_USER rights identifier. The system parameter
MAXBOBMEM must be large enough to allow all buffer objects for
all users to be created. For further information about Fast
I/O, consult the OpenVMS documentation.
32.4.6 – Example
The following example demonstrates enabling ROOT buffer objects
and disabling PAGE buffer objects. The RMU /DUMP /HEADER command
is used to validate the change.
$ RMU/SET BUFFER_OBJECT/ENABLE=(ROOT)/DISABLE=(PAGE) MF_PERSONNEL
%RMU-I-MODIFIED, Buffer objects state modified
%RMU-W-DOFULLBCK, full database backup should be done to ensure futur
$ RMU/DUMP/HEAD MF_PERSONNEL
.
.
.
- OpenVMS Alpha Buffer Objects are enabled for
Root I/O Buffers
32.5 – Corrupt Pages
Allows you to set pages, storage areas, and snapshot files
as either corrupt or consistent in the corrupt page table
(CPT). A corrupt page is one that contains meaningless data; an
inconsistent page is one that contains old data (data that is not
at the same transaction level as the database root file). Corrupt
pages are logged to the CPT, which is maintained in the database
root file. When the CPT becomes full (due to a large number of
pages being logged), the area containing the most corrupt pages
is marked as corrupt and the individual corrupt pages for that
area are removed from the corrupt page table. The Oracle RMU Set
Corrupt_Pages operation is an offline operation.
Resetting a page or storage area in the CPT to consistent does
not remove any true corruption or inconsistencies. However,
if you reset a snapshot file in the CPT to consistent, Oracle
RMU initializes the snapshot file and thus removes any true
corruption or inconsistency.
CAUTION
Use the RMU Set Corrupt_Pages command only after you
fully understand the internal data structure and know the
information the database should contain. Setting a page
in a storage area that is truly corrupt or inconsistent to
consistent does not remove the corruption or inconsistency.
Setting truly corrupt or inconsistent pages in a storage
area to consistent and continuing to access those pages can
result in unrecoverable corruptions to the database.
The RMU Restore and RMU Recover commands should be used
first and should be part of your normal operating procedure.
NOTE
This command replaces the RdbALTER statements MAKE
CONSISTENT and UNCORRUPT, both of which are deprecated and
may be removed in future versions.
When a storage area is restored from backup files on a by-area
basis, it does not reflect data that has been updated since
the backup operation. The transaction level of the restored
area reflects the transaction level of the backup file, not the
transaction level of the database. Therefore, the transaction
level of the restored area differs from that of the database.
Oracle Rdb marks the area as inconsistent by setting a flag in
the storage area file.
You can perform a recovery by area to upgrade the transaction
level of the restored area to that of the database. (After-
image journaling must be enabled in order to restore by area.)
If you are certain that no updates have been made to the database
since the backup operation, you can use the RMU Set Corrupt_Pages
command to change the setting of the flag from inconsistent to
consistent.
In addition, storage areas can be corrupted by attempting an
SQL rollback while one or more storage areas are open in batch-
update transaction mode.
The RMU Set Corrupt_Pages command allows you to access a database
that is in an uncertain condition. Accordingly, the following
message and question are displayed when you enter it to correct a
corrupt or inconsistent storage area or storage area page. (This
message is not displayed if you enter it to correct a corrupt or
inconsistent snapshot file.)
***** WARNING! *****
Marking a storage area or page consistent does not
remove the inconsistencies. Remove any inconsistencies
or corruptions before you proceed with this action.
Do you wish to continue? [N]
32.5.1 – Description
The RMU Set Corrupt_Pages command allows you to override the
required RMU Recover command after a by-area restore operation.
Although Oracle RMU cannot determine when the recover operation
is superfluous, you might know that no updates have occurred
since the backup. If you are certain of this, you can bypass
the recover operation by using the RMU Set Corrupt_Pages command
to set corrupt pages to consistent.
Similarly, sometimes you might know of a problem that Oracle
RMU does not recognize. For example, you might find that a page
contains an index node that causes a bugcheck dump each time it
is accessed. You can use the RMU Set Corrupt_Pages command to
mark this page as corrupt and then follow your usual procedure
for recovering from database corruption.
Note that the RMU Set Corrupt_Pages command with the Consistent
qualifier does not make truly corrupt storage area pages usable.
Corrupt storage area pages detected during normal operation are
logged in the CPT, and likely have an invalid checksum value.
The RMU Set Corrupt_Pages command with the Consistent qualifier
removes the specified pages from the CPT, but the next time a
user tries to touch that storage area page, it is logged in the
CPT again because it is still physically corrupt. To correct a
storage area page that is truly corrupt, you must restore it from
a backup file.
The RMU Set Corrupt_Pages command with the Consistent qualifier
does make truly corrupt or inconsistent pages in a snapshot file
usable. When you use this command and specify a snapshot file
with the areas qualifier, Oracle RMU initializes the specified
snapshot file.
32.5.2 – Format
RMU/Set Corrupt_Pages root-file-spec

Command Qualifiers                Defaults

/Area=identity                    None
/Consistent                       None
/Corrupt                          None
/Disk=device                      None
/Page=(n,...)                     None
32.5.3 – Parameters
32.5.3.1 – root-file-spec
The file specification of the database root file for which you
want to set pages or areas to corrupt or consistent.
32.5.4 – Command Qualifiers
32.5.4.1 – Area
Area=identity
Specifies a particular storage area file or snapshot file. The
identity for a storage area can be either the area name (for
example, EMPIDS_OVER), or a storage area ID number (for example,
5). The identity for a snapshot file must be the snapshot file
ID number. Use the RMU Dump command with the Header qualifier to
display the ID numbers associated with a storage area file or a
snapshot file.
When you use the Area qualifier with the Page=(n,...) qualifier,
the command specifies the named pages in the named storage area
or snapshot file. When you specify the Area qualifier without the
Page qualifier, the command specifies all pages of the specified
storage area or snapshot file.
The Area qualifier cannot be used with the Disk qualifier.
32.5.4.2 – Consistent
Consistent
Specifies that the pages, areas, or snapshot files specified with
the Page, Area, or Disk qualifier are to be considered consistent
with the remainder of the database.
If you make a storage area or a page in a storage area
consistent while it is marked in the database as inconsistent
(but not corrupt), you receive a warning and must confirm your
request before the operation completes.
You cannot use the Consistent qualifier with the Corrupt
qualifier.
32.5.4.3 – Corrupt
Corrupt
Specifies that the pages, areas, or snapshot files specified with
the Page, Area, or Disk qualifier are to be considered corrupt.
You cannot use the Corrupt qualifier with the Consistent
qualifier.
32.5.4.4 – Disk
Disk=device
Specifies that all the pages, all the storage areas, and all
the snapshot files on the named device be set as you indicate
with the Corrupt or the Consistent qualifier.
You cannot use the Disk qualifier with the Page or the Area
qualifier.
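For example, the following command sets all the storage areas
and snapshot files on one device as corrupt; the device name
DISK$DATA1 is hypothetical:
$ RMU/SET CORRUPT_PAGES/DISK=DISK$DATA1/CORRUPT MF_PERSONNEL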
32.5.4.5 – Page
Page=(n,...)
Specifies, by page number, the pages to be set as corrupt or
consistent.
You must specify the Area qualifier when you use the Page
qualifier.
You cannot use the Page qualifier with the Disk qualifier.
32.5.5 – Usage Notes
o You must have the RMU$ALTER, RMU$BACKUP, or RMU$RESTORE
privilege in the root file access control list (ACL) for a
database or the OpenVMS SYSPRV or BYPASS privilege to use the
RMU Set Corrupt_Pages command for the database.
o You can issue the RMU Set Corrupt_Pages command while users
are attached to the database.
o You must specify either the Corrupt or the Consistent
qualifier (but not both) when you use the RMU Set Corrupt_
Pages command.
o When you use the RMU Set Corrupt_Pages command to mark a page
as corrupt or consistent, the database is marked as having
been altered.
32.5.6 – Examples
Example 1
The following command sets storage area EMPIDS_MID in the mf_
personnel database as corrupt:
$ RMU/SET CORRUPT_PAGES/AREA=EMPIDS_MID/CORRUPT MF_PERSONNEL
%RMU-I-AREAMARKED, Area 4 was marked corrupt.
Example 2
The following command marks EMPIDS_MID as consistent. This is the
area that was marked as corrupt in Example 1. However, in this
case, instead of using the storage area name in the Oracle RMU
command, the storage area identifier is used.
$ RMU/SET CORRUPT_PAGES/AREA=4/CONSISTENT MF_PERSONNEL
***** WARNING! *****
Marking a storage area or page consistent does not
remove the inconsistencies. Remove any inconsistencies
or corruptions before you proceed with this action.
Do you wish to continue? [N] Y
%RMU-I-AREAMARKED, Area 4 was marked consistent.
Example 3
The following command marks page 1 in area 3 in the mf_personnel
database as corrupt. Using the RMU Show Corrupt_Pages command
confirms that the page has been marked as expected.
$ RMU/SET CORRUPT_PAGES/AREA=3/PAGE=1/CORRUPT MF_PERSONNEL
%RMU-I-PAGEMARKED, Page 1 in area 3 was marked corrupt.
$ RMU/SHOW CORRUPT_PAGES MF_PERSONNEL.RDB
*--------------------------------------------------------------------
* Oracle Rdb V7.0-00 3-JUL-1996 17:01:20.62
*
* Dump of Corrupt Page Table
* Database: USER1:[DB]MF_PERSONNEL.RDB;1
*
*--------------------------------------------------------------------
Entries for storage area EMPIDS_LOW
-----------------------------------
Page 1
- AIJ recovery sequence number is -1
- Live area ID number is 3
- Consistency transaction sequence number is 0:0
- State of page is: corrupt
*--------------------------------------------------------------------
* Oracle Rdb V7.0-00 3-JUL-1996 17:01:20.82
*
* Dump of Storage Area State Information
* Database: USER1:[DB]MF_PERSONNEL.RDB;1
*
*--------------------------------------------------------------------
All storage areas are consistent.
Example 4
The following example sets page 3 of the snapshot file for
EMPIDS_OVER to consistent. Because Oracle RMU initializes
snapshot files specified with the Set Corrupt_Pages command,
the snapshot file is removed from the corrupt page table and is
now usable.
$ RMU/SET CORRUPT_PAGES MF_PERSONNEL.RDB/AREA=14/PAGE=3/CONSISTENT
%RMU-I-PAGEMARKED, Page 3 in area 14 was marked consistent.
32.6 – Database
Allows you to alter the database-allowed transaction modes
without marking the database as modified.
32.6.1 – Description
The RMU /SET DATABASE /TRANSACTION_MODE=(...) command allows you
to alter the database-allowed transaction modes without marking
the database as modified. This command is intended for setting
the transaction modes allowed on a standby database.
This command requires exclusive database access (the database
cannot be open or be accessed by other users).
Because only read-only transactions are allowed on a standby
database, you may wish to use the TRANSACTION_MODE=READ_ONLY
qualifier setting on a standby database. This setting prevents
modifications to the standby database at all times, even when
replication operations are not active.
The RMU /SET DATABASE command requires a database specification.
Valid keywords for the RMU /SET DATABASE /TRANSACTION_MODE=(...)
qualifier are:
o ALL - Enables all transaction modes
o CURRENT - Enables all transaction modes that are set in the
database
o NONE - Disables all transaction modes
o [NO]BATCH_UPDATE
o [NO]READ_ONLY
o [NO]EXCLUSIVE
o [NO]EXCLUSIVE_READ
o [NO]EXCLUSIVE_WRITE
o [NO]PROTECTED
o [NO]PROTECTED_READ
o [NO]PROTECTED_WRITE
o [NO]READ_WRITE
o [NO]SHARED
o [NO]SHARED_READ
o [NO]SHARED_WRITE
If you specify more than one transaction mode in the mode-list,
enclose the list in parentheses and separate the transaction
modes from one another with commas. Note the following:
o A negated transaction mode indicates that the mode is not an
allowable access mode. For example, specifying the
Noexclusive_Write access mode indicates that exclusive write
is not an allowable access mode for the database.
o If you specify the Shared, Exclusive, or Protected transaction
mode, Oracle RMU assumes you are referring to both reading and
writing in that transaction mode.
o No mode is enabled unless you add that mode to the list or you
use the All option to enable all transaction modes.
o You can list one transaction mode that enables a particular
mode followed by another that disables it. For example,
/TRANSACTION_MODE=(NOSHARED_WRITE, SHARED) appears ambiguous
because the first value disables Shared_Write access and
the second value enables it. Oracle RMU resolves the
ambiguity by first enabling all modes that the mode-list
enables and then disabling all modes that the mode-list
disables; the order of items in the list is irrelevant. In
this example, Shared_Read is enabled and Shared_Write is
disabled.
32.6.2 – Format
RMU/Set Database /Transaction_Mode=(mode-list) root-file-spec
32.6.3 – Parameters
32.6.3.1 – root-file-spec
Specifies the database root file for which you want to specify
the database transaction mode.
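For example, the following command restricts a standby database
so that only read-only transactions are allowed, without marking
the database as modified (the database name shown here is
illustrative):
$ RMU/SET DATABASE/TRANSACTION_MODE=READ_ONLY STANDBY_DB.RDB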
32.7 – Galaxy
Allows you to enable or disable the database utilization of an
OpenVMS Galaxy configuration without requiring that the database
be open.
32.7.1 – Description
Use this command to enable or disable Galaxy features on an
Oracle Rdb database. Databases opened on multiple copies of the
OpenVMS operating system within a Galaxy system can share, in
memory, database structures including global buffers, row caches,
and root file objects.
32.7.2 – Format
RMU/Set Galaxy root-file-spec

Command Qualifiers                      Defaults

/Disable                                See description
/Enable                                 See description
/[No]Log                                Current DCL verify value
32.7.3 – Parameters
32.7.3.1 – root-file-spec
The root file specification of the database. The default file
extension is .rdb.
32.7.4 – Command Qualifiers
32.7.4.1 – Disable
Specifies that Galaxy features are to be disabled for the
database.
32.7.4.2 – Enable
Specifies that Galaxy features are to be enabled for the
database.
32.7.4.3 – Log
Log
Nolog
Displays a log message at the completion of the RMU Set Galaxy
operation.
32.7.5 – Usage Notes
o The Enable and Disable qualifiers are mutually exclusive.
o The RMU Set Galaxy command requires exclusive database access;
that is, the database cannot be open or be accessed by other
users.
32.7.6 – Example
The following example enables the Galaxy features for the
specified database.
$ RMU /SET GALAXY /ENABLE root-file-spec
32.8 – Global Buffers
Allows you to control the database global buffers feature without
requiring that the database be open.
32.8.1 – Description
If you move a database from one system to another, or when memory
usage or system parameters change, you may have to modify the
global buffer parameters for a database when it is not possible
to open the database. This situation could occur, for example, if
you have insufficient available physical or virtual memory.
The RMU Set Global_Buffers command allows you to alter some
of the global buffer-related parameters without opening the
database. This allows you to reconfigure the database so that
it can be opened and accessed on the system.
32.8.2 – Format
RMU/Set Global_Buffers root-file-spec

Command Qualifiers                      Defaults

/Disabled                               None
/Enabled                                None
/Large_Memory={Enabled|Disabled}        None
/Log                                    None
/Number=n                               None
/User_Limit=n                           None
32.8.3 – Parameters
32.8.3.1 – root-file-spec
Specifies the database root file for which you want to modify the
global buffers feature.
32.8.4 – Command Qualifiers
32.8.4.1 – Disabled
Disables global buffers for the specified database.
32.8.4.2 – Enabled
Enables global buffers for the specified database.
32.8.4.3 – Large Memory
Large_Memory=Enabled
Large_Memory=Disabled
Large_Memory=Enabled enables global buffers in large memory
(VLM).
Large_Memory=Disabled disables global buffers in large memory
(VLM).
32.8.4.4 – Log
Displays a log message at the completion of the RMU Set Global_
Buffers command.
32.8.4.5 – Number
Number=n
Sets the number of global buffers.
32.8.4.6 – User Limit
User_Limit=n
Sets the global buffers user limit value.
32.8.5 – Usage Notes
o This command requires exclusive database access (the database
cannot be open or accessed by other users).
o The Enabled and Disabled qualifiers are mutually exclusive.
o The Large_Memory=Enabled and Large_Memory=Disabled qualifiers
are mutually exclusive.
o Changes made by the RMU Set Global_Buffers command are not
journaled. You should make a subsequent full database backup
to ensure recovery.
o When global buffers are set to reside in large memory
(Large_Memory=Enabled), the process that opens the database
must be granted the VMS$MEM_RESIDENT_USER rights identifier.
Oracle Corporation recommends that you use the RMU Open
command when you utilize this feature.
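For example, the following command sketch enables global buffers
and adjusts the buffer count and user limit while the database is
closed; the buffer count and user limit values shown here are
illustrative and should be chosen to fit your system's memory
configuration:
$ RMU/SET GLOBAL_BUFFERS/ENABLED/NUMBER=200/USER_LIMIT=20/LOG -
_$ MF_PERSONNEL.RDB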
32.9 – Logminer
Allows you to change the LogMiner state of a database.
32.9.1 – Description
Use this command to enable or disable LogMiner operations on an
Oracle Rdb database. When LogMiner is enabled, the Oracle Rdb
database software writes additional information to the after-
image journal file when records are added, modified, and deleted
from the database. This information is used during the unload
operation.
32.9.2 – Format
RMU/Set Logminer root-file-spec

Command Qualifiers                      Defaults

/[No]Continuous                         /Nocontinuous
/Disable                                See description
/Enable                                 See description
/[No]Log                                Current DCL verify value
32.9.3 – Parameters
32.9.3.1 – root-file-spec
The root file specification of the database. The default file
extension is .rdb.
32.9.4 – Command Qualifiers
32.9.4.1 – Continuous
Continuous
Nocontinuous
Enables the database for the Continuous LogMiner feature
when used in conjunction with the Enable qualifier. Use the
NoContinuous qualifier with the Enable qualifier to disable use
of the Continuous LogMiner feature.
The RMU Set Logminer /Disable command explicitly disables
the Continuous LogMiner feature as well as the base LogMiner
functionality. To enable the Continuous LogMiner feature again,
the entire RMU Set Logminer /Enable /Continuous command must be
used.
32.9.4.2 – Disable
Specifies that LogMiner operations are to be disabled for the
database. When LogMiner is disabled, the Oracle Rdb software does
not journal information required for LogMiner operations. When
LogMiner is disabled for a database, the RMU Unload After_Journal
command is not functional on that database.
32.9.4.3 – Enable
Specifies that LogMiner operations are to be enabled for the
database. When LogMiner is enabled, the Oracle Rdb database
software logs additional information to the after-image journal
file. This information allows LogMiner to extract records. The
database must already have after-image journaling enabled.
32.9.4.4 – Log
Log
Nolog
Specifies that the setting of the LogMiner state for the database
be reported to SYS$OUTPUT. The default is the setting of the DCL
VERIFY flag, which is controlled by the DCL SET VERIFY command.
32.9.5 – Usage Notes
o To use the RMU Set Logminer command, you must have the
RMU$BACKUP, RMU$RESTORE, or RMU$ALTER privilege in the root
file access control list (ACL) for the database or the OpenVMS
SYSPRV or BYPASS privilege.
o The RMU Set Logminer command requires offline access to the
database. The database must be closed and no other users may
be accessing the database.
o Execute a full database backup operation after issuing an
RMU Set Logminer command that displays the RMU-W-DOFULLBCK
warning message. Immediately after enabling LogMiner, you
should perform a database after-image journal backup using the
RMU Backup After_Journal command.
32.9.6 – Examples
Example 1
The following example enables a database for LogMiner for Rdb
operation.
$ RMU /SET LOGMINER /ENABLE OLTPDB.RDB
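Example 2
The following example enables the database for both LogMiner and
the Continuous LogMiner feature. As described under the
Continuous qualifier, the Continuous qualifier takes effect only
in conjunction with the Enable qualifier.
$ RMU /SET LOGMINER /ENABLE /CONTINUOUS OLTPDB.RDB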
32.10 – Privilege
Allows you to modify the root file access control list (ACL) for
a database.
A database's root file ACL determines the Oracle RMU commands
that users can execute for the associated database.
32.10.1 – Description
The RMU Set Privilege command allows you to manipulate an entire
root file ACL, or to create, modify, or delete access control
entries (ACEs) in a root file ACL. See the Oracle Rdb Guide to
Database Design and Definition for introductory information on
ACEs and ACLs.
Use the RMU Set Privilege command to add ACEs to a root file ACL
by specifying the ACEs with the Acl qualifier.
Privileges Required for Oracle RMU Commands shows the privileges
a user must have to access each Oracle RMU command.
If the database root file you specify with RMU Set Privilege
command does not have an ACL, Oracle RMU creates one.
The RMU Set Privilege command provides the following qualifiers
to manipulate ACEs and ACLs in various ways:
After
Delete
Like
New
Replace
By default, any ACEs you add to a root file ACL are placed at
the top of the ACL. Whenever Oracle RMU receives a request
for Oracle RMU access for a database that has a root file ACL,
it searches each entry in the ACL from the first to the last
for the first match it can find, and then stops searching. If
another match occurs further down in the root file ACL, it has no
effect. Because the position of an ACE in a root file ACL is so
important, you can use the After qualifier to correctly position
an ACE. When you use the After qualifier, any additional ACEs are
added after the specified ACE.
You can delete ACEs from an ACL by including the Delete qualifier
and specifying the ACEs with the Acl qualifier. To delete all the
ACEs, include the Delete qualifier and specify the Acl qualifier
without specifying any ACEs.
You can copy an ACL from one root file to another by using the
Like qualifier. The ACL of the root file specified with the Like
qualifier replaces the ACL of the root file specified with the
root-file-spec parameter.
Use the New qualifier to delete all ACEs before adding any ACEs
specified by the Acl, Like, or Replace qualifiers.
You can replace existing ACEs in a root file ACL by using the
Replace qualifier. Any ACEs specified with the Acl qualifier
are deleted and replaced by those specified with the Replace
qualifier.
The existing ACE can be abbreviated when you use the Delete,
Replace, or After qualifiers.
Use the RMU Set Privilege command with the Edit qualifier to
invoke the ACL editor. You can specify the following qualifiers
only when you specify the Edit qualifier also:
Journal
Keep
Mode
Recover
For more information on the ACL editor, see the OpenVMS
documentation set.
32.10.2 – Format
RMU/Set Privilege root-file-spec

Command Qualifiers                      Defaults

/Acl[=(ace[,...])]                      See description
/Acl_File=filename                      See description
/After=ace                              See description
/Delete[=All]                           See description
/Edit                                   No editor invoked
/[No]Journal[=file-spec]                /Journal
/Keep[=(Recovery,Journal)]              See description
/Like=source-root-file-spec             None
/[No]Log                                /Nolog
/Mode=[No]Prompt                        /Mode=Prompt
/New                                    None
/[No]Recover[=file-spec]                /Norecover
/Replace=(ace[,...])                    None
32.10.3 – Parameters
32.10.3.1 – root-file-spec
The root file for the database whose root file ACL you are
modifying.
32.10.4 – Command Qualifiers
32.10.4.1 – Acl
Acl[=(ace[,...])]
Specifies one or more ACEs to be modified. When no ACE is
specified, the entire ACL is affected. Separate multiple ACEs
with commas. When you include the Acl qualifier, the specified
ACEs are inserted at the top of the ACL unless you also specify
the After qualifier. You cannot specify the Acl qualifier and the
Acl_File qualifier on the same RMU command line.
The format of an ACE is as follows:
(Identifier=user-id, Access=access_mask)
The user-id must be one of the following types of identifier:
o A user identification code (UIC) in [group-name,member-name]
alphanumeric format
o A user identification code (UIC) in [group-number,member-
number] numeric format
o A general identifier, such as SECRETARIES
o A system-defined identifier, such as DIALUP
o Wildcard characters in [*,*] format
Names are not case sensitive. In addition, the Identifier and
Access keywords can be abbreviated to one character. For example,
the following ACE is valid:
(I=isteward, A=RMU$ALL)
The access_mask can be any of the following:
o One or more of the Oracle RMU privileges listed in the Oracle
Rdb7 Oracle RMU Reference Manual
If more than one privilege is specified, a plus sign (+) must
be placed between the privileges.
o The keyword RMU$ALL
This keyword indicates that you want the user to have all of
the RMU privileges. (It has no effect on system file
privileges.)
o The keyword None
This keyword indicates that you do not want the user
to have any RMU or OpenVMS privileges. If you specify
Acl=(id=username, access=READ+NONE), the specified user will
have no RMU privileges and no READ privileges for the database
files.
32.10.4.2 – Acl File
Acl_File=filename
Specifies a file containing a list of ACEs, with one ACE
specified per line. You can use continuation characters to
continue an ACE on the next line, and you can include commented
lines within the file. Within this file, use the dash (-) as a
continuation character and the exclamation point (!) to indicate
a comment.
You cannot specify the Acl_File qualifier and the Acl qualifier
on the same RMU command line.
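For example, a file passed with the Acl_File qualifier might
contain entries such as the following, where the identifiers and
the file name are illustrative. The first line is a comment, and
the dash continues the first ACE onto a second line:
! Grant backup-related access to the operations account
(IDENTIFIER=[RDB,OPERATOR],ACCESS=RMU$BACKUP+RMU$CONVERT+ -
RMU$DUMP+RMU$RESTORE)
(IDENTIFIER=[*,*],ACCESS=NONE)
The file is then named on the command line:
$ RMU/SET PRIVILEGE/ACL_FILE=ACES.TXT MF_PERSONNEL.RDB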
32.10.4.3 – After
After=ace
Indicates that all ACEs specified with the Acl qualifier are to
be added after the ACE specified with the After qualifier. By
default, any ACEs added to the ACL are always placed at the top
of the list.
You cannot use this qualifier with the Edit qualifier.
32.10.4.4 – Delete
Delete[=All]
Indicates that the ACEs specified with the Acl qualifier are
to be deleted. If no ACEs are specified with the Acl qualifier,
the entire ACL is deleted. If an ACE you specify with the Acl
qualifier does not exist in the ACL, you are notified that the
ACE does not exist, and the delete operation continues.
You cannot use this qualifier with the Edit qualifier.
32.10.4.5 – Edit
Edit
Invokes the ACL editor and allows you to use the Journal,
Keep, Mode, or Recover qualifiers. Oracle RMU ignores any other
qualifiers you specify with the Edit qualifier.
The RMU Set Privilege command with the Edit qualifier functions
only when the database is offline. If you attempt to use it
while the database is online, an error message is generated.
This restriction is necessary because the ACL editor requests
exclusive write access to the database.
To use the Edit qualifier, the SYS$SHARE:ACLEDTSHR.EXE image
must be installed at system startup time or be installed by
RMONSTART.COM. Contact your system manager if this image is not
installed as needed.
For more information on the ACL editor, see the OpenVMS
documentation set.
32.10.4.6 – Journal
Journal[=file-spec]
Nojournal
Controls whether a journal file is created from the editing
session. By default, a journal file is created if the editing
session ends abnormally.
If you omit the file specification, the journal file has the
same name as the root file and a file type of .tjl. You can use
the Journal qualifier to specify a journal file name that is
different from the default. No wildcard characters are allowed in
the Journal qualifier file-spec parameter.
You must specify the Edit qualifier to use this qualifier.
32.10.4.7 – Keep
Keep[=(Recovery,Journal)]
Determines whether the journal file, the recovery file, or both,
are deleted when the editing session ends. The options are:
o Recovery - Saves the journal file used for restoring the ACL.
o Journal - Saves the journal file for the current editing
session.
You can shorten the Journal and Recovery options to J and R,
respectively. If you specify only one option, you can omit the
parentheses.
You must specify the Edit qualifier to use this qualifier. If you
specify the Edit qualifier but do not specify the Keep qualifier,
both the journal file for the current editing session and the
journal file used for restoring the ACL are deleted when the
editing session ends.
32.10.4.8 – Like
Like=source-root-file-spec
Indicates that the ACL of the root file specified with the Like
qualifier is to replace the ACL of the root file specified with
the root-file-spec parameter of the RMU Set Privilege command.
Any existing ACEs are deleted before the root file ACL specified
by the Like qualifier is copied.
You cannot use this qualifier with the Edit qualifier.
32.10.4.9 – Log
Log
Nolog
Directs the RMU Set Privilege command to return both the name of
the root file that has been modified by the command and the ACL
associated with the database. The default of Nolog suppresses
this output.
You cannot use this qualifier with the Edit qualifier.
32.10.4.10 – Mode
Mode=[No]Prompt
Determines whether the ACL editor prompts for field values. By
default, the ACL editor selects prompt mode.
You must specify the Edit qualifier to use this qualifier.
32.10.4.11 – New
New
Indicates that any existing ACE in the ACL of the root file
specified with RMU Set Privilege is to be deleted. To use the
New qualifier, you must specify a new ACL or ACE with the Acl,
Like, or Replace qualifiers.
You cannot use this qualifier with the Edit qualifier.
32.10.4.12 – Recover
Recover[=file-spec]
Norecover
Specifies the name of the journal file to be used in a recovery
operation. If the file specification is omitted with the Recover
qualifier, the journal is assumed to have the same name as the
root file and a file type of .tjl. No wildcard characters are
allowed with the Recover qualifier file-spec parameter.
The default is the Norecover qualifier, where no recovery is
attempted when you invoke the ACL editor to edit a root file ACL.
You must specify Edit to use this qualifier.
32.10.4.13 – Replace
Replace=(ace[,...])
Deletes the ACEs specified with the Acl qualifier and replaces
them with those specified with the Replace qualifier. Any ACEs
specified with the Acl qualifier must exist and must be specified
in the order in which they appear in the ACL.
This qualifier cannot be used with the Edit qualifier.
32.10.5 – Usage Notes
o You must have the RMU$SECURITY privilege in the root file ACL
for a database or the OpenVMS SECURITY or BYPASS privilege
to use the RMU Set Privilege command for the database. The
RMU$SECURITY access is VMS BIT_15 access in the ACE. You can
grant yourself BIT_15 access by using the DCL SET ACL command
if you have (READ+WRITE+CONTROL) access.
o By default, a root file ACL is created for every Oracle Rdb
database. In some cases, the root file ACL may not allow
the appropriate Oracle RMU access for the database to all
Oracle RMU users. In these situations, you must use the RMU
Set Privilege command to modify the root file ACL to give the
appropriate Oracle RMU access to Oracle RMU users. Privileges
Required for Oracle RMU Commands shows the privileges required
to access each Oracle RMU command.
o The root file ACL created by default on each Oracle Rdb
database controls only a user's Oracle RMU access to the
database (by specifying privileges that will allow a user or
group of users access to specific Oracle RMU commands). Root
file ACLs do not control a user's access to the database with
SQL statements.
A user's access to a database with SQL statements is governed
by the privileges granted to the user in the database ACL
(the ACL that is displayed using the SQL SHOW PROTECTION ON
DATABASE command).
o If you find that the root file ACL has changed, or is not
set as expected, it may be because a layered product has
manipulated the OpenVMS directory or file ACLs. This can
result in the unintentional alteration of an Oracle RMU access
right.
For example, Oracle CDD/Repository may use the following ACE:
(IDENTIFIER=[*,*],OPTIONS=DEFAULT+PROPAGATE,ACCESS=NONE)
If this ACE is propagated to an Oracle Rdb database, such
as CDD$DATABASE or CDD$TEMPLATE, OpenVMS privileges may be
required to manage that database. Or, you can use the RMU Set
Privilege command to change the ACL on the affected database.
o If you need to move a database from one system to another, you
should be aware that the identifiers used in the database's
root file ACL on the source system are not likely to be
valid identifiers on the destination system. Thus, if the
database root file ACL from the source system is moved to the
destination system without modification, only those users with
the same identifiers on both systems have the same Oracle RMU
access to the database on the destination system as they had
to the database on the source system.
For example, suppose that the mf_personnel database with the
following root file ACL is moved from its current system to
another node. If the database root file ACL is moved without
modification to the destination node, the users USER, USER2,
USER3, USER4, and USER5 will not have any Oracle RMU access to
the database on the destination node unless they have the same
identities on the destination node.
$ RMU/SHOW PRIVILEGE MF_PERSONNEL.RDB
Object type: file, Object name:SQL_USER:[USER]MF_PERSONNEL.RDB;1,
on 31-MAR-1992 15:48:36.24
(IDENTIFIER=[SQL,USER],ACCESS=READ+WRITE+CONTROL+RMU$ALTER+
RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+
RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SECURITY+RMU$SHOW+RMU$UNLOAD+
RMU$VERIFY)
(IDENTIFIER=[SQL,USER2],ACCESS=RMU$ANALYZE+RMU$OPEN+RMU$VERIFY)
(IDENTIFIER=[SQL,USER3],ACCESS=RMU$SECURITY)
(IDENTIFIER=[RDB,USER4],ACCESS=RMU$BACKUP+RMU$CONVERT+RMU$DUMP+
RMU$RESTORE)
(IDENTIFIER=[RDB,USER5],ACCESS=RMU$LOAD+RMU$SHOW)
(IDENTIFIER=[*,*],ACCESS=NONE)
o The following list describes some ways to move a database from
one node to another and explains what happens to the original
root file ACL in each scenario:
- RMU Restore command
First, use the RMU Backup command to back up the database
on the source node and to create an .rbf file. Then, copy
the .rbf file from the source node to the destination
node. When you use the RMU Restore command to re-create
the database from the source node on the destination node,
the database on the destination node will have the same
root file ACL as the database on the source node. If a
user with the RMU$SECURITY privilege in the root file
ACL on the source node has the same identifier on the
destination node, that user can modify the root file ACL
on the destination node to grant users the privileges they
need for Oracle RMU access to the database. Otherwise, a
user with one of the OpenVMS override privileges (SECURITY
or BYPASS) needs to modify the root file ACL.
- RMU Restore command with the Noacl qualifier
First, use the RMU Backup command to back up the database
on the source node and to create an .rbf file. Then, copy
the .rbf file from the source node to the destination
node. When you use the RMU Restore command with the Noacl
qualifier to re-create the database from the source node on
the destination node, the database on the destination node
is created with an empty root file ACL. A user with one of
the OpenVMS override privileges (SECURITY or BYPASS) needs
to modify the root file ACL to grant users the privileges
they need for Oracle RMU access to the database.
- SQL IMPORT statement
First, use the SQL EXPORT statement on the source node
to create an .rbr file. Then, copy the .rbr file from the
source node to the destination node. When you use the SQL
IMPORT statement on the destination node, the imported
database is created with the same root file ACL as existed
on the database on the source node. If a user with the
RMU$SECURITY privilege in the root file ACL on the source
node has the same identifier on the destination node, that
user can modify the root file ACL on the destination node
to grant users the privileges they need for Oracle RMU
access to the database. Otherwise, a user with one of the
OpenVMS override privileges (SECURITY or BYPASS) needs to
modify the root file ACL to grant users the privileges they
need for Oracle RMU access to the database.
- SQL IMPORT NO ACL statement
First, use the SQL EXPORT statement on the source node to
create an .rbr file. Then, copy the .rbr file from the
source node to the destination node. When you use the
SQL IMPORT NO ACL statement on the destination node, the
imported database is created with a root file ACL that
contains one ACE. The single ACE will grant the OpenVMS
READ, WRITE, and CONTROL privileges plus all the Oracle RMU
privileges to the user who performed the IMPORT operation.
The user who performed the IMPORT operation can modify the
root file ACL to grant users the privileges they need for
Oracle RMU access to the database.
32.10.6 – Examples
Example 1
The following example assumes that the user with a user
identification code (UIC) of [SQL,USER] has created the mf_
test_db database and is therefore the owner of the database.
After creating the mf_test_db database, the owner displays the
root file ACL for the database. Then the owner grants Oracle RMU
privileges to database users. The Oracle RMU privileges granted
to each type of user depend on the type of Oracle RMU access the
user needs to the database.
$! Note that by default the owner (the user with a UIC of [SQL,USER])
$! is granted all the Oracle RMU privileges in the root file
$! ACL and no other users are granted any Oracle RMU privileges.
$ RMU/SHOW PRIVILEGE MF_TEST_DB.RDB
Object type: file, Object name: SQL_USER:[USER]MF_TEST_DB.RDB;1,
on 30-MAR-1996 15:51:55.79
(IDENTIFIER=[SQL,USER],ACCESS=READ+WRITE+CONTROL+RMU$ALTER+
RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+
RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SECURITY+RMU$SHOW+RMU$UNLOAD+
RMU$VERIFY)
$!
$! The owner uses the RMU Set Privilege command and the After
$! qualifier to grant the RMU$ANALYZE, RMU$OPEN, and
$! RMU$VERIFY privileges to a user with a UIC of [SQL,USER2].
$! This user will serve as the database administrator for the
$! mf_test_db database.
$ RMU/SET PRIVILEGE/ACL=(IDENTIFIER=[SQL,USER2],ACCESS=RMU$ANALYZE -
_$ +RMU$OPEN+RMU$VERIFY) -
_$ /AFTER=(IDENTIFIER=[SQL,USER])/LOG MF_TEST_DB.RDB
%RMU-I-MODIFIED, SQL_USER:[USER]MF_TEST_DB.RDB;1 modified
$!
$! Next, the owner grants the RMU$SECURITY privilege to a user with a
$! UIC of [SQL,USER3]. This gives the user USER3 the ability
$! to grant other users the appropriate privileges they need for
$! accessing the database with Oracle RMU commands. Because both
$! the database creator and user USER3 have the RMU$SECURITY
$! privilege, both of them can modify the root file ACL for the
$! database.
$ RMU/SET PRIVILEGE/ACL=(IDENTIFIER=[SQL,USER3],ACCESS=RMU$SECURITY) -
_$ /AFTER=(IDENTIFIER=[SQL,USER2])/LOG MF_TEST_DB.RDB
%RMU-I-MODIFIED, SQL_USER:[USER]MF_TEST_DB.RDB;1 modified
$!
$! The user with a UIC of [RDB,USER4], who will serve as the database
$! operator, is granted the RMU$BACKUP, RMU$CONVERT, RMU$DUMP, and
$! RMU$RESTORE privileges:
$ RMU/SET PRIVILEGE/ACL=(IDENTIFIER=[RDB,USER4],ACCESS=RMU$BACKUP -
_$ +RMU$CONVERT+RMU$DUMP+RMU$RESTORE) -
_$ /AFTER=(IDENTIFIER=[SQL,USER3])/LOG MF_TEST_DB.RDB
%RMU-I-MODIFIED, SQL_USER:[USER]MF_TEST_DB.RDB;1 modified
$!
$! The RMU$LOAD and RMU$SHOW privileges are granted to the user
$! with a UIC of [RDB,USER5]. This user will be writing programs
$! that load data into the database.
$ RMU/SET PRIVILEGE/ACL=(IDENTIFIER=[RDB,USER5],ACCESS=RMU$LOAD -
_$ +RMU$SHOW) /AFTER=(IDENTIFIER=[RDB,USER4]) MF_TEST_DB.RDB
%RMU-I-MODIFIED, SQL_USER:[USER]MF_TEST_DB.RDB;1 modified
$!
$! No privileges are granted to all other users.
$ RMU/SET PRIVILEGE/ACL=(IDENTIFIER=[*,*],ACCESS=NONE) -
_$ /AFTER=(IDENTIFIER=[RDB,USER5])/LOG MF_TEST_DB.RDB
%RMU-I-MODIFIED, SQL_USER:[USER]MF_TEST_DB.RDB;1 modified
$!
$! The RMU/SHOW PRIVILEGE command displays the root file ACL for the
$! mf_test_db database.
$ RMU/SHOW PRIVILEGE MF_TEST_DB.RDB
Object type: file, Object name: SQL_USER:[USER]MF_TEST_DB.RDB;1,
on 30-MAR-1996 15:52:17.03
(IDENTIFIER=[SQL,USER],ACCESS=READ+WRITE+CONTROL+RMU$ALTER+
RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+
RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SECURITY+RMU$SHOW+RMU$UNLOAD+
RMU$VERIFY)
(IDENTIFIER=[SQL,USER2],ACCESS=RMU$ANALYZE+RMU$OPEN+RMU$VERIFY)
(IDENTIFIER=[SQL,USER3],ACCESS=RMU$SECURITY)
(IDENTIFIER=[RDB,USER4],ACCESS=RMU$BACKUP+RMU$CONVERT+RMU$DUMP+
RMU$RESTORE)
(IDENTIFIER=[RDB,USER5],ACCESS=RMU$LOAD+RMU$SHOW)
(IDENTIFIER=[*,*],ACCESS=NONE)
Example 2
The following command adds an ACE for the user with a UIC of
[RDB,USER1] to the root file ACL for the personnel database. This
ACE grants [RDB,USER1] the RMU$BACKUP privilege for the personnel
database. The RMU$BACKUP privilege allows user [RDB,USER1]
to access the RMU Backup, RMU Backup After_Journal, and RMU
Checkpoint commands for the personnel database.
$ RMU/SET PRIVILEGE/ACL=(IDENTIFIER=[RDB,USER1],ACCESS=RMU$BACKUP) -
_$ PERSONNEL.RDB
Example 3
The Replace qualifier in the following example causes the ACE
in the root file ACL for the user with a UIC of [RDB,USER4]
to be replaced by the ACE specified for the user with a UIC of
[SQL,USER6]:
$ RMU/SET PRIVILEGE/ACL=(IDENTIFIER=[RDB,USER4]) -
_$ /REPLACE=(IDENTIFIER=[SQL,USER6],ACCESS=RMU$BACKUP+RMU$CONVERT -
_$ +RMU$DUMP+RMU$RESTORE)/LOG MF_TEST_DB.RDB
%RMU-I-MODIFIED, SQL_USER:[USER]MF_TEST_DB.RDB;1 modified
$!
$ RMU/SHOW PRIVILEGE MF_TEST_DB.RDB
Object type: file, Object name: SQL_USER:[USER]MF_TEST_DB.RDB;1,
on 30-MAR-1996 15:52:23.92
(IDENTIFIER=[SQL,USER],ACCESS=READ+WRITE+CONTROL+RMU$ALTER+
RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+
RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SECURITY+RMU$SHOW+RMU$UNLOAD+
RMU$VERIFY)
(IDENTIFIER=[SQL,USER2],ACCESS=RMU$ANALYZE+RMU$OPEN+RMU$VERIFY)
(IDENTIFIER=[SQL,USER3],ACCESS=RMU$SECURITY)
(IDENTIFIER=[SQL,USER6],ACCESS=RMU$BACKUP+RMU$CONVERT+RMU$DUMP+
RMU$RESTORE)
(IDENTIFIER=[RDB,USER5],ACCESS=RMU$LOAD+RMU$SHOW)
(IDENTIFIER=[*,*],ACCESS=NONE)
Example 4
The Delete qualifier in the following example causes the ACE for
the user with a UIC of [RDB,USER5] to be deleted from the root
file ACL for the mf_test_db database:
$ RMU/SET PRIVILEGE/ACL=(IDENTIFIER=[RDB,USER5]) -
_$ /DELETE/LOG MF_TEST_DB.RDB
%RMU-I-MODIFIED, SQL_USER:[USER]MF_TEST_DB.RDB;1 modified
$!
$ RMU/SHOW PRIVILEGE MF_TEST_DB.RDB
Object type: file, Object name: SQL_USER:[USER]MF_TEST_DB.RDB;1,
on 30-MAR-1996 15:52:29.07
(IDENTIFIER=[SQL,USER],ACCESS=READ+WRITE+CONTROL+RMU$ALTER+
RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+
RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SECURITY+RMU$SHOW+RMU$UNLOAD+
RMU$VERIFY)
(IDENTIFIER=[SQL,USER2],ACCESS=RMU$ANALYZE+RMU$OPEN+RMU$VERIFY)
(IDENTIFIER=[SQL,USER3],ACCESS=RMU$SECURITY)
(IDENTIFIER=[SQL,USER6],ACCESS=RMU$BACKUP+RMU$CONVERT+RMU$DUMP+
RMU$RESTORE)
(IDENTIFIER=[*,*],ACCESS=NONE)
Example 5
In the following example, the Like qualifier copies the root file
ACL from the mf_test_db database to the test_db database. As part
of this operation, the original root file ACL for the test_db
database is deleted.
$ RMU/SHOW PRIVILEGE TEST_DB.RDB
Object type: file, Object name: SQL_USER:[USER]TEST_DB.RDB;1, on
30-MAR-1996 15:52:31.48
(IDENTIFIER=[SQL,USER],ACCESS=READ+WRITE+CONTROL+RMU$ALTER+
RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+
RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SECURITY+RMU$SHOW+RMU$UNLOAD+
RMU$VERIFY)
$ !
$ RMU/SHOW PRIVILEGE MF_TEST_DB.RDB
Object type: file, Object name: SQL_USER:[USER]MF_TEST_DB.RDB;1,
on 30-MAR-1996 15:52:33.86
(IDENTIFIER=[SQL,USER],ACCESS=READ+WRITE+CONTROL+RMU$ALTER+
RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+
RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SECURITY+RMU$SHOW+RMU$UNLOAD+
RMU$VERIFY)
(IDENTIFIER=[SQL,USER2],ACCESS=RMU$ANALYZE+RMU$OPEN+RMU$VERIFY)
(IDENTIFIER=[SQL,USER3],ACCESS=RMU$SECURITY)
(IDENTIFIER=[SQL,USER6],ACCESS=RMU$BACKUP+RMU$CONVERT+RMU$DUMP+
RMU$RESTORE)
(IDENTIFIER=[*,*],ACCESS=NONE)
$!
$ RMU/SET PRIVILEGE/LIKE=MF_TEST_DB.RDB/LOG TEST_DB.RDB
%RMU-I-MODIFIED, SQL_USER:[USER]TEST_DB.RDB;1 modified
$!
$ RMU/SHOW PRIVILEGE TEST_DB.RDB
Object type: file, Object name: SQL_USER:[USER]TEST_DB.RDB;1, on
30-MAR-1996 15:52:41.36
(IDENTIFIER=[SQL,USER],ACCESS=READ+WRITE+CONTROL+RMU$ALTER+
RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+
RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SECURITY+RMU$SHOW+RMU$UNLOAD+
RMU$VERIFY)
(IDENTIFIER=[SQL,USER2],ACCESS=RMU$ANALYZE+RMU$OPEN+RMU$VERIFY)
(IDENTIFIER=[SQL,USER3],ACCESS=RMU$SECURITY)
(IDENTIFIER=[SQL,USER6],ACCESS=RMU$BACKUP+RMU$CONVERT+RMU$DUMP+
RMU$RESTORE)
(IDENTIFIER=[*,*],ACCESS=NONE)
Example 6
The New qualifier in the following example deletes all the
existing ACEs, and the Acl qualifier specifies a new ACE for the
root file ACL of the mf_test_db database. Note that after the
RMU Set Privilege command in this example is issued, only the
user with a UIC of [SQL,USER2], or a user with an OpenVMS
override privilege, can display the root file ACL for the
mf_test_db database.
$ RMU/SHOW PRIVILEGE MF_TEST_DB.RDB
Object type: file, Object name: SQL_USER:[USER]MF_TEST_DB.RDB;1,
on 30-MAR-1996 15:52:44.50
(IDENTIFIER=[SQL,USER],ACCESS=READ+WRITE+CONTROL+RMU$ALTER+
RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+
RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SECURITY+RMU$SHOW+RMU$UNLOAD+
RMU$VERIFY)
(IDENTIFIER=[SQL,USER2],ACCESS=RMU$ANALYZE+RMU$OPEN+RMU$VERIFY)
(IDENTIFIER=[SQL,USER3],ACCESS=RMU$SECURITY)
(IDENTIFIER=[SQL,USER6],ACCESS=RMU$BACKUP+RMU$CONVERT+RMU$DUMP+
RMU$RESTORE)
(IDENTIFIER=[*,*],ACCESS=NONE)
$!
$ RMU/SET PRIVILEGE/NEW -
_$ /ACL=(IDENTIFIER=[SQL,USER2],ACCESS=READ+WRITE+CONTROL+ -
_$ RMU$ALTER+RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+ -
_$ RMU$DUMP+RMU$LOAD+RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SHOW+ -
_$ RMU$UNLOAD+RMU$VERIFY)/LOG MF_TEST_DB.RDB
%RMU-I-MODIFIED, SQL_USER:[USER]MF_TEST_DB.RDB;1 modified
32.11 – Row Cache
Allows you to enable or disable the database Row Cache feature
and to modify certain parameters on a per-cache basis.
32.11.1 – Description
You can use the RMU Set Row_Cache command to allow the database
Row Cache feature to be enabled or disabled without requiring
that the database be opened.
You can also use the Alter parameter to make modifications to one
cache at a time.
32.11.2 – Format
RMU/Set Row_Cache root-file-spec

Command Qualifiers                        Defaults

/Alter=(Name=cache-name,option(,...))     See Description
/Backing_Store_Location=devdir            See Description
/NoBacking_Store_Location                 See Description
/Disable                                  None
/Enable                                   None
/[No]Log                                  Current DCL verify value
/Sweep_Interval=n                         See Description
/[No]Sweep_Interval                       See Description
32.11.3 – Parameters
32.11.3.1 – root-file-spec
Specifies the database root file for which you want to modify the
Row Cache feature.
32.11.4 – Command Qualifiers
32.11.4.1 – Alter
Alter=(Name=cachename,option(, ...))
Specifies the action to take on the named cache. You must
specify the cache name and at least one other option. The /Alter
qualifier may be specified multiple times on the command line.
Each /Alter qualifier specified operates on one unique cache
if no wildcard character (% or *) is specified. Otherwise, each
/Alter operates on all matching cache names.
o Name=cachename
Name of the cache to be modified. The cache must already be
defined in the database. You must specify the cache name
if you use the Alter qualifier. This parameter accepts the
wildcard characters asterisk (*) and percent sign (%).
o Backing_Store_Location=devdir
Specifies the name of the cache-specific default directory to
which row cache backing file information is written for the
specified cache. The database system generates a file name
(row-cache-name.rdc) automatically for each row cache backing
file it creates when the RCS process starts. Specify a device
name and directory name; do not include a file specification.
By default, the location is the directory of the database
root file unless a database-specific default directory or a
cache-specific default directory has been set.
o NoBacking_Store_Location
Specifies that there is no cache-specific default directory
to which row cache backing file information is written for the
specified cache.
o Drop
Specifies that the indicated row cache is to be dropped
(deleted) from the database.
o Shared_Memory=keyword
Specifies the shared memory type and parameters for the cache.
Valid keywords are:
- Type=option
Specify one of the following options:
* Process
Specifies a traditional shared memory global section,
which means that the database global section is located
in process (P0) address space and may be paged from the
process working set as needed.
* Resident
Specifies that the database global section is memory
resident in process (P0) address space using shared
page tables. This means that the global section is fully
resident, or pinned, in memory, and uses less physical
and virtual memory (for process page tables) than a
traditional shared memory global section.
- Rad_Hint=n
NoRad_Hint
Indicates a request that memory should be allocated from
the specified OpenVMS Resource Affinity Domain (RAD). This
keyword specifies a hint to Oracle Rdb and OpenVMS about
where memory should be physically allocated. It is possible
that if the requested memory is not available, it will be
allocated from other RADs in the system. For systems that
do not support RADs, a Rad_Hint value of zero is valid.
The Rad_Hint keyword is valid only when the shared memory
type is set to Resident. If you set the shared memory type
to System or Process, you disable any previously defined
RAD hint.
Use Norad_Hint to disable the RAD hint.
o Slot_Count=n
Specifies the number of slots in the cache.
o Slot_Size=n
Specifies the size (in bytes) of each slot in the cache.
o Snapshot_Slot_Count=n
Specifies the number of snapshot slots in the cache. A value
of zero disables the snapshot portion for the specified cache.
o Sweep_Interval=n
Specifies the periodic cache sweep timer interval in seconds.
Valid values are from 1 to 3600.
o NoSweep_Interval
Disables the periodic cache sweep timer interval.
o Working_Set_Count=n
Specifies the number of working set entries for the cache.
Valid values are from 1 to 100.
32.11.4.2 – Backing Store Location
Backing_Store_Location=devdir
Specifies the name of the database-specific default directory to
which row cache backing file information is written. The database
system generates a file name (row-cache-name.rdc) automatically
for each row cache backing file it creates when the RCS process
starts up. Specify a device name and directory name; do not
include a file specification. The file name is the row-cache-name
specified when creating the row cache. By default, the location
is the directory of the database root file unless a database-
specific default directory or a cache-specific default directory
has been set.
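As a sketch of this qualifier, the following command sets a
database-level backing store location (the device and directory
DISK$RDC:[RDC] and the database name MF_PERSONNEL are
hypothetical):
$ RMU/SET ROW_CACHE/BACKING_STORE_LOCATION=DISK$RDC:[RDC]/LOG -
_$ MF_PERSONNEL.RDB
After this command, a backing file named row-cache-name.rdc would
be created in DISK$RDC:[RDC] for each cache when the RCS process
starts, unless a cache-specific location overrides this default.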
32.11.4.3 – Disable
Disables row caching. Do not use with the Enable qualifier.
32.11.4.4 – Enable
Enables row caching. Do not use with the Disable qualifier.
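For instance, the Row Cache feature might be enabled without
opening the database as follows (MF_PERSONNEL is a hypothetical
database name; the command requires exclusive database access):
$ RMU/SET ROW_CACHE/ENABLE/LOG MF_PERSONNEL.RDB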
32.11.4.5 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request log output and
the Nolog qualifier to prevent it. If you specify neither, the
default is the current setting of the DCL verify switch.
32.11.4.6 – NoBacking Store Location
Specifies that there is no database-specific default directory to
which row cache backing file information is written.
32.11.5 – Usage Notes
o This command requires exclusive database access (the database
cannot be open or accessed by other users).
o The Alter qualifier can be specified multiple times on the
command line. Each use of the qualifier operates on a unique
cache.
o Only one value can be supplied with the Rad_Hint keyword. The
indicated RAD must contain memory.
o When shared memory is set to System (with Galaxy enabled) or
to Resident, then the process that opens the database must be
granted the VMS$MEM_RESIDENT_USER identifier.
o For applications that can be partitioned into one or more
RADs, the Rad_Hint qualifier allows additional control
over exactly where memory for caches and global sections
is allocated. This control can permit increased performance
if all application processes run in the same RAD, and the
database and row cache global sections also reside in that
same RAD.
o When Resident shared memory is specified, the global demand-
zero pages are always resident in memory and are not backed
by any file on any disk. The pages are not placed into the
process's working set list when the process maps to the global
section and the virtual memory is referenced by the process.
The pages are also not charged against the process's working
set quota or against any page-file quota.
o To save physical memory, Oracle Rdb generally attempts to
create and use shared page tables when creating large resident
global sections.
o The total number of rows for any individual cache (the
combination of live rows and snapshot rows) is limited to
2,147,483,647.
32.11.6 – Examples
Example 1
The following example sets the slot count on cache "mycache".
$ RMU/SET ROW_CACHE/ALTER=(NAME=mycache, SLOT_COUNT=8888)
Example 2
This command disables all caches.
$ RMU/SET ROW_CACHE/DISABLE
Example 3
The following sample specifies that cache "cache2" should use RAD
2.
$ RMU/SET ROW_CACHE/ALTER=(NAME=cache2, SHARED_MEM=(TYPE=RESIDENT, -
_$ RAD_HINT=2))
Example 4
This example drops cache "seacache".
$ RMU/SET ROW_CACHE/ALTER=(NAME=seacache, DROP)
Example 5
This example shows multiple uses of the Alter qualifier.
$ RMU /SET ROW_CACHE MF_PERSONNEL/ALTER=(NAME = RDB$SYS_CACHE, -
_$ SLOT_COUNT = 800, WINDOW_COUNT = 25) -
_$ /ALTER= (NAME = RESUMES,SLOT_SIZE=500,WORKING_SET_COUNT = 15)
Example 6
The following example modifies the database MYDB to set the
snapshot slot count for the cache EMPL_IDX to 25000 slots and
disables snapshots in cache for the SALES cache.
$ RMU /SET ROW_CACHE DGA0:[DB]MYDB.RDB -
_$ /ALTER=(NAME=EMPL_IDX, SNAPSHOT_SLOT_COUNT=25000) -
_$ /ALTER=(NAME=SALES, SNAPSHOT_SLOT_COUNT=0)
Example 7
The following example alters two caches:
$ RMU /SET ROW_CACHE MF_PERSONNEL -
/ALTER= ( NAME = RDB$SYS_CACHE, -
SLOT_COUNT = 800) -
/ALTER= ( NAME = RESUMES, -
SLOT_SIZE=500, -
WORKING_SET_COUNT = 15)
Example 8
The following command alters caches named FOOD and FOOT (and
any other cache defined in the database whose four-character
name begins with "FOO"):
$ RMU /SET ROW_CACHE MF_PERSONNEL -
/ALTER= ( NAME = FOO%, -
BACKING_STORE_LOCATION=DISK$RDC:[RDC])
32.12 – Server
Allows you to identify output files for several database server
processes.
32.12.1 – Description
You can use the Set Server/Output command to identify output log
file names and locations for various database server processes.
The following table shows valid values for the server-type
parameter and the corresponding logical name.
Table 15 Server Types and Logical Names

Server                         Server-type  Logical Name
AIJ Backup Server              ABS          RDM$BIND_ABS_OUTPUT_FILE
AIJ Log Server                 ALS          RDM$BIND_ALS_OUTPUT_FILE
AIJ Log Roll-Forward Server    LRS          RDM$BIND_LRS_OUTPUT_FILE
AIJ Log Catch-Up Server        LCS          RDM$BIND_LCS_OUTPUT_FILE
Database Recovery Server       DBR          RDM$BIND_DBR_LOG_FILE
Row Cache Server               RCS          RDM$BIND_RCS_LOG_FILE
If the output file specification is empty (Output=""), the
log file information for that server will be deleted from the
database.
If an existing logical name specifies an output file name for the
specified server process, it takes precedence over the file name
designated in the Set Server/Output command.
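For example, if a logical name such as the following were
defined, it would take precedence over any output file set with
the Set Server/Output command for the RCS process (the file
specification here is hypothetical):
$ DEFINE/SYSTEM RDM$BIND_RCS_LOG_FILE DISK$LOGS:[RDB]RCS.LOG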
32.12.2 – Format
RMU/Set Server server-type root-file-spec

Command Qualifiers           Defaults

/Log                         None
/Output=file-name            None
32.12.3 – Parameters
32.12.3.1 – server-type
Identifies the server process for which you want to log output.
Refer to the Server Types and Logical Names table in the
Description topic for a list of valid server types and their
corresponding logical names.
32.12.3.2 – root-file-spec
Specifies the database root file for which you want to specify
the server process output file.
32.12.4 – Command Qualifiers
32.12.4.1 – Log
Displays a log message at the completion of the RMU Set command.
32.12.4.2 – Output
Identifies the output log file for several database server
processes.
32.12.5 – Examples
Example 1
This example specifies the output file for the row cache server
and displays a log message when the procedure finishes.
$ RMU /SET SERVER RCS /OUTPUT=RCS_PID.LOG /LOG DUA0:[DB]MYDB.RDB
Example 2
This example specifies the output file for the AIJ log server.
$ RMU /SET SERVER ALS /OUTPUT=ALS$LOGS:ALS_DB1.LOG DUA0:[DB1]MFP.RDB
Example 3
This example deletes the log file information in the database for
the AIJ log roll-forward server.
$ RMU /SET SERVER LRS /OUTPUT="" DUA0:[ZDB]ZDB.RDB
Example 4
This example specifies the output file for the database recovery
server.
$ RMU /SET SERVER DBR /OUTPUT=DBR$LOGS:DBR.LOG DUA0:[ADB]ADB.RDB
32.13 – Shared Memory
Allows you to alter the database shared memory configuration
without requiring that the database be open.
32.13.1 – Description
You can use the RMU Set Shared_Memory command to alter the
database shared memory configuration without requiring that the
database be open.
32.13.2 – Format
RMU/Set Shared_Memory root-file-spec

Command Qualifiers                  Defaults

/[No]Log                            Current DCL verify value
/[No]Rad_Hint=n                     None
/Type={Process|Resident|System}     None
32.13.3 – Parameters
32.13.3.1 – root-file-spec
Specifies the database root file for which you want to modify the
shared memory configuration.
32.13.4 – Command Qualifiers
32.13.4.1 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. Specify the Log qualifier to request log output and
the Nolog qualifier to prevent it. If you specify neither, the
default is the current setting of the DCL verify switch.
32.13.4.2 – Rad Hint
Rad_Hint=n
Norad_Hint
Indicates a request that memory should be allocated from the
specified OpenVMS Alpha Resource Affinity Domain (RAD). This
qualifier specifies a hint to Oracle Rdb and OpenVMS about where
memory should be physically allocated. It is possible that if
the requested memory is not available, it will be allocated from
other RADs in the system. For systems that do not support RADs, a
Rad_Hint value of zero is valid.
The Rad_Hint qualifier is only valid when the shared memory type
is set to Resident. If you set the shared memory type to System
or Process, you disable any previously defined RAD hint.
Use the Norad_Hint qualifier to disable the RAD hint.
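As a sketch, a previously defined RAD hint might be disabled
while keeping resident shared memory (MF_PERSONNEL is a
hypothetical database name):
$ RMU/SET SHARED_MEMORY/TYPE=RESIDENT/NORAD_HINT MF_PERSONNEL.RDB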
NOTE
OpenVMS support for RADs is available only on the
AlphaServer GS series systems. For more information about
using RADs, refer to the OpenVMS Alpha Partitioning and
Galaxy Guide.
32.13.4.3 – Type
Type=option
If you use the Type qualifier, you must specify one of the
following options:
o Process
Specifies a traditional shared memory global section, which
means that the database global section is located in process
(P0) address space and may be paged from the process working
set as needed.
o Resident
Specifies that the database global section is memory resident
in process (P0) address space using OpenVMS Alpha shared page
tables. This means that the global section is fully resident,
or pinned, in memory, and uses less physical and virtual
memory (for process page tables) than a traditional shared
memory global section.
o System
Specifies that the database global section is located in
OpenVMS Alpha system space, which means that the section is
fully resident, or pinned, in memory, does not use process
(P0) address space, and does not affect the quotas of the
working set of a process.
32.13.5 – Usage Notes
o This command requires exclusive database access (the database
cannot be open or accessed by other users).
o Only one value can be supplied to the Rad_Hint qualifier. The
indicated RAD must contain memory.
o When shared memory is set to System (with Galaxy enabled) or
to Resident, then the process that opens the database must be
granted the VMS$MEM_RESIDENT_USER identifier.
o For applications that can be partitioned into one or more
RADs, the Rad_Hint qualifier allows additional control
over exactly where memory for caches and global sections
is allocated. This control can permit increased performance
if all application processes run in the same RAD, and the
database and row cache global sections also reside in that
same RAD.
o When Resident shared memory is specified, the global demand-
zero pages are always resident in memory and are not backed
by any file on any disk. The pages are not placed into the
process's working set list when the process maps to the global
section and the virtual memory is referenced by the process.
The pages are also not charged against the process's working
set quota or against any page-file quota.
o To save physical memory, Oracle Rdb generally attempts to
create and use shared page tables when creating large resident
global sections.
32.13.6 – Examples
Example 1
The following example sets the memory type to Resident and
requests that it be put in RAD 4.
$ RMU/SET SHARED_MEMORY/TYPE=RESIDENT/RAD_HINT=4
Example 2
This example specifies that system space buffers are to be used.
$ RMU/SET SHARED_MEMORY/TYPE=SYSTEM
Example 3
The following example specifies that process address space shared
memory is to be used.
$ RMU/SET SHARED_MEMORY/TYPE=PROCESS/LOG
33 – Show
Displays current information about security audit
characteristics, version numbers, active databases, active users,
active recovery-unit files, after-image journal files, area
inventory pages, corrupt areas and pages, optimizer statistics,
or database statistics related to database activity on your node.
Note that, with the exception of the RMU Show Locks and RMU Show
Users commands, the RMU Show commands display information for
your current node only in a clustered environment.
Oracle RMU provides the following Show commands:
After_Journal
AIP
Audit
Corrupt_Pages
Locks
Optimizer_Statistics
Privilege
Statistics
System
Users
Version
Each show command is described in a separate section.
33.1 – After Journal
Displays the after-image journal configuration in the form
required for the Aij_Options qualifier. You can use the Aij_
Options qualifier with the RMU Copy_Database, RMU Move_Area,
RMU Restore, RMU Restore Only_Root, and RMU Set After_Journal
commands.
Optionally, this command initializes the RDM$AIJ_BACKUP_SEQNO,
RDM$AIJ_COUNT, RDM$AIJ_CURRENT_SEQNO, RDM$AIJ_ENDOFFILE,
RDM$AIJ_FULLNESS, RDM$AIJ_LAST_SEQNO, RDM$AIJ_NEXT_SEQNO, and
RDM$AIJ_SEQNO global process symbols.
NOTE
Prior to Oracle Rdb Version 6.0, the ability to display an
.aij specification was provided through the Rdbalter Display
Root command. The Rdbalter Display Root command no longer
provides this capability.
33.1.1 – Description
The output of the RMU Show After_Journal command appears in the
form shown in Output from the RMU Show After_Journal Command.
This is the form required by the Aij_Options qualifier for the
RMU Copy_Database, Move_Area, and Restore commands. When you
issue the RMU Show After_Journal command, you may see fewer items
than shown in Output from the RMU Show After_Journal Command;
some options do not appear unless you specified them when you
created your after-image journal file configuration (for example,
with the RMU Set After_Journal command).
Figure 1 Output from the RMU Show After_Journal Command
Journal [Is] {Enabled | Disabled} -
[Reserve n] -
[Allocation [Is] n] -
[Extent [Is] n] -
[Overwrite [Is] {Enabled|Disabled}] -
[Shutdown_Timeout [Is] n] -
[Notify [Is] {Enabled|Disabled}] -
[Backups [Are] {Manual|Automatic} -
[[No]Quiet_Point] [File filename]] -
[Cache [Is] {Enabled File filename|Disabled}]
Add [Journal] journal-name -
! File file-specification
File filename -
[Allocation [Is] n] -
[Backup_File filename] -
[Edit_String [Is] (edit-string-options)]
When you use the output from the Show After_Journal command as a
template for the Aij_Options qualifier of the RMU Copy_Database,
Move_Area, and Restore commands, note the following regarding the
syntax:
o As shown in Output from the RMU Show After_Journal Command,
you can use the DCL continuation character (-) at the
end of each line in the Journal and Add clauses. Although
continuation characters are not required if you can fit each
clause (Journal or Add clause) on a single line, using them
might improve readability.
o The Journal Is clause must precede the Add clause.
o Because the Journal clause and the Add clause are two separate
clauses, a continuation character should not be used between
the last option in the Journal clause and the Add clause (or
clauses).
o The journal options file can contain one Journal clause only,
but it can contain several Add clauses. However, the number of
Add clauses cannot exceed the number of reservations made for
.aij files. In addition, if you are enabling journaling, you
must add at least one journal.
o You can specify only one of each option (for example, one
Extent clause, one Cache clause, and so on) for the Journal Is
clause.
The clauses and options have the following meaning:
o Journal Is Enabled
Enables after-image journaling. At least one Add clause must
follow. If this option is omitted, the current journaling
state is maintained.
o Journal Is Disabled
Disables after-image journaling. You can specify other options
or Add clauses but they do not take effect until journaling
is enabled. The Add clause is optional. If this option is
omitted, the current journaling state is maintained.
o Reserve n
Allocates space for an .aij file name for a maximum of n .aij
files. By default, no reservations are made. Note that you
cannot reserve space in a single-file database for .aij files
by using this option with the RMU Move_Area command with the
Aij_Options qualifier. After-image journal file reservations
for a single-file database can be made only when you use the
RMU Convert, RMU Restore, or RMU Copy_Database commands.
o Allocation Is n
Specifies the size (in blocks) of each .aij file. If this
option is omitted, the default allocation size is 512 blocks.
The maximum allocation size you can specify is eight million
blocks.
See the Oracle Rdb Guide to Database Maintenance for guidance
on setting the allocation size.
o Extent Is n
Specifies the maximum size (in blocks) by which an .aij
journal is extended if it is, or becomes, an extensible .aij
journal. (If the number of available after-image journal
files falls to one, extensible journaling is employed.)
If there is insufficient free space on the .aij journal
device, the journal is extended using a smaller extension
value than specified. However, the minimum, and default,
extension size is 512 blocks.
See the Oracle Rdb Guide to Database Maintenance for guidance
on setting the extent size.
o Overwrite Is Enabled
Enables overwriting of journals before they have been backed
up. If this option is omitted, overwriting is disabled.
This option is ignored if only one .aij file is available.
When you specify the Overwrite Is Enabled option, it takes
effect only when two or more .aij files are, or become,
available.
o Overwrite Is Disabled
Disables overwriting of journals before they have been backed
up. If this option is omitted, overwriting is disabled.
o Shutdown_Timeout Is n
Sets the delay from the time a journal failure is detected
until the time the database aborts all access and shuts itself
down. The value n is in minutes.
If this option is omitted, the shutdown timeout is 60 minutes.
The maximum value you can specify is 4320 minutes.
o Notify Is Enabled
Enables operator notification when the journal state changes.
If this option is omitted, operator notification is disabled.
o Notify Is Disabled
Disables operator notification when the journal state changes.
If this option is omitted, operator notification is disabled.
o Backups Are Manual
Automatic backup operations are not enabled. This is the
default behavior.
o Backups Are Automatic [File filename]
Automatic backup operations are triggered by the filling of
a journal. The backup file will have the specified file name
unless a different file name or an edit string is specified in
the Add clause. If this option is omitted, backup operations
are manual.
o Edit String Is (edit-string-options)
Specifies a default edit string to apply to the backup file
when an .aij is backed up automatically. See the description
of the Edit_Filename keyword in Set After_Journal for a
description of the available options. An Edit_String that
appears with the definition of an added journal takes
precedence over this edit string.
o Quiet_Point
Specifies that the after-image journal backup operation is
to acquire the quiet-point lock prior to performing an .aij
backup operation for the specified database.
o Noquiet_Point
Specifies that the after-image journal backup operation will
not acquire the quiet-point lock prior to performing an .aij
backup operation for the specified database.
o Cache Is Enabled File filename
Specifies that a journal cache file should be used. The cache
file must reside on a nonvolatile solid-state disk. If it
does not, caching is ineffectual. See Set After_Journal
for information on what happens if the cache file becomes
inaccessible.
By default, caching is disabled.
o Cache Is Disabled
Specifies that a journal cache file should not be used. This
is the default behavior.
o The Add clause or clauses specify the name and location of the
journal file and the backup file generated by automatic backup
operations as follows:
- Add [Journal] journal-name
Specifies the name for the after-image journal file
described in the Journal clause. The journal-name is the
name of the journal object. A journal object is the journal
file specification plus all the attributes (allocation,
extent, and so on) given to it in the journal clause.
- ! File file-specification
Provides the full file specification and version number of
the .aij file named in the Add clause. This line of output
is provided because the next line (File filename) provides
the string that the user entered when he or she created
the .aij file. For example, if the user entered a file name
only, and this line of output was not provided, you would
have to issue the RMU Dump command to determine in which
directory the file resides.
- File filename
Specifies the file name for the .aij file being added. This
option is required.
- Allocation Is n
Specifies the size of the .aij file (in blocks). If this
option is omitted, the default allocation size is 512
blocks.
See the Oracle Rdb Guide to Database Maintenance for
guidance on setting the allocation size.
- Backup_File filename
Specifies the backup file name for automatic backup
operations. Note that it is not valid to specify a Backup_
File clause in the Add clause if you have specified Backups
Are Manual in the Journal clause; Oracle RMU returns an
error if you attempt to do so.
- Edit String Is (edit-string-options)
Specifies an edit string to apply to the backup file when
the .aij is backed up automatically. See the description
of the Edit_Filename keyword in Set After_Journal for a
description of the available keywords.
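Taken together, the clauses above permit a journal options file
such as the following sketch (the journal names, file
specifications, and sizes are hypothetical):
Journal Is Enabled -
    Reserve 3 -
    Allocation Is 1024 -
    Backups Are Automatic File DISK$BCK:[AIJ]BACKUP
Add Journal AIJ1 -
    File DISK$AIJ:[AIJ]AIJ1
Add Journal AIJ2 -
    File DISK$AIJ:[AIJ]AIJ2
Note that no continuation character appears between the last
option of the Journal clause and the first Add clause, and that
the number of Add clauses does not exceed the Reserve count.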
33.1.2 – Format
RMU/Show After_Journal root-file-spec

Command Qualifiers           Defaults

/[No]Backup_Context          /Nobackup_Context
/Output[=file-name]          SYS$OUTPUT
33.1.3 – Parameters
33.1.3.1 – root-file-spec
The root file specification of the database for which you want
the after-image journal configuration to be displayed.
33.1.4 – Command Qualifiers
33.1.4.1 – Backup Context
Backup_Context
Nobackup_Context
The Backup_Context qualifier specifies that the following
symbols be initialized, unless you have issued a DCL SET
SYMBOL/SCOPE=(NOLOCAL, NOGLOBAL) command:
o RDM$AIJ_SEQNO
Contains the sequence number of the last .aij backup file
written to tape. This symbol has a value identical to RDM$AIJ_
BACKUP_SEQNO. RDM$AIJ_SEQNO was created prior to Oracle Rdb
Version 6.0 and is maintained for compatibility with previous
versions of Oracle Rdb.
o RDM$AIJ_CURRENT_SEQNO
Contains the sequence number of the currently active .aij
file. A value of -1 indicates that after-image journaling is
disabled.
o RDM$AIJ_NEXT_SEQNO
Contains the sequence number of the next .aij file that
needs to be backed up. This symbol always contains a
nonnegative integer value (which may be 0).
o RDM$AIJ_LAST_SEQNO
Contains the sequence number of the last .aij file available
for a backup operation, which is different from the current
sequence number if fixed-size journaling is being used. A
value of -1 indicates that no journal has ever been backed up.
If the value of the RDM$AIJ_NEXT_SEQNO symbol is greater than
the value of the RDM$AIJ_LAST_SEQNO symbol, then no more .aij
files are currently available for the backup operation.
o RDM$AIJ_BACKUP_SEQNO
Contains the sequence number of the last .aij file backed
up (completed) by the backup operation. This symbol is set
at the completion of an .aij backup operation. A value of -
1 indicates that this process has not yet backed up an .aij
file.
o RDM$AIJ_COUNT
Contains the number of available .aij files.
o RDM$AIJ_ENDOFFILE
Contains the end of file block number for the current AIJ
journal.
o RDM$AIJ_FULLNESS
Contains the percent fullness of the current AIJ journal.
o RDM$HOT_STANDBY_STATE - Contains the current replication
state. Possible state strings and the description of each
state are listed below:
- "Inactive" - Inactive
- "DB_Bind" - Binding to database
- "Net_Bind" - Binding to network
- "Restart" - Replication restart activity
- "Connecting" - Waiting for LCS to connect
- "DB_Synch" - Database synchronization
- "Activating" - LSS server activation
- "SyncCmpltn" - LRS synchronization redo completion
- "Active" - Database replication
- "Completion" - Replication completion
- "Shutdown" - Replication cleanup
- "Net_Unbind" - Unbinding from network
- "Recovery" - Unbinding from database
- "Unknown" - Unknown state or unable to determine state
o RDM$HOT_STANDBY_SYNC_MODE - Contains the current replication
synchronization mode when replication is active. Possible
synchronization mode strings are listed below:
o "Cold"
o "Warm"
o "Hot"
o "Commit"
o "Unknown"
The Nobackup_Context qualifier, which is the default, specifies
that the preceding symbols are not initialized.
Note that these are string symbols, not integer symbols, even
though their equivalence values are numbers. Therefore, performing
arithmetic operations on them directly produces unexpected
results.
If you need to perform arithmetic operations with these symbols,
first convert the string symbol values to numeric symbol values
using the OpenVMS F$INTEGER lexical function. For example:
$ SEQNO_RANGE = F$INTEGER(RDM$AIJ_LAST_SEQNO) -
_$ - F$INTEGER(RDM$AIJ_NEXT_SEQNO)
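As a further sketch, assuming a prior RMU Show After_Journal
command with the Backup_Context qualifier has defined these
symbols, a DCL procedure might test whether any .aij files remain
to be backed up:

$ RMU/SHOW AFTER_JOURNAL/BACKUP_CONTEXT MF_PERSONNEL
$ IF F$INTEGER(RDM$AIJ_NEXT_SEQNO) .GT. F$INTEGER(RDM$AIJ_LAST_SEQNO) -
    THEN WRITE SYS$OUTPUT "No .aij files are available for backup"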
33.1.4.2 – Output
Output[=file-name]
Specifies the name of the file where output is sent. The default
is SYS$OUTPUT. The default output file extension is .lis, if you
specify only a file name.
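For example, the following command writes the display to the
file AIJINFO.LIS (the .lis extension is applied because only a
file name is given):

$ RMU/SHOW AFTER_JOURNAL/OUTPUT=AIJINFO MF_PERSONNEL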
33.1.5 – Usage Notes
o To use the RMU Show After_Journal command for a database, you
must have the RMU$BACKUP, RMU$RESTORE, or RMU$VERIFY privilege
in the root file access control list (ACL) for the database or
the OpenVMS SYSPRV or OpenVMS BYPASS privilege.
33.1.6 – Examples
Example 1
The following example shows the output from the RMU Show After_
Journal command when one journal is available, which means
extensible journaling will be used. The commented line is
generated by the RMU Show After_Journal command to display the
full file specification for the added .aij file. The next line
shows the actual file specification entered by the user when the
.aij file configuration was created. In this example, the user did
not enter a full specification; therefore, only the file name
appears in the uncommented portion of the output.
$ RMU/SHOW AFTER_JOURNAL MF_PERSONNEL
JOURNAL IS ENABLED -
RESERVE 1 -
ALLOCATION IS 512 -
EXTENT IS 512 -
OVERWRITE IS DISABLED -
SHUTDOWN_TIMEOUT IS 60 -
NOTIFY IS DISABLED -
BACKUPS ARE MANUAL -
CACHE IS DISABLED
ADD JOURNAL AIJ_ONE -
! FILE USER2:[JOURNALONE]AIJ1.AIJ;1
FILE AIJ1.AIJ -
BACKUP DISK1:[BACKUP_AIJ]AIJ1BCK.AIJ; -
EDIT_STRING IS (SEQUENCE)
ALLOCATION IS 512
Example 2
The following example shows the output from the RMU Show After_
Journal command when two journal files are enabled, which means
fixed-size journaling will be used. In this example, the user
entered a full file specification for the .aij file when the .aij
file configuration was created. Thus, the commented line and the
one appearing below it are identical with the exception of the
file version:
$ RMU/SHOW AFTER_JOURNAL MF_PERSONNEL
JOURNAL IS ENABLED -
RESERVE 2 -
ALLOCATION IS 512 -
EXTENT IS 512 -
OVERWRITE IS DISABLED -
SHUTDOWN_TIMEOUT IS 60 -
NOTIFY IS DISABLED -
BACKUPS ARE MANUAL -
CACHE IS DISABLED
ADD JOURNAL AIJ_ONE.AIJ -
! FILE DISK2:[AIJ]AIJ1.AIJ;1
FILE DISK2:[AIJ]AIJ1.AIJ -
BACKUP DISK1:[BACKUP_AIJ]AIJ1BCK.AIJ; -
EDIT_STRING IS (SEQUENCE)
ALLOCATION IS 512
ADD JOURNAL AIJ_TWO.AIJ -
! FILE DISK3:[AIJTWO]AIJ2.AIJ;1
FILE DISK3:[AIJTWO]AIJ2.AIJ -
BACKUP DISK1:[BACKUP_AIJ]AIJ2BCK.AIJ; -
EDIT_STRING IS (SEQUENCE)
ALLOCATION IS 512
Example 3
The following example uses the RMU Show After_Journal command
to show the settings of the symbolic names for the .aij sequence
numbers before and after the RMU Backup command is executed:
$ RMU/SHOW AFTER_JOURNAL/BACKUP_CONTEXT MF_PERSONNEL
JOURNAL IS ENABLED -
RESERVE 4 -
ALLOCATION IS 512 -
EXTENT IS 512 -
OVERWRITE IS DISABLED -
SHUTDOWN_TIMEOUT IS 60 -
NOTIFY IS DISABLED -
BACKUPS ARE MANUAL -
CACHE IS DISABLED
ADD JOURNAL AIJ2 -
! FILE DISK2:[DB]AIJ_TWO;1
FILE DISK2:[DB]AIJ_TWO -
ALLOCATION IS 512
ADD JOURNAL AIJ3 -
! FILE DISK3:[DB]AIJ_THREE;1
FILE DISK3:[DB]AIJ_THREE -
ALLOCATION IS 512
$ SHOW SYMBOL RDM$AIJ*
RDM$AIJ_COUNT == "2"
RDM$AIJ_CURRENT_SEQNO == "0"
RDM$AIJ_ENDOFFILE == "1"
RDM$AIJ_FULLNESS == "0"
RDM$AIJ_LAST_SEQNO = "-1"
RDM$AIJ_NEXT_SEQNO = "0"
$ RMU/BACKUP/AFTER MF_PERSONNEL AIJ_TWO, AIJ_THREE
%RMU-I-LOGBCKAIJ, backing up after-image journal RDM$JOURNAL
%RMU-I-AIJBCKSEQ, backing up current after-image journal sequence
number 0
$ RMU/SHOW AFTER_JOURNAL/BACKUP_CONTEXT MF_PERSONNEL
.
.
.
$ SHOW SYMBOL RDM$AIJ*
RDM$AIJ_BACKUP_SEQNO == "-1"
RDM$AIJ_COUNT == "2"
RDM$AIJ_CURRENT_SEQNO = "1"
RDM$AIJ_ENDOFFILE == "1"
RDM$AIJ_FULLNESS == "0"
RDM$AIJ_LAST_SEQNO = "0"
RDM$AIJ_NEXT_SEQNO = "1"
RDM$AIJ_SEQNO == "-1"
33.2 – AIP
Displays the contents of the AIP (Area Inventory Pages)
structure. The AIP structure provides a mapping from logical
areas to physical areas and describes each logical area.
Information such as the logical area name, the length of the
stored record, and the storage thresholds can be displayed using
this simple command interface.
33.2.1 – Description
The RMU Show AIP command allows the database administrator to
display details of selected logical areas or all logical areas in
the database.
33.2.2 – Format
RMU/Show AIP root-file-spec [larea-name]

Command Qualifiers                     Defaults

/Brief                                 See description
/Larea=(n [,...])                      See description
/Parea=(n [,...])                      See description
/Option=Rebuild_Spams                  See description
/Output=output-filename                /Output=SYS$OUTPUT
/Type=type-name                        See description
33.2.3 – Parameters
33.2.3.1 – root-file-spec
The file specification for the database root file to be
processed. The default file extension is .rdb.
33.2.3.2 – larea-name
An optional parameter that selects logical areas by name; only
the matching AIP entries are displayed. If this parameter is
omitted, all logical areas are displayed.
Any partitioned index or table will create multiple logical areas
all sharing the same name. This string may contain standard
OpenVMS wildcard characters (% and *) so that different names
can be matched. Therefore, it is possible for many logical areas
to match this name.
The value of larea-name may be delimited so that mixed case
characters, punctuation and various character sets can be used.
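For example, the first command below uses a wildcard, and the
second uses a delimited mixed-case name (the logical area name
"Employee_Info" is hypothetical):

$ RMU/SHOW AIP MF_PERSONNEL EMP*
$ RMU/SHOW AIP MF_PERSONNEL "Employee_Info"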
33.2.4 – Command Qualifiers
33.2.4.1 – Brief
Brief
Displays AIP information in a condensed, tabular form (see
example below).
33.2.4.2 – Larea
Larea=(n [,...])
Specifies a list of logical area identifiers. The Larea qualifier
and the larea-name parameter are mutually exclusive. If neither
the Larea nor Parea qualifier nor the larea-name parameter is
specified, all AIP entries are displayed.
33.2.4.3 – Parea
Parea=(n [,...])
Specifies a list of physical area identifiers. The Parea qualifier
and the larea-name parameter are mutually exclusive. If neither
the Parea nor Larea qualifier nor the larea-name parameter is
specified, all AIP entries are displayed.
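For example, the following commands display the AIP entries for
logical areas 80 and 81, and for all logical areas stored in
physical areas 3 and 4:

$ RMU/SHOW AIP/LAREA=(80,81) MF_PERSONNEL
$ RMU/SHOW AIP/PAREA=(3,4) MF_PERSONNEL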
33.2.4.4 – Option
Option=REBUILD_SPAMS
Display only those logical areas which have the REBUILD_SPAMS
flag set.
33.2.4.5 – Output
Output [ = output-filename ]
This qualifier is used to capture the output in a named file. If
used, a standard RMU header is added to identify the command and
database being processed. If omitted, the output is written to
SYS$OUTPUT and no header is displayed.
33.2.4.6 – Type
Type = type-name
Legal values for type-name are TABLE, SORTED_INDEX, HASH_INDEX,
LARGE_OBJECT, and SYSTEM_RECORD.
This qualifier is used in conjunction with larea-name to select
a subset of the AIP entries that may match a name. For instance,
it is legal in Rdb to create a table and an index with the name
EMPLOYEES. So using EMPLOYEES/TYPE=TABLE will make the selection
unambiguous. It also allows simpler wildcarding. Commands using
*EMPLOYEE*/TYPE=TABLE will process only those tables that match
and not the associated index logical areas.
33.2.5 – Usage Notes
o The database administrator requires RMU$DUMP privilege as
this command is closely related to the RMU DUMP LAREA=RDB$AIP
command.
o Only AIP entries that are in use are displayed. In contrast,
the RMU Dump command also displays deleted and unused AIP
entries.
33.2.6 – Examples
Example 1
This example uses the name of a known database table to display
details for this single logical area.
$ RMU/SHOW AIP SQL$DATABASE JOBS
Logical area name JOBS
Type: TABLE
Logical area 85 in mixed physical area 7
Physical area name JOBS
Record length 41
Thresholds are (0, 0, 0)
AIP page number: 151
ABM page number: 0
Snapshot Enabled TSN: 64
Example 2
The wildcard string "*EMPLOYEE*" matches both index and table
logical areas, so here we use /TYPE to limit the display to just
table logical areas. The table EMPLOYEES in the MF_PERSONNEL
database is partitioned across three storage areas; hence, there
are three logical areas.
$ RMU/SHOW AIP SQL$DATABASE *EMPLOYEE*/TYPE=TABLE
Logical area name EMPLOYEES
Type: TABLE
Logical area 80 in mixed physical area 3
Physical area name EMPIDS_LOW
Record length 126
Thresholds are (0, 0, 0)
AIP page number: 150
ABM page number: 0
Snapshot Enabled TSN: 4800
Logical area name EMPLOYEES
Type: TABLE
Logical area 81 in mixed physical area 4
Physical area name EMPIDS_MID
Record length 126
Thresholds are (0, 0, 0)
AIP page number: 151
ABM page number: 0
Snapshot Enabled TSN: 1504
Logical area name EMPLOYEES
Type: TABLE
Logical area 82 in mixed physical area 5
Physical area name EMPIDS_OVER
Record length 126
Thresholds are (0, 0, 0)
AIP page number: 151
ABM page number: 0
Snapshot Enabled TSN: 1504
Example 3
This example shows the REBUILD_SPAMS option used to locate
logical areas that require SPAM rebuilds. This may occur because
the stored row length changed size or THRESHOLDS were modified
for the index or storage map.
$ RMU/SHOW AIP/OPTION=REBUILD_SPAMS
_Root: SQL$DATABASE
_Logical area name:
Logical area name ACCOUNT_AUDIT
Type: TABLE
Logical area 86 in uniform physical area 1
Physical area name RDB$SYSTEM
Record length 12
Thresholds are (10, 100, 100)
Flags:
SPAM pages should be rebuilt
AIP page number: 151
ABM page number: 1004
Snapshot Enabled TSN: 5824
Logical area name DEPARTMENTS_INDEX
Type: SORTED INDEX
Logical area 94 in uniform physical area 10
Physical area name DEPARTMENT_INFO
Record length 430
Thresholds are (30, 65, 72)
Flags:
SPAM pages should be rebuilt
AIP page number: 151
ABM page number: 2
Snapshot Enabled TSN: 7585
Example 4
The /BRIEF qualifier specifies that a condensed tabular output
format be used. The /PAREA qualifier is used here to specify that
only logical areas stored in physical areas 4 and 5 are to be
displayed.
$ RMU /SHOW AIP /BRIEF MF_PERSONNEL /PAREA=(4,5)
*------------------------------------------------------------------------------
* Logical Area Name LArea PArea Len Type
*------------------------------------------------------------------------------
RDB$SYSTEM_RECORD 60 4 215 SYSTEM RECORD
RDB$SYSTEM_RECORD 61 5 215 SYSTEM RECORD
EMPLOYEES_HASH 79 4 215 HASH INDEX
EMPLOYEES 82 4 121 TABLE
JOB_HISTORY_HASH 85 4 215 HASH INDEX
JOB_HISTORY 88 4 42 TABLE
DEPARTMENTS_INDEX 89 5 430 SORTED INDEX
DEPARTMENTS 90 5 55 TABLE
The columns displayed include:
o Logical Area Name - Name of the logical area stored in the AIP
entry
o LArea - Logical area number stored in the AIP entry
o PArea - Physical area number stored in the AIP entry
o Len - Object length stored in the AIP entry
o Type - Object type stored in the AIP entry. The following
object types may be displayed:
- UNKNOWN - The logical area type is unknown or has not been
set
- TABLE - A data table type
- SORTED INDEX - A sorted index type
- HASH INDEX - A hashed index type
- SYSTEM RECORD - A system record type
- LARGE OBJECT - A large object (BLOB) type
33.3 – Audit
Displays the set of security auditing characteristics established
by the RMU Set command with Audit qualifier.
33.3.1 – Description
The RMU Show Audit command is the Oracle Rdb equivalent to the
DCL SHOW AUDIT command. Because Oracle Rdb security auditing uses
many OpenVMS system-level auditing mechanisms, certain auditing
characteristics such as /FAILURE_MODE can only be displayed
using the OpenVMS SHOW AUDIT command, which requires the OpenVMS
SECURITY privilege.
33.3.2 – Format
RMU/Show Audit root-file-spec

Command Qualifiers                     Defaults

/All                                   See description
/Daccess[=object-type[,...]]           See description
/Every                                 See description
/Flush                                 See description
/Identifiers                           See description
/Output[=file-name]                    /Output=SYS$OUTPUT
/Protection                            See description
/Rmu                                   See description
/Type={Alarm|Audit}                    Alarm and Audit
33.3.3 – Parameters
33.3.3.1 – root-file-spec
The root file specification of the database for which you want
auditing information to be displayed.
33.3.4 – Command Qualifiers
33.3.4.1 – All
All
Displays all available auditing information for the database,
including the following: whether security auditing and security
alarms are started or stopped; types of security events currently
enabled for alarms and audits; identifiers currently enabled
for auditing; and whether forced write operations are enabled or
disabled.
33.3.4.2 – Daccess
Daccess[=object-type[, . . . ]]
Indicates whether the general DACCESS audit event class is
currently enabled. Specifying one or more object types with the
Daccess qualifier displays the object types and their associated
privileges that are currently enabled for DACCESS event auditing.
If you specify more than one object type, enclose the list of
object types within parentheses.
The valid object types are:
DATABASE
TABLE
COLUMN
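For example, the following command displays the DACCESS event
auditing settings for the DATABASE and TABLE object types:

$ RMU/SHOW AUDIT/DACCESS=(DATABASE,TABLE) MF_PERSONNEL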
33.3.4.3 – Every
Every
Displays whether the first occurrence or every occurrence of a
DACCESS event is audited for the database.
33.3.4.4 – Flush
Flush
Displays the current setting for forced write operations on audit
journal records for the database.
33.3.4.5 – Identifiers
Identifiers
Displays the user identification codes (UICs) of the users
currently enabled for DACCESS event auditing of specified
objects.
33.3.4.6 – Output
Output[=file-name]
Controls where the output of the command is sent. If you do not
enter the Output qualifier, or if you enter the Output qualifier
without a file specification, the output is sent to the current
process default output stream or device.
33.3.4.7 – Protection
Protection
Displays whether auditing is currently enabled for the PROTECTION
audit event class.
33.3.4.8 – Rmu
Rmu
Displays whether auditing is currently enabled for the RMU event
class.
33.3.4.9 – Type
Type=Alarm
Type=Audit
Displays information about security alarms or security auditing.
If you do not specify the Type qualifier, Oracle RMU displays
information about both security alarms and security auditing.
33.3.5 – Usage Notes
o To use the RMU Show Audit command for a database, you must
have the RMU$SECURITY privilege in the root file ACL for the
database or the OpenVMS SECURITY or BYPASS privilege.
o If you do not specify any qualifiers with the RMU Show Audit
command, the currently enabled alarm and audit security events
are displayed.
o Use the RMU Show Audit command to check which auditing
features are enabled whenever you plan to enable or disable
audit characteristics with a subsequent RMU Set Audit command.
o When the RMU Show Audit command is issued for a closed
database, the command executes without other users being able
to attach to the database.
33.3.6 – Examples
Example 1
The following command shows that alarms are enabled for the RMU
and PROTECTION audit classes for the mf_personnel database. Note
that the display shows that alarms are also enabled for the AUDIT
audit class. The AUDIT audit class is always enabled and cannot
be disabled.
$ RMU/SHOW AUDIT/ALL MF_PERSONNEL
Security auditing STOPPED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
ACCESS (disabled)
Security alarms STOPPED for:
PROTECTION (enabled)
RMU (enabled)
AUDIT (enabled)
ACCESS (disabled)
Audit flush is disabled
Audit every access
Enabled identifiers:
None
Example 2
In the following example, the first command enables and starts
alarms for the RMU audit class for the mf_personnel database.
Following the first command is the alarm that is displayed on
a security terminal when the first command is executed. The
second command displays the auditing characteristics that have
been enabled and started. The RMU Show Audit command with the
All qualifier causes the alarm at the end of the example to be
displayed on the security terminal. Note that security-enabled
terminals only receive alarms if alarms have been both enabled
and started.
$ RMU/SET AUDIT/TYPE=ALARM/ENABLE=RMU/START MF_PERSONNEL
%%%%%%%%%%% OPCOM 8-JUL-1996 09:41:01.19 %%%%%%%%%%%
Message from user RICK on MYNODE
Oracle Rdb Security alarm (SECURITY) on MYNODE, system id: 32327
Database name: DDV21:[RICK.SQL]MF_PERSONNEL.RDB;1
Auditable event: Auditing change
PID: 21212274
Event time: 8-JUL-1996 09:41:01.17
User name: RICK
RMU command: RMU/SET AUDIT/TYPE=ALARM/ENABLE=RMU/START MF_PERSONNEL
Sub status: RMU required privilege
Final status: %SYSTEM-S-NORMAL
RMU privilege used: RMU$SECURITY
$ RMU/SHOW AUDIT/ALL MF_PERSONNEL
Security auditing STOPPED for:
PROTECTION (disabled)
RMU (disabled)
AUDIT (enabled)
ACCESS (disabled)
Security alarms STARTED for:
PROTECTION (disabled)
RMU (enabled)
AUDIT (enabled)
ACCESS (disabled)
Audit flush is disabled
Audit every access
Enabled identifiers:
None
%%%%%%%%%%% OPCOM 8-JUL-1996 09:43:07.94 %%%%%%%%%%%
Message from user RICK on MYNODE
Oracle Rdb Security alarm (SECURITY) on MYNODE, system id: 32327
Database name: DDV21:[RICK.SQL]MF_PERSONNEL.RDB;1
Auditable event: Attempted RMU command
PID: 21212274
Event time: 8-JUL-1996 09:43:07.92
User name: RICK
RMU command: RMU/SHOW AUDIT/ALL MF_PERSONNEL
Access requested: RMU$SECURITY
Sub status: RMU required privilege
Final status: %SYSTEM-S-NORMAL
RMU privilege used: RMU$SECURITY
33.4 – Corrupt Pages
Indicates which pages, storage areas, or snapshot files are
corrupt or inconsistent by displaying the contents of the corrupt
page table (CPT). Corrupt pages are logged to the CPT, which is
maintained in the database root file.
33.4.1 – Format
RMU/Show Corrupt_Pages root-file-spec

Command Qualifiers                     Defaults

/Options=({Normal|Debug|Full})         /Options=(Normal)
/Output[=file-name]                    /Output=SYS$OUTPUT
33.4.2 – Parameters
33.4.2.1 – root-file-spec
The root file specification of the database for which you want
the corrupt or inconsistent storage areas or snapshot files
logged to the CPT to be displayed.
33.4.3 – Command Qualifiers
33.4.3.1 – Options
Options=Normal
Options=Full
Options=Debug
Specifies the type of information you want displayed, as follows:
o Normal
Displays the active CPT entries and the corrupt or
inconsistent areas sorted by area and page.
o Full
Displays the same information as Normal, plus the disks on
which the active CPT entries and the corrupt or inconsistent
areas or snapshot files are stored, sorted by disk, area, and
page.
o Debug
Provides a dump of the entire CPT and lists all the storage
areas.
Options=(Normal) is the default qualifier.
33.4.3.2 – Output
Output[=file-name]
Specifies the name of the file where output is sent. The default
is SYS$OUTPUT. The default output file extension is .lis, if you
specify only a file name.
33.4.4 – Usage Notes
o To use the RMU Show Corrupt_Pages command for a database, you
must have the RMU$BACKUP, RMU$RESTORE, or RMU$VERIFY privilege
in the root file access control list (ACL) for the database or
the OpenVMS SYSPRV or OpenVMS BYPASS privilege.
o You can repair and remove a corrupt snapshot file from
the CPT by issuing the RMU Repair command with the
Initialize=(Snapshots) qualifier. Using the Repair command
in this case is faster than performing a restore operation.
See Repair for details.
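For example, a corrupt snapshot file logged to the CPT might be
repaired as follows:

$ RMU/REPAIR/INITIALIZE=(SNAPSHOTS) MF_PERSONNEL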
33.4.5 – Examples
Example 1
The following example shows the output from the RMU Show Corrupt_
Pages command when page 1 in area 3 is marked as corrupt:
$ RMU/SHOW CORRUPT_PAGES MF_PERSONNEL
*-------------------------------------------------------------------
* Oracle Rdb V7.0-00 8-JUL-1996 13:46:20.77
*
* Dump of Corrupt Page Table
* Database: USER1:[DB]MF_PERSONNEL.RDB;1
*
*--------------------------------------------------------------------
Entries for storage area EMPIDS_MID
-----------------------------------
Page 1
- AIJ recovery sequence number is -1
- Area ID number is 3
- Consistency transaction sequence number is 0:0
- State of page is: corrupt
*-------------------------------------------------------------------
* Oracle Rdb V7.0-00 8-JUL-1996 13:46:21.17
*
* Dump of Storage Area State Information
* Database: USER1:[DB]MF_PERSONNEL.RDB;1
*
*--------------------------------------------------------------------
All storage areas are consistent.
33.5 – Locks
Displays current information about the OpenVMS locks database on
your node. It provides information concerning lock activity and
contention for all active databases.
33.5.1 – Description
In a clustered environment, the RMU Show Locks command displays
detailed lock information for your current node and may display
information about known remote locks.
The RMU Show Locks command displays information about process
locks for all active databases on a specific node. A process
requesting a lock can have one of three states: owning, blocking,
or waiting. A process is considered to be owning when the lock
request is granted. A process is considered to be blocking when
the lock request is granted and its mode is incompatible with
other waiting locks. A process is considered to be waiting when
it is prevented from being granted a lock due to the presence
of other granted locks whose modes are incompatible with the
process' requested mode.
Using the RMU/SHOW LOCKS command can be difficult on systems
with multiple open databases due to the amount of output and
difficulty in determining what database a particular lock
references. The RMU/SHOW LOCKS command, when supplied with a
root file specification, can be used to additionally filter
lock displays to a specific database. Note that in some cases
the RMU/SHOW LOCKS command may be unable to filter locks prior
to display. In addition, when the "LOCK PARTITIONING IS ENABLED"
feature is used for a database, the RMU/SHOW LOCKS command with
a root file specification will be unable to associate area, page,
and record locks with the specified database because the database
lock is not the lock tree root for these lock types.
The Mode qualifier values Blocking and Waiting can be combined
with the Process and Lock qualifiers to indicate which of the
following types of information is displayed:
o If the Blocking option is specified, information is displayed
about processes whose locks are blocking other processes'
locks.
o If the Waiting option is specified, information is displayed
about processes whose locks are waiting for other processes'
locks.
o If the Process qualifier is specified, information is
displayed for a specified list of processes.
o If the Lock qualifier is specified, information is displayed
for a specified list of locks.
When no qualifiers are specified, a list of all active locks in
the OpenVMS locks database is displayed.
Use the qualifiers individually or in combination to display the
required output. See Lock Qualifier Combinations for all possible
qualifier combinations and the types of output they produce.
If you do not specify any qualifiers, a complete list of locks
is displayed. The volume of information from this report can
be quite large. Therefore, you should use the Output qualifier
to direct output to a file, instead of allowing the output
to display to SYS$OUTPUT. Each output contains a heading that
indicates what qualifiers, if any, were used to generate the
output.
Table 16 Lock Qualifier Combinations

Object         Mode Argument      Option     Output
                                  Argument
--------------------------------------------------------------------
Process                                      Locks for the specified
                                             processes
Process        Blocking                      Processes blocking the
                                             specified processes
Process        Waiting                       Processes waiting for
                                             the specified processes
Process                           All        Process locks for the
                                             specified processes
Process                           Full       Special process locks
                                             for the specified
                                             processes
Process        Blocking, Waiting             Processes blocking and
                                             waiting for the
                                             specified processes
Process        Blocking           Full       Special process locks
                                             blocking the specified
                                             processes
Process        Waiting            Full       Special process locks
                                             waiting for the
                                             specified processes
Process        Blocking, Waiting  Full       Special process locks
                                             blocking and waiting
                                             for the specified
                                             processes
Process                           All, Full  Process and special
                                             process locks for the
                                             specified processes
Lock                                         Locks for the specified
                                             locks
Lock           Blocking                      Processes blocking the
                                             specified locks
Lock           Waiting                       Processes waiting for
                                             the specified locks
Lock                              Full       Special process locks
                                             for the specified locks
Lock           Blocking           Full       Special process locks
                                             blocking the specified
                                             locks
Lock           Waiting            Full       Special process locks
                                             waiting for the
                                             specified locks
Lock           Blocking, Waiting             Processes blocking and
                                             waiting for the
                                             specified locks
Lock           Blocking, Waiting  Full       Special process locks
                                             blocking and waiting
                                             for the specified locks
               Blocking                      Lock requests that are
                                             blocked
               Waiting                       Lock requests that are
                                             waiting
               Blocking, Waiting             Lock requests that are
                                             blocking and waiting
Process, Lock                                Locks for specified
                                             processes and locks
Process, Lock  Blocking                      Processes blocking the
                                             specified processes and
                                             locks
Process, Lock  Waiting                       Processes waiting for
                                             the specified processes
                                             and locks
Process, Lock  Blocking, Waiting             Processes blocking and
                                             waiting for the
                                             specified processes and
                                             locks
Process, Lock  Blocking           Full       Special process locks
                                             blocking the specified
                                             processes and locks
Process, Lock  Waiting            Full       Special process locks
                                             waiting for the
                                             specified processes and
                                             locks
Process, Lock                     All        Process locks for the
                                             specified processes and
                                             locks
Process, Lock                     Full       Special process locks
                                             for the specified
                                             processes and locks
Process, Lock                     All, Full  Process and special
                                             process locks for the
                                             specified processes and
                                             locks
You can display only those processes that you have privilege to
access. Furthermore, certain special database processes are not
displayed, unless you specifically indicate that all processes
are to be displayed. The report heading indicates what qualifiers
were used to generate the output.
33.5.2 – Format
RMU/Show Locks [root-file-spec]

Command Qualifiers                     Defaults

/Lock=lock-list                        None
/Mode=(mode-list)                      None
/Options=(option-list)                 See description
/Output[=file-name]                    /Output=SYS$OUTPUT
/Process=process-list                  None
/Resource_type=resource-type-list      None
33.5.3 – Parameters
33.5.3.1 – root-file-spec
The root file specification of the database for which you want to
filter lock displays. Optional parameter.
33.5.4 – Command Qualifiers
33.5.4.1 – Lock
Lock=lock-list
Displays information for each of the specified locks. When
combined with the Mode=Blocking qualifier, the Lock qualifier
displays information about processes whose locks are blocking the
specified locks. When combined with the Mode=Waiting qualifier,
the Lock qualifier displays information about processes whose
lock requests are waiting for the specified locks.
One or more locks can be specified; if more than one lock is
specified, they must be enclosed in parentheses and separated
by commas. The lock identifier is an 8-digit hexadecimal number,
and must be local to the node on which the RMU Show Locks command
is issued. To see the lock identifier upon which a process is
waiting, you can do either of the following:
o Invoke the character cell Performance Monitor "Stall Messages"
display.
o Invoke the Performance Monitor from your PC and select
Displays.
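For example, once a waiting lock identifier has been obtained
from the Stall Messages display (the value 3B00A2C4 below is
hypothetical), the blocking processes can be listed as follows:

$ RMU/SHOW LOCKS/LOCK=3B00A2C4/MODE=BLOCKING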
33.5.4.2 – Mode
Mode=(mode-list)
Indicates the lock mode to be displayed. If you specify more than
one option in the mode-list, you must separate the options with
a comma, and enclose the mode-list in parentheses. The following
lock mode options are available:
o Blocking
Displays the set of processes whose locks are blocking the
lock requests of other processes. A process is considered
to be waiting when it has requested a lock mode that is
incompatible with existing granted lock modes; in this case,
the requestor is the waiting process and the grantors are the
blocking processes.
The first line of output identifies a process that is waiting
for a lock request to be granted. All subsequent lines of
output identify those processes that are preventing the
lock request from being granted. When multiple processes
are waiting for the same lock resource, multiple sets of
process-specific information, one for each waiting process,
are displayed.
o Culprit
Displays the set of locks for processes that are blocking
other processes but are not themselves blocked. The output
represents the processes that are the source of database
stalls and performance degradation.
o Waiting
Displays the set of processes whose lock requests are waiting
due to incompatible granted locks for other processes. A
process is considered to be blocking others when it has been
granted a lock mode that is incompatible with requested lock
modes; in this case, the "Blocker" is the blocking process and
the "Waiting" are the waiting processes.
A requesting process can appear to be waiting for other
lock requestors. This condition occurs when there are many
processes waiting on the same lock resource. Depending upon
the sequence of processes in the wait queue, certain waiting
processes appear to be blocking other waiting processes
because, eventually, they will be granted the lock first.
The first line of output identifies a process that has been
granted a lock on a resource. All subsequent lines of output
identify those processes that are waiting for lock requests on
the same resource to be granted. When multiple processes are
blocking the same lock resource, multiple sets of process-
specific information, one for each blocking process, are
displayed.
33.5.4.3 – Options
Options=(option-list)
Indicates the type of information and the level of detail the
output will include. If you do not specify the Options qualifier,
the default output is displayed. If you specify more than one
type of output for the Options qualifier, you must separate
the options with a comma, and enclose the options list within
parentheses. The following options are available:
o All
Used when you want the complete list of process locks; by
default, lock information for only the specified process is
displayed. When you specify the All option, information is
also displayed for all other processes that are interested in
the locks held by the specified process. This is an easy way
to display all of a process's locks and to see which other
processes are using the same resources.
If the Mode qualifier is specified, the Options=(All)
qualifier is ignored.
o Full
Indicates that special database processes are to be displayed.
Some special database processes, such as monitors, perform
work on behalf of a database. These database processes
frequently request locks that by design conflict with other
processes' locks; the granting of these locks indicates an
important database event.
By default, these special database processes are not displayed
because they increase the size of the output.
33.5.4.4 – Output
Output[=file-name]
Specifies the name of the file where output is sent. The default
is SYS$OUTPUT. The default output file extension is .lis, if you
specify only a file name.
33.5.4.5 – Process
Process=process-list
Displays information for each lock held or requested by the
specified processes when used by itself. When the Process
qualifier is combined with the Mode=Blocking qualifier,
information is displayed about processes whose locks are blocking
lock requests by the specified waiting processes.
NOTE
When the Process qualifier is specified without any Options
qualifier values, all locks for the processes are displayed,
including owning, blocking, and waiting locks.
One or more processes can be specified; if more than one process
is specified, they must be enclosed within parentheses and
separated by commas. The process identifier is an 8-digit
hexadecimal number and must be local to the node on which the
RMU Show Locks command is issued. The process ID must include all
eight characters; the node identifier portion of the process ID
cannot be omitted. For more information, use the Options=All
qualifier to display all users of the processes' locks.
33.5.4.6 – Resource type
Resource_type=resource-type-list
Displays information for each lock of the specified resource
types; only those resource types are displayed. This permits,
for example, only PAGE or RECORD lock types to be selected.
One or more resource types can be specified; if more than one
type is specified, they must be enclosed within parentheses and
separated by commas.
The following keywords are allowed with the Resource_type
qualifier.
Table 17 RESOURCE_TYPE Keywords
Internal Lock Type Name    Keyword(s)
ACCESS ACCESS
ACTIVE ACTIVE
AIJDB AIJDB
AIJFB AIJFB
AIJHWM AIJHWM, AIJ_HIGH_WATER_MARK
AIJLOGMSG AIJ_LOG_MESSAGE
AIJLOGSHIP AIJ_LOG_SHIPPING
AIJOPEN AIJ_OPEN
AIJSWITCH AIJ_SWITCH
AIJ AIJ
AIPQHD AIP
ALS ALS_ACTIVATION
BCKAIJ AIJ_BACKUP, BCKAIJ
BCKAIJ_SPD AIJ_BACKUP_SUSPEND
BUGCHK BUGCHECK
CHAN CHAN, FILE_CHANNEL
CLIENT CLIENT
CLOSE CLOSE
CLTSEQ CLTSEQ
CPT CORRUPT_PAGE_TABLE, CPT
DASHBOARD DASHBOARD_NOTIFY
DBK_SCOPE DBKEY_SCOPE
DBR DBR_SERIALIZATION
DB DATABASE
FIB FAST_INCREMENTAL_BACKUP, FIB
FILID FILID
FRZ FREEZE
GBL_CKPT GLOBAL_CHECKPOINT
GBPT_SLOT GLOBAL_BPT_SLOT
KROOT KROOT
LAREA LAREA, LOGICAL_AREA
LOGFIL LOGFIL
MEMBIT MEMBIT
MONID MONID, MONITOR_ID
MONITOR MONITOR
NOWAIT NOWAIT
PLN DBKEY, RECORD, PLN
PNO PAGE, PNO
QUIET QUIET
RCACHE RCACHE
RCSREQUEST RCS_REQUEST
RCSWAITRQST RCS_WAIT_REQUEST
REL_AREAS RELEASE_AREAS
REL_GRIC_REQST RELEASE_GRIC_REQUEST
RMUCLIENT RMU_CLIENT
ROOT_AREA DUMMY_ROOT_AREA
RO_L1 L1_SNAP_TRUNCATION
RTUPB RTUPB
RUJBLK RUJBLK
RW_L2 L2_SNAP_TRUNCATION
SAC SNAP_AREA_CURSOR
SEQBLK SEQBLK
STAREA STORAGE_AREA, PAREA
STATRQST STATISTICS_REQUEST
TRM TERMINATION
TSNBLK TSNBLK
UTILITY UTILITY
The RESOURCE_TYPE qualifier is incompatible with the MODE, LIMIT,
LOCK and PROCESS qualifiers.
33.5.5 – Usage Notes
o To use the RMU Show Locks command for a database, you must
have the OpenVMS WORLD privilege.
o When you specify a list of processes or lock identifiers, make
sure the processes or locks are local to the node on which the
RMU Show Locks command is issued.
o To display the complete list of locks in the OpenVMS locks
database, do not specify the Mode=Blocking or Waiting
qualifier. The volume of information from this report can
be quite large.
o If you have entered an Oracle RMU command and there are no
locks on your node, you receive the following message:
%RMU-I-NOLOCKSOUT, No locks on this node with the specified
qualifiers.
o When you use the RMU Show Locks command to display locks,
the "requested" and "granted" modes of the given lock are
displayed. The definitions for the two fields follow:
- Requested
This is the mode for which the process has requested
the lock. Valid modes are NL, CR, CW, PR, PW, and EX.
This mode is not guaranteed to be granted; some locks
are intentionally held in conflicting modes forever (for
example, the "termination" lock).
- Granted
This is the mode that the process was last granted for
the lock. Valid modes are NL, CR, CW, PR, PW, and EX.
If the lock has never been granted, the lock mode is
displayed as NL.
Lock Mode Compatibility shows the compatibility of requested
and granted lock modes.
Table 18 Lock Mode Compatibility
Mode of Currently Granted Locks
Mode of
Requested
Lock NL CR CW PR PW EX
NL Yes Yes Yes Yes Yes Yes
CR Yes Yes Yes Yes Yes No
CW Yes Yes Yes No No No
PR Yes Yes No Yes No No
PW Yes Yes No No No No
EX Yes No No No No No
__________________________________________________________________
Key to Lock Modes
NL-Null Lock
CR-Concurrent Read
CW-Concurrent Write
PR-Protected Read
PW-Protected Write
EX-Exclusive Lock
Yes-Locks compatible
No-Locks not compatible
o If the "requested" and "granted" lock modes differ, then the
lock requested is currently blocked on either the "wait" or
"conversion" queue. If the modes are the same, then the lock
has been granted.
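The compatibility rules in the Lock Mode Compatibility table
(Table 18) can be encoded directly. The following Python sketch is
purely illustrative; it is not part of Oracle RMU, and the function
name is hypothetical:

```python
# Lock mode compatibility from Table 18, using the OpenVMS lock
# manager mode names: NL, CR, CW, PR, PW, EX.
# COMPATIBLE[requested][granted] is True when a lock requested in
# mode `requested` can be granted while another lock is already
# held in mode `granted` on the same resource.
COMPATIBLE = {
    "NL": {"NL": True,  "CR": True,  "CW": True,  "PR": True,  "PW": True,  "EX": True},
    "CR": {"NL": True,  "CR": True,  "CW": True,  "PR": True,  "PW": True,  "EX": False},
    "CW": {"NL": True,  "CR": True,  "CW": True,  "PR": False, "PW": False, "EX": False},
    "PR": {"NL": True,  "CR": True,  "CW": False, "PR": True,  "PW": False, "EX": False},
    "PW": {"NL": True,  "CR": True,  "CW": False, "PR": False, "PW": False, "EX": False},
    "EX": {"NL": True,  "CR": False, "CW": False, "PR": False, "PW": False, "EX": False},
}

def can_grant(requested, granted_modes):
    """Return True if a request in mode `requested` is compatible
    with every currently granted mode in `granted_modes`."""
    return all(COMPATIBLE[requested][g] for g in granted_modes)
```

For example, a CR request is compatible with an existing PW lock,
but an EX request is blocked by any mode other than NL.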
o The OpenVMS distributed lock manager does not always
update the requested lock mode. This means that potentially
conflicting information can be displayed by the RMU Show Locks
utility.
o The requested lock mode is updated only under the following
situations:
- The lock request is for a remote resource.
- The lock request is a Nowait request.
- The lock request could not be granted due to a lock
conflict (that is, it was canceled by the application or
aborted due to lock timeout or deadlock).
- The lock request is the first for the resource.
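The four situations above can be summarized as a simple predicate.
The following Python sketch is purely illustrative; the function and
parameter names are hypothetical, and this code is not part of
Oracle RMU or OpenVMS:

```python
# Illustrative sketch: when does the OpenVMS distributed lock
# manager refresh the "requested" lock mode?  Each parameter
# corresponds to one of the four situations listed above.
def requested_mode_updated(is_remote_resource,
                           is_nowait_request,
                           was_blocked_or_aborted,
                           is_first_request_for_resource):
    """Return True if the Requested field is refreshed for this
    lock request; otherwise the previously recorded mode remains."""
    return (is_remote_resource
            or is_nowait_request
            or was_blocked_or_aborted
            or is_first_request_for_resource)
```

In particular, a local, waitable, immediately granted request on an
existing resource satisfies none of the four situations, so the
Requested field keeps its previously recorded mode.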
o Consider the following RMU Show Locks output:
---------------------------------------------------------------------
Resource Name: page 533
Granted Lock Count: 1, Parent Lock ID: 01000B6C, Lock Access Mode:
Executive,
Resource Type:
Global, Lock Value Block: 03000000 00000000 00000000 00000002
-Master Node Info- --Lock Mode Information-- -Remote Node Info-
ProcessID Lock ID SystemID Requested Granted Queue Lock ID SystemID
2040021E 0400136A 00010002 EX CR GRANT 0400136A 00010002
------------------------------------------------------------------------
In this example, it is ordinarily difficult to explain how
such a combination of lock modes could occur. Note that the
CR (concurrent read) mode is on the Grant queue (not the
Conversion queue).
Only with knowledge of the operating environment would you know
that there was only one node in this system. In fact, two lock
requests occurred to generate this output, in the opposite order
of what the display suggests.
The first lock request was for EX (exclusive) mode, which was
immediately granted; the Requested and Granted modes were updated
because this was the first request for the resource (the fourth
situation in the preceding list). The lock was then demoted from
EX to CR mode, which was also immediately granted. However, none
of the four situations applied to the demotion, so the Requested
field was never updated to reflect the CR lock request.
33.5.6 – Examples
Example 1
The following command displays all the locks held by process
ID 44A047C9. The report shows the resource on which each lock
is held, ID information, and the lock status (Requested and
Granted).
$ RMU/SHOW LOCKS/PROCESS=44A047C9
33.6 – Logical Names
Displays logical names known by various components of Oracle Rdb.
33.6.1 – Description
The RMU Show Logical_Names command displays the definitions of
logical names known by various components of Oracle Rdb. You
can specify all logical names or just one. The output format is
similar to that of the DCL SHOW LOGICAL command.
33.6.2 – Format
RMU/Show Logical_Names [logical-name]

Command Qualifiers                      Defaults

/Output=file-name                       SYS$OUTPUT
/Undefined                              None
33.6.3 – Parameters
33.6.3.1 – logical-name
Use this option to display the definition of one logical name. If
you omit the logical name, the definitions of all logical names
known to Oracle Rdb are displayed.
33.6.4 – Command Qualifiers
33.6.4.1 – Output
Output=file-name
Specifies the name of the file where output is to be sent. The
default is SYS$OUTPUT. The default output file type is .lis, if
you specify only a file name.
33.6.4.2 – Undefined
Use the Undefined qualifier to display a list of both defined and
undefined logical names.
33.6.5 – Examples
Example 1
The following example displays defined logical names known to
Oracle Rdb.
$ rmu/sho log
"RDM$BIND_ABS_LOG_FILE" = "ABS_PID.OUT" (LNM$SYSTEM_TABLE)
"RDM$BIND_ALS_OUTPUT_FILE" = "ALS_PID.OUT" (LNM$SYSTEM_TABLE)
"RDM$BIND_DBR_LOG_FILE" = "DBR_PID.OUT" (LNM$SYSTEM_TABLE)
"RDM$BIND_HOT_OUTPUT_FILE" = "AIJSERVER_PID.OUT" (LNM$SYSTEM_TABLE)
"RDM$BIND_LCS_OUTPUT_FILE" = "LCS_PID.OUT" (LNM$SYSTEM_TABLE)
"RDM$BIND_LRS_OUTPUT_FILE" = "LRS_PID.OUT" (LNM$SYSTEM_TABLE)
"RDM$BIND_RCS_LOG_FILE" = "RCS_PID.OUT" (LNM$SYSTEM_TABLE)
"RDM$BIND_RCS_LOG_HEADER" = "0" (LNM$SYSTEM_TABLE)
"RDM$BUGCHECK_DIR" = "DISK$RANDOM:[BUGCHECKS.RDBHR]" (LNM$SYSTEM_TABLE)
"RDM$MONITOR" = "SYS$SYSROOT:[SYSEXE]" (LNM$SYSTEM_TABLE)
Example 2
This example displays both defined and undefined logical names.
$ rmu/sho log /undefined ! Display them all
"RDMS$AUTO_READY" = Undefined
"RDM$BIND_ABS_GLOBAL_STATISTICS" = Undefined
"RDM$BIND_ABS_LOG_FILE" = "ABS_PID.OUT" (LNM$SYSTEM_TABLE)
"RDM$BIND_ABS_OVERWRITE_ALLOWED" = Undefined
"RDM$BIND_ABS_OVERWRITE_IMMEDIATE" = Undefined
"RDM$BIND_ABS_QUIET_POINT" = Undefined
"RDM$BIND_ABS_PRIORITY" = Undefined
"RDM$BIND_ABW_ENABLED" = Undefined
"RDM$BIND_AIJ_ARB_COUNT" = Undefined
.
.
.
33.7 – Optimizer Statistics
Displays the current values of the optimizer statistics for
tables and indexes as stored in the RDB$INDICES, RDB$RELATIONS,
and RDB$WORKLOAD system tables.
33.7.1 – Format
RMU/Show Optimizer_Statistics root-file-spec

Command Qualifiers                      Defaults

/[No]Full                               /Nofull
/[No]Indexes[=(index-list)]             /Indexes
/[No]Log[=file-name]                    /Log
/Statistics[=(options)]                 /Statistics
/[No]System_Relations                   /Nosystem_Relations
/[No]Tables[=(table-list)]              /Tables
/[No]Threshold[=options]                /Nothreshold
33.7.2 – Parameters
33.7.2.1 – root-file-spec
root-file-spec
Specifies the database for which optimizer statistics are to be
displayed. The default file type is .rdb.
33.7.3 – Command Qualifiers
33.7.3.1 – Full
Full
Nofull
This qualifier can only be used if table, index, or index prefix
cardinality statistics are being displayed. If this qualifier is
specified, the following cardinality information is displayed:
o Actual cardinality
Displays the current table, index, or index prefix cardinality
value.
o Stored cardinality
Displays the table, index, or index prefix cardinality value
stored in the system relations.
o Difference between the stored and actual cardinality values
This value is negative if the stored cardinality is less than
the actual cardinality.
o Percentage cardinality difference from the actual value
This value is calculated by dividing the difference between
the stored and actual cardinality values by the actual
cardinality value. It is negative if the stored cardinality
is less than the actual cardinality.
The default value is Nofull.
33.7.3.2 – Indexes
Indexes[=(index-list)]
Noindex
Specifies the index or indexes for which statistics are to be
displayed. If you do not specify an index-list, statistics for
all indexes defined for the tables specified with the Tables
qualifier are displayed. If you specify an index-list, statistics
are displayed only for the named indexes. If you specify the
Noindex qualifier, statistics are not displayed for any indexes.
The default is the Indexes qualifier without an index-list.
33.7.3.3 – Log
Log
Nolog
Log=file-name
Specifies whether the display of statistics is to be logged.
Specify the Log qualifier to have the information displayed
to SYS$OUTPUT. Specify the Log=file-spec qualifier to have the
information written to a file. The Nolog qualifier is valid
syntax, but is ignored by Oracle RMU. The default is the Log
qualifier.
33.7.3.4 – Statistics
Statistics
Statistics[=(options)]
Specifies the type of statistics you want to display for the
items specified with the Tables, System_Relations, and Indexes
qualifiers. If you specify the Statistics qualifier without
an options list, all statistics are displayed for the items
specified.
If you specify the Statistics qualifier with an options list,
Oracle RMU displays the types of statistics described in the
following list. If you specify more than one option, separate the
options with commas and enclose the options within parentheses.
The Statistics qualifier options are:
o Cardinality
Displays the table cardinality for the tables specified with
the Tables and System_Relations qualifiers and the index and
index prefix cardinalities for the indexes specified with the
Indexes qualifier.
o Workload
Displays the Column Group, Duplicity Factor, and Null Factor
workload statistics for the tables specified with the Tables
and System_Relations qualifiers.
o Storage
Displays the following statistics:
- Table Row Clustering Factor for the tables specified with
the Tables qualifier
- Index Key Clustering Factor, the Index Data Clustering
Factor, and the Average Index Depth for the indexes
specified with the Indexes qualifier.
33.7.3.5 – System Relations
System_Relations
Nosystem_Relations
The System_Relations qualifier specifies that optimizer
statistics are to be displayed for system tables (relations)
and their associated indexes.
If you do not specify the System_Relations qualifier, or if you
specify the Nosystem_Relations qualifier, optimizer statistics
are not displayed for system tables or their associated indexes.
Specify the Noindex qualifier if you do not want statistics
displayed for indexes defined on the system tables.
The default is the Nosystem_Relations qualifier.
33.7.3.6 – Tables
Tables
Tables=(table-list)
Notables
Specifies the table or tables for which optimizer statistics
are to be displayed. If you specify a table-list, optimizer
statistics for those tables and their associated indexes are
displayed.
If you do not specify the Tables qualifier, or if you specify
the Tables qualifier but do not provide a table-list, optimizer
statistics for all tables and their associated indexes in the
database are displayed.
If you specify the Notables qualifier, optimizer statistics for
tables are not displayed.
Specify the Noindex qualifier if you do not want statistics
displayed for indexes defined on the specified tables.
The Tables qualifier is the default.
33.7.3.7 – Threshold
Threshold=options
Nothreshold
The Threshold qualifier can only be used in conjunction with
the Full qualifier. If this qualifier is used, an additional
Threshold column is added to the display. You can specify the
following options with the Threshold qualifier:
o Percent=n
The value for Percent=n can be an integer value from 0 to 99.
The default value for n is 0. If Percent=n is not specified
or if a percent value of 0 is specified, any percentage
difference from the actual cardinality value is flagged as
"*over*" in the output column. If a percent value of 1 to
99 is specified, any percentage difference from the actual
cardinality value that is greater than the percent value
specified is flagged as "*over*" in the output column. In the
report, the Threshold column displays those cardinality values
in which the percent difference exceeds the specified value.
If the threshold is not exceeded, the column is blank. If the
threshold is exceeded, the column shows the string "*over*".
o Log={All|Over_Threshold}
If Log is not specified or if Log=All is specified, all
cardinality values are displayed. If Log=Over_Threshold is
specified, only the cardinality values that exceed the threshold
percentage (those flagged as "*over*") are displayed.
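The Percent=n flagging described above can be sketched as follows.
This is an illustrative, hypothetical Python helper rather than
Oracle RMU code; it assumes the threshold test compares the
magnitude of the signed percentage difference, which matches the
report shown in Example 3:

```python
# Illustrative sketch: derive the Threshold column flag from the
# actual and stored cardinality values.
def threshold_flag(actual, stored, percent=0):
    """Return "*over*" when the percentage difference between the
    stored and actual cardinality exceeds `percent`, else ""."""
    if actual == 0:
        return ""                      # drift is not measurable
    diff = stored - actual             # negative if stored < actual
    pct = 100.0 * diff / actual        # signed percentage difference
    return "*over*" if abs(pct) > percent else ""
```

With a threshold of 5, a stored cardinality 20% below the actual
value is flagged, while a difference of about 4% is not.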
33.7.4 – Usage Notes
o To use the RMU Show Optimizer_Statistics command for a
database, you must have the RMU$ANALYZE or RMU$SHOW privilege
in the root file access control list (ACL) for the database or
the OpenVMS SYSPRV or BYPASS privilege.
o Cardinality statistics are automatically maintained by
Oracle Rdb. Physical storage and Workload statistics are only
collected when you issue an RMU Collect Optimizer_Statistics
command. To get information about the usage of Physical
storage and Workload statistics for a given query, define
the RDMS$DEBUG_FLAGS logical name to be "O". For example:
$ DEFINE RDMS$DEBUG_FLAGS "O"
When you execute a query, if workload and physical statistics
have been used in optimizing the query, you will see a line
such as the following in the command output:
~O: Workload and Physical statistics used
o Use the RMU Show Optimizer Statistics command with the
Statistics=Cardinality/Full/Threshold=n qualifier to identify
index prefix cardinality drift. This command identifies
indexes that need to be repaired. Use the RMU Collect
Optimizer_Statistics command to repair the stored index prefix
cardinality values.
33.7.5 – Examples
Example 1
The following command displays all optimizer statistics
previously collected for the EMPLOYEES table. See
Collect_Optimizer_Statistics for an example that demonstrates how
to collect optimizer statistics.
$ RMU/SHOW OPTIMIZER_STATISTICS MF_PERSONNEL.RDB /TABLE=(EMPLOYEES)
-------------------------------------------------------------------
Optimizer Statistics for table : EMPLOYEES
Cardinality : 100
Row clustering factor : 0.5100000
Workload Column group : EMPLOYEE_ID
Duplicity factor : 1.0000000
Null factor : 0.0000000
First created time : 3-JUL-1996 10:37:36.43
Last collected time : 3-JUL-1996 10:46:10.73
Workload Column group : LAST_NAME, FIRST_NAME, MIDDLE_INITIAL,
ADDRESS_DATA_1, ADDRESS_DATA_2, CITY, STATE, POSTAL_CODE, SEX,
BIRTHDAY, STATUS_CODE
Duplicity factor : 1.5625000
Null factor : 0.3600000
First created time : 3-JUL-1996 10:37:36.43
Last collected time : 3-JUL-1996 10:46:10.74
Index name : EMP_LAST_NAME
Index Cardinality : 83
Average Depth : 2.0000000
Key clustering factor : 0.0481928
Data clustering factor : 1.1686747
Segment Column Prefix cardinality
LAST_NAME 0
Index name : EMP_EMPLOYEE_ID
Index Cardinality : 0
Average Depth : 2.0000000
Key clustering factor : 0.0100000
Data clustering factor : 0.9500000
Segment Column Prefix cardinality
EMPLOYEE_ID 0
Index name : EMPLOYEES_HASH
Index Cardinality : 0
Key clustering factor : 1.0000000
Data clustering factor : 1.0000000
Example 2
The following command displays optimizer statistics for all the
tables defined in the database. Because the Noindex qualifier
is specified, no index statistics are displayed. Because the Log
qualifier is specified with a file specification, the values for
the optimizer statistics are written to the specified file.
$ RMU/SHOW OPTIMIZER_STATISTICS mf_personnel.rdb -
_$ /NOINDEX/LOG=NOINDEX-STAT.LOG
Example 3
The following example displays the output of a command when
the Full and Threshold qualifiers are used with the Cardinality
option. In the example, table XXX has three indexes. Index
XXX_IDX_FULL has index prefix cardinality collection fully
enabled, and the report shows no cardinality drift for this
index. Index XXX_IDX_APPROX has index prefix cardinality
collection enabled, and cardinality drift is evident. For the
first segment of the index (column C1), the stored cardinality is
20% lower than the actual cardinality. Because the command
specifies a threshold of 5%, the line is marked "*over*" in the
Thresh column. There is also cardinality drift for the second
segment of the index (column C2), index prefix (C1, C2). The
third index, XXX_IDX_NONE, has index prefix cardinality
collection disabled; this is indicated in the report rather than
showing the index segments.
If the report were lengthy, you could write it to a disk file
and then locate the problem indexes by searching for the string
"*over*".
$ RMU/SHOW OPTIMIZER/STAT=CARD/FULL/THRESH=(percent=5,log=all) sample.rdb
Optimizer Statistics for table : XXX
(Cardinality: Diff=Stored-Actual, Percent=Diff/Actual, Thresh=Percent exceeded)
Table cardinality
Actual Stored Diff Percent Thresh
109586 109586 0 0 %
Index name : XXX_IDX_FULL
(Cardinality: Diff=Stored-Actual, Percent=Diff/Actual, Thresh=Percent exceeded)
Index cardinality
Actual Stored Diff Percent Thresh
109586 109586 0 0 %
Prefix cardinality
Actual Stored Diff Percent Thresh
Segment Column : C1
1425 1425 0 0 %
Segment Column : C2
31797 31797 0 0 %
Segment Column : C3
0 0 0 0 %
Index name : XXX_IDX_APPROX
(Cardinality: Diff=Stored-Actual, Percent=Diff/Actual, Thresh=Percent exceeded)
Index cardinality
Actual Stored Diff Percent Thresh
109586 109586 0 0 %
Prefix cardinality
Actual Stored Diff Percent Thresh
Segment Column : C1
1425 1140 -285 -20 % *over*
Segment Column : C2
31797 30526 -1271 -4 %
Segment Column : C3
0 0 0 0 %
Index name : XXX_IDX_NONE
(Cardinality: Diff=Stored-Actual, Percent=Diff/Actual, Thresh=Percent exceeded)
Index cardinality
Actual Stored Diff Percent Thresh
109586 109586 0 0 %
***Prefix cardinality collection is disabled***
33.8 – Privilege
Allows you to display the root file access control list (ACL) for
a database.
33.8.1 – Format
RMU/Show Privilege root-file-spec

Command Qualifiers                      Defaults

/[No]Expand_All                         /Noexpand_All
/[No]Header                             /Header
33.8.2 – Parameters
33.8.2.1 – root-file-spec
The root file specification for the database whose root file
ACL you are displaying. By default, a file extension of .rdb is
assumed.
33.8.3 – Command Qualifiers
33.8.3.1 – Expand All
Expand_All
Noexpand_All
Specifies that if a user's access mask was defined with the
RMU$ALL keyword on the RMU Set Privilege command, each of the
RMU privileges represented by the RMU$ALL keyword is displayed.
The Noexpand_All qualifier specifies that if a user's access mask
was defined with the RMU$ALL keyword on the RMU Set Privilege
command, only the keyword is displayed; the RMU privileges
represented by the keyword are not displayed.
The Noexpand_All qualifier is the default.
33.8.3.2 – Header
Header
Noheader
Specifies that header information is to be displayed. The
Noheader qualifier suppresses output of header information.
The Header qualifier is the default.
33.8.4 – Usage Notes
o To use the RMU Show Privilege command for a database, you must
have the RMU$SECURITY privilege in the root file ACL for the
database or the OpenVMS SECURITY or BYPASS privilege.
o Although you can use the DCL SHOW ACL command to display the
root file ACL for a database, the DCL SHOW ACL command does
not display the names of the Oracle RMU privileges granted to
users.
33.8.5 – Examples
Example 1
In the following example, the RMU Show Privilege command displays
the root file ACL for the mf_personnel database:
$ RMU/SHOW PRIVILEGE MF_PERSONNEL.RDB
Object type: file, Object name: SQL_USER:[USER1]MF_PERSONNEL.RDB;1,
on 12-FEB-1996 10:48:23.04
(IDENTIFIER=[SQL,USER1],ACCESS=READ+WRITE+CONTROL+RMU$ALTER+
RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+
RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SECURITY+RMU$SHOW+RMU$UNLOAD+
RMU$VERIFY)
(IDENTIFIER=[SQL,USER2],ACCESS=READ+WRITE+RMU$ALTER+RMU$ANALYZE+
RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+RMU$MOVE+RMU$OPEN+
RMU$RESTORE+RMU$SHOW+RMU$UNLOAD+RMU$VERIFY)
(IDENTIFIER=[SQL,USER3],ACCESS=READ+WRITE+CONTROL+RMU$SECURITY)
Example 2
The following examples demonstrate the difference in output when
you use the Header and Noheader qualifiers:
$ RMU/SHOW PRIV MF_PERSONNEL.RDB/HEADER
Object type: file, Object name: RDBVMS_USER:[DB]MF_PERSONNEL.RDB;1,
on 17-SEP-1998 13:47:20.21
(IDENTIFIER=[RDB,STONE],ACCESS=RMU$ALL)
$ RMU/SHOW PRIVILEGE MF_PERSONNEL.RDB/NOHEADER
(IDENTIFIER=[RDB,STONE],ACCESS=RMU$ALL)
Example 3
The following examples demonstrate the difference in output when
you use the Expand and Noexpand qualifiers:
$ RMU/SET PRIVILEGE MF_PERSONNEL.RDB /ACL=(I=STONE,A=RMU$ALL)
$ RMU/SHOW PRIVILEGE MF_PERSONNEL.RDB /NOEXPAND/NOHEADER
(IDENTIFIER=[RDB,STONE],ACCESS=READ+WRITE+CONTROL+RMU$ALL)
$ RMU/SHOW PRIVILEGE MF_PERSONNEL.RDB /EXPAND/NOHEADER
(IDENTIFIER=[RDB,STONE],ACCESS=READ+WRITE+CONTROL+RMU$ALTER+
RMU$ANALYZE+RMU$BACKUP+RMU$CONVERT+RMU$COPY+RMU$DUMP+RMU$LOAD+
RMU$MOVE+RMU$OPEN+RMU$RESTORE+RMU$SECURITY+RMU$SHOW+RMU$UNLOAD+
RMU$VERIFY)
33.9 – Statistics
Opens the Performance Monitor to display, on a character-cell
terminal, the usage statistics for a database. See the Oracle
Rdb7 Guide to Database Performance and Tuning for tutorial
information on how to interpret the Performance Monitor displays.
33.9.1 – Description
The Performance Monitor dynamically samples activity statistics
on a database. You can display the statistics at your terminal
and can also write them to a formatted binary file.
The statistics show activity only from the node on which you
execute the command.
The Performance Monitor operates in one of three modes: online,
record, and replay. In online mode, you can display or record
current activity on a database. In record mode, you can record
statistics in a binary file. In replay mode, you can examine a
previously recorded binary statistics file.
If you use the Input qualifier, the Performance Monitor executes
in replay mode. In replay mode, this command generates an
interactive display from a previously recorded binary statistics
file.
If you do not use the Input qualifier, you must specify a
database file name. The Performance Monitor then executes in
online mode. In online mode, the command generates an interactive
display when you use the Interactive qualifier and can also
record statistics in a binary file.
The interactive display is made up of numerous output pages.
You control the interactive display by means of menus, arrow
keys, and the Return key to select options. You select an item
by pressing the arrow keys until the desired item is highlighted,
then press the Return key.
Display the Select Display menu (by typing D) from the
Performance Monitor screen to view the available output pages.
Items in the Display menu followed by the characters "[->"
indicate that a submenu is displayed when you select that item.
Once you have selected a display, there are a number of methods
you can use to navigate through the screens:
o To move to the next screen of information, do one of the
following:
- Press the right arrow (->) keyboard key.
- Press the Next Screen keyboard key.
o To move to the previous screen of information, do one of the
following:
- Press the left arrow (<-) keyboard key.
- Press the Prev Screen keyboard key.
o To move forward n number of screens, press the plus (+)
keyboard key and enter the value n.
o To move backward n number of screens, press the minus (-)
keyboard key and enter the value n.
o To move directly from the first screen to the last screen, do
one of the following:
- Press the up arrow (^) keyboard key.
- Press the plus (+) keyboard key and enter the value 0.
o To move directly from the last screen to the first screen, do
one of the following:
- Press the down arrow (v) keyboard key.
- Press the hyphen (-) keyboard key and enter the value 0.
o To quickly locate a screen in the current submenu group that
contains activity, press the space bar on your keyboard.
This feature works even when you are replaying a binary input
file. If there is no screen in the current subgroup that has
activity, the next screen is displayed (as though you had
used the Next Screen key). The Performance Monitor ignores
computational screens, such as Stall Messages, Monitor Log,
and so on, when searching for activity.
In interactive mode, enter an exclamation point to open the
Select Tool menu. This menu allows you to switch the database
for which you are displaying statistics, edit a file, invoke a
system command, and so on. (The ability to open a new database
is not available if you specify the Input or Output qualifier.)
In addition, it provides you the ability to locate a specific
statistics screen either by name (or portion thereof) or by a
summary-selection menu. Select the Goto screen or Goto screen
"by-name" options from the Select Tool menu to use these options.
In interactive mode, you can pause output scrolling on your
screen by pressing the P key. Resume output scrolling by pressing
the P key again.
An extensive online help facility for the character-cell
interface is available by doing the following from the
Performance Monitor screen:
1. Type H or press the PF2 key.
2. Select the type of help you want (keyboard, screen, or field).
3. Press the Return key.
If you select field level help, you must also do the following:
1. Highlight the field for which you want help information.
2. Press the Return key.
All screens, regardless of format or display contents, have a
standard layout as follows:
o First line
Contains the node name, the utility name and version number,
and the current system date and time. The current system date
and time are updated at the specified set-rate interval.
o Second line
Contains the screen refresh rate, in seconds; the current
screen name; and the elapsed time since the last set-rate
command, which indicates how long the screen information has
been collected.
o Third line
Contains the current page number within the screen (screen X
of Y), the name of the current database, and the statistics
utility operation mode (online, record, or replay). Online
mode is the normal database activity displayed in real
time. Record mode indicates that the database activity being
displayed is being recorded to an external file specified by
the Output qualifier. Replay mode indicates that the database
activity is being displayed from the external file specified
by the Input qualifier.
You can display most statistics in either a histogram or a
columnar chart, although several display pages have special
formats. By default, the initial interactive display appears
in histogram mode; by using the Nohistogram qualifier, you can
direct Oracle RMU to display statistics in tabular numeric mode.
In addition, you can produce time-plot graphics for individual
statistical fields.
Use the Output qualifier to direct statistical output to a file.
The output is a formatted binary file and does not produce a
legible printed listing. To read the output, you must use the RMU
Show Statistics command with the Input qualifier.
The Nointeractive qualifier suppresses the interactive display.
Use this qualifier when you want to generate binary statistics
output but do not want an online display.
Database statistics are maintained in a global section on each
system on which Oracle Rdb is running. Statistics are reset to
zero when you close a database. Running the Performance Monitor
keeps the database open even when there are no users accessing
the database.
The Stall Messages display permits you to display multiple
screens of information. Access the Stall Messages display by
selecting Per-Process Information from the Select Display Menu;
then select the Stall Messages display from the secondary menu.
If you are displaying the last screen of Stall Messages
information and the number of stalled processes is reduced such
that the last screen is empty, you are automatically moved to the
new last screen of information when you press the Next Screen
keyboard key (or the right arrow keyboard key).
You can also use the Alarm, Notify, and Screen qualifiers to
simplify monitoring stalled processes. See the description of
each of these qualifiers for more information.
33.9.2 – Format
RMU/Show Statistics [root-file-spec]

Command Qualifiers                              Defaults

/Access_Log                                     None
/Alarm=interval                                 /Alarm=0
/[No]Broadcast                                  See description
/[No]Cluster[=(node-list)]                      /Nocluster
/Configure=file-spec                            None
/[No]Cycle=seconds                              /Nocycle
/Dbkey_Log=file-spec                            See description
/Deadlock_Log=file-spec                         None
/[No]Histogram                                  /Histogram
/Hot_Standby_Log                                None
/Input=file-name                                See description
/[No]Interactive                                See description
/Lock_Timeout_Log=file-spec                     None
/[No]Log                                        See description
/[No]Logical_Area                               /Logical_Area
/[No]Notify[=([No]All | operator-classes)]      /Nonotify
/[No]Opcom_Log=filename                         /Noopcom_Log
/Options=keywords                               /Options=Base
/Output=file-spec                               See description
/[No]Prompt_Timeout=seconds                     /Prompt_Timeout=60
/Reopen_Interval=minutes                        None
/Reset                                          Statistics are not reset
/Screen=screen-name                             See description
/Stall_Log=file-spec                            Stall messages not logged
/Time=integer                                   /Time=3
/Until=date-time                                See description
33.9.3 – Parameters
33.9.3.1 – root-file-spec
The root file specification of the database on which you
want statistics. If you use the Input qualifier to supply a
prerecorded binary statistics file, you cannot specify a database
file name. If you do not use the Input qualifier, you must
specify a database file name.
33.9.4 – Command Qualifiers
33.9.4.1 – Access Log
Identifies the name of the log file where logical area accesses
are to be recorded.
33.9.4.2 – Alarm
Alarm=interval
Establishes an alarm interval (in seconds) for the Stall Messages
screen from the command line. This is useful when you plan to
submit the RMU Show Statistics command as a batch job.
Use this qualifier in conjunction with the Notify qualifier to
notify an operator or set of operators of stalled processes.
The default value is 0 seconds, which is equivalent to disabling
notification.
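For example, the following command raises an alarm whenever a process
stalls for more than 10 seconds and notifies the CENTRAL operator
class (the interval and operator class shown here are illustrative):
$ RMU/SHOW STATISTICS/ALARM=10/NOTIFY=CENTRAL MF_PERSONNEL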
33.9.4.3 – Broadcast
Broadcast
Nobroadcast
Specifies whether or not to broadcast messages. The Broadcast
qualifier is the default, if broadcasting of certain messages
has been enabled with DCL SET BROADCAST. If broadcasting has
been disabled with the DCL SET BROADCAST=none command, broadcast
messages are not displayed, even if you specify the RMU Show
Statistics command with the Broadcast qualifier.
Specify the Nobroadcast qualifier if broadcasting has been
enabled with the DCL SET BROADCAST command but you do not
want broadcast messages displayed while you are running the
Performance Monitor.
33.9.4.4 – Cluster
Cluster=(node-list)
Nocluster
Specifies the list of remote nodes from which statistics
collection and presentation are to be performed. The collected
statistics are merged with the information for the current node
and displayed using the usual statistics screens.
The following list summarizes usage of the Cluster qualifier:
o If the Cluster qualifier is specified by itself, remote
statistics collection is performed on all cluster nodes on
which the database is currently open.
o If the Cluster=(node-list) qualifier is specified, remote
statistics collection is performed on the specified nodes
only, even if the database is not yet open on those nodes.
o If the Cluster qualifier is not specified, or the Nocluster
qualifier (the default) is specified, cluster statistics
collection is not performed. However, you can still enable
clusterwide statistics collection online using the Tools menu.
You can specify up to 95 different cluster nodes with the Cluster
qualifier. There is a maximum number of 95 cluster nodes because
Oracle Rdb supports only 96 nodes per database. The current node
is always included in the list of nodes from which statistics
collection is to be performed.
It is not necessary to have the RMU Show Statistics command
running on the specified remote nodes or to have the database
open on the remote nodes. These events are automatically handled
by the feature.
The following example shows the use of the Cluster qualifier to
initiate statistics collection and presentation from two remote
nodes:
$ RMU /SHOW STATISTICS /CLUSTER=(BONZAI, ALPHA4) MF_PERSONNEL
Remote nodes can also be added and removed online at run time.
Use the Cluster Statistics option located in the Tools menu.
The Tools menu is displayed by using the exclamation point (!)
on-screen menu option.
See the RMU Show Statistic DBA Handbook (available in MetaLink
if you have a service contract) for information about the Cluster
Statistics Collection and Presentation feature.
33.9.4.5 – Configure
Configure=file-spec
Specifies the name of a human-readable configuration file to be
processed by the RMU Show Statistics command. The configuration
file can be created using any editor, or it can be automatically
generated from the RMU Show Statistics command using the current
run-time configuration settings. The default configuration file
type is .cfg.
If you specify the Configure=file-spec qualifier, the
configuration file is processed by the RMU Show Statistics
command prior to opening the database or the binary input file.
If you do not specify this qualifier, all of the variables are
the defaults based on command-line qualifiers and logical names.
The configuration file is processed in two passes. The first
pass occurs before the database is opened and processes most
of the configuration file entries. The second pass occurs after
the database is opened and processes those variables that are
database-dependent, such as the CUSTOMER_LINE_n variable.
See the RMU Show Statistic DBA Handbook (available in MetaLink
if you have a service contract) for more information about
configuration files.
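For example, assuming a configuration file named SHOWSTAT.CFG (the
file name is illustrative), the following command processes the file
before opening the database:
$ RMU/SHOW STATISTICS/CONFIGURE=SHOWSTAT.CFG MF_PERSONNEL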
33.9.4.6 – Cycle
Cycle=seconds
Nocycle
Directs the Performance Monitor to continually cycle through the
set of screens associated with the currently selected menu item.
Each menu is displayed for the number of seconds specified.
When you specify the Cycle qualifier, you can change screen
modes or change submenus as desired; cycling through the menus
associated with your choice continues at whichever menu level is
currently selected.
The specified value for the Cycle qualifier must be greater
than or equal to the value specified for the Time qualifier.
In addition, if you manually change the refresh rate (using the
Set_rate onscreen menu option) to a value that is greater than
the value you specify with the Cycle qualifier, the cycling is
performed at the interval you specify for the Set_rate.
If you do not specify the Cycle qualifier, or if you do not
specify the number of seconds, no screen cycling is performed.
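For example, the following command (the interval values are
illustrative) refreshes the display every 5 seconds and cycles to the
next screen of the current menu every 30 seconds:
$ RMU/SHOW STATISTICS/TIME=5/CYCLE=30 MF_PERSONNEL
Note that the Cycle value (30) is greater than or equal to the Time
value (5), as required.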
33.9.4.7 – Dbkey Log
Dbkey_Log=file-spec
Logs the records accessed during a given processing period by the
various attached processes. The file-spec is the name of the file
to which all accessed dbkeys are logged.
The header region of the dbkey log contains four lines. The first
line indicates that the RMU Show Statistic utility created the
log file. The second line identifies the database. The third
line identifies the date and time the dbkey log was created. The
fourth line is the column heading line.
The main body of the dbkey log contains six columns. The first
column contains the dbkey process ID and stream ID. The second
through sixth columns contain the most recently accessed dbkey
for a data page, snapshot page, SPAM page, AIP page, and ABM
page, respectively.
Only one message per newly accessed dbkey is recorded. However,
all dbkey values are displayed, even if some of the dbkeys did
not change.
The dbkey information is written at the current screen refresh
rate, determined by the Time qualifier or the Set_rate onscreen
menu option. Using a larger refresh rate minimizes the size of
the file but results in a large number of missed dbkey messages.
Using a smaller refresh rate produces a larger log file that
contains a much finer granularity of dbkey messages.
Note that you do not need to display the Dbkey Information screen
in order to record the dbkey messages to the dbkey log. The
dbkey log is maintained regardless of which screen, if any, is
displayed.
You can use the Dbkey_Log qualifier to construct a dbkey logging
server, as follows:
$ RMU/SHOW STATISTICS/NOHISTOGRAM/TIME=1 -
_$ /NOINTERACTIVE/DBKEY_LOG=DBKEY.LOG MF_PERSONNEL -
_$ /NOBROADCAST/UNTIL="15:15:00"
33.9.4.8 – Deadlock Log
Deadlock_Log=file-spec
Records the most recent lock deadlock for each process. There is
no method to record each lock deadlock as it occurs.
The file-spec in the qualifier is the name of the file to which
you want all lock deadlock messages to be logged. The lock
deadlock messages are written in human-readable format similar
to the Lock Timeout History and Lock Deadlock History screens.
The header region of the lock deadlock log contains three lines:
o Line 1 indicates that the RMU Show Statistics utility created
the log file.
o Line 2 identifies the database.
o Line 3 identifies the date and time the log was created.
The main body of the lock deadlock log contains three columns:
o The first column contains the process ID and stream ID that
experienced the lock deadlock.
o The second column contains the time the deadlock occurred;
however, the date is not displayed.
o The third column contains the deadlock message describing the
affected resource. This message is similar to the originating
stall message.
For example:
2EA00B52:34 14:25:46.14 - waiting for page 5:751 (PR)
If any lock deadlocks are missed for a particular process
(usually because the recording interval is too large), the
number of missed lock deadlocks is displayed in brackets after
the message. For example:
2EA00B52:34 14:25:46.14 - waiting for page 5:751 (PR) [1 missed]
Only one message is logged for each deadlock.
The lock deadlock messages are written at the specified screen
refresh rate, determined by specifying the Time qualifier, or
online using the Set_rate on-screen menu option. Using a larger
refresh rate minimizes the size of the file, but results in
a large number of missed deadlock messages. Using a smaller
refresh rate produces a larger log file that contains a much
finer granularity of deadlock messages.
Using the Time=1 or Time=50 qualifier produces a reasonable log
while minimizing the impact on the system.
The affected LockID is not displayed, because this is meaningless
information after the lock deadlock has completed.
Use the Tools menu (displayed when you press the exclamation
point (!) key from any screen) to enable or disable the lock
timeout and lock deadlock logging facility while the RMU Show
Statistics utility is running. However, note that the lock
timeout log and lock deadlock log are not available during binary
file replay.
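For example, the following command runs noninteractively and records
lock deadlock messages once per second until the specified time (the
log file name and end time are illustrative):
$ RMU/SHOW STATISTICS/NOINTERACTIVE/TIME=1 -
_$ /DEADLOCK_LOG=DEADLOCK.LOG/UNTIL="17:00:00" MF_PERSONNEL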
33.9.4.9 – Histogram
Nohistogram
Directs Oracle RMU to display the initial statistics screen in
the numbers display mode or the graph display mode. The Histogram
qualifier specifies the graph display mode. The Nohistogram
qualifier specifies the numbers display mode.
The Histogram qualifier is the default.
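For example, the following command starts the initial statistics
screen in the numbers display mode instead of the default graph
display mode:
$ RMU/SHOW STATISTICS/NOHISTOGRAM MF_PERSONNEL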
33.9.4.10 – Hot Standby Log
Specifies the name of the Hot Standby log file. The "Start hot
standby logging" option of the Tools menu (enter !) can be used
to specify the name of the Hot Standby log file at runtime.
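For example, assuming the qualifier accepts a file specification as
described above (the file name shown is illustrative):
$ RMU/SHOW STATISTICS/HOT_STANDBY_LOG=HOT_STANDBY.LOG MF_PERSONNEL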
33.9.4.11 – Input
Input=file-name
Specifies the prerecorded binary file from which you can read the
statistics. This file must have been created by an earlier RMU
Show Statistics session that specified the Output qualifier.
You cannot specify a database file name with the Input qualifier.
Also, you must not use the Until, Output, or Nointeractive
qualifiers with the Input qualifier. However, you can use the
Time qualifier to change the rate of the display. This will not
change the computed times as recorded in the original session.
For example, you can record a session at Time=60. This session
will gather statistics once per minute.
You can replay statistics gathered in a file by using the Input
and Time qualifiers. To replay a file:
o Use the Output qualifier to create a file of database
statistics.
o Use the Input and Time qualifiers to view the statistics
again at a rate that you determine. For example, the command
RMU/SHOW STATISTICS/INPUT=PERS.LOG/TIME=1 will replay the
PERS.LOG file and change the display once per second, thus
replaying 10 hours of statistics in 10 minutes.
If you do not specify the Input qualifier, you must specify the
root-file-spec parameter.
33.9.4.12 – Interactive
Nointeractive
Displays the statistics dynamically to your terminal. The
Interactive qualifier is the default when you execute the
RMU Show Statistics command from a terminal. You can use the
Nointeractive qualifier with the Output qualifier to generate a
binary statistics file without generating a terminal display. The
Nointeractive qualifier is the default when you execute the RMU
Show Statistics command from a batch job.
In an interactive session, you can use either the menu interface
or the predefined control characters to select display options
(see the Performance Monitor online help for further information
about the predefined control characters).
Select menu options by using the up and down arrow keys
followed by pressing the Return or Enter key. Cancel the menu by
pressing Ctrl/Z.
33.9.4.13 – Lock Timeout Log
Lock_Timeout_Log=file-spec
Records the most recent lock timeout message for each process.
There is no method to record each lock timeout as it occurs. The lock
timeout messages are written in human-readable format.
The header region of the lock timeout log contains three lines:
o Line 1 indicates that the RMU Show Statistics utility created
the log file.
o Line 2 identifies the database.
o Line 3 identifies the date and time the log was created.
The main body of the lock timeout log contains three columns:
o The first column contains the process ID and stream ID that
experienced the lock timeout.
o The second column contains the time the timeout occurred;
however, the date is not displayed.
o The third column contains the timeout message describing the
affected resource. This message is similar to the originating
stall message.
For example:
2EA00B52:34 14:25:46.14 - waiting for page 5:751 (PR)
If any lock timeouts are missed for a particular process (usually
because the recording interval is too large), the number of
missed lock timeouts is displayed in brackets after the message.
For example:
2EA00B52:34 14:25:46.14 - waiting for page 5:751 (PR) [1 missed]
Only one message is logged for each lock timeout.
The lock timeout messages are written at the specified screen
refresh rate, determined by specifying the Time qualifier, or
online using the Set_rate on-screen menu option. Using a larger
refresh rate minimizes the size of the file, but results in a
large number of missed lock timeout messages. Using a smaller
refresh rate produces a larger log file that contains a much
finer granularity of lock timeout messages.
Using the Time=1 or Time=50 qualifier appears to produce a
reasonable log while minimizing the impact on the system.
The affected LockID is not displayed because this is meaningless
information after the lock timeout has completed.
Note that you do not need to be displaying the Lock Timeout
History or Lock Deadlock History screens to record the stall
messages to the stall log. These logs are maintained regardless
of which screen, if any, is displayed.
Use the Tools menu (displayed when you press the exclamation
point (!) key from any screen) to enable or disable the lock
timeout and lock deadlock logging facility while the RMU Show
Statistics utility is running. However, note that the lock
timeout log and lock deadlock log are not available during binary
file replay.
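For example, the following command constructs a lock timeout logging
server, similar to the dbkey logging server shown earlier (the log
file name and end time are illustrative):
$ RMU/SHOW STATISTICS/NOINTERACTIVE/TIME=1 -
_$ /LOCK_TIMEOUT_LOG=TIMEOUT.LOG/UNTIL="17:00:00" MF_PERSONNEL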
33.9.4.14 – Log
Nolog
Logs the creation of a binary statistics file to your output
file. This binary statistics file is created only if you have
used the Output qualifier. If you use the Nolog qualifier, no
operations will be logged to your output file.
The default is the current setting of the DCL verify switch. See
HELP SET VERIFY in DCL HELP for more information on changing the
DCL verify switch.
If you use the Interactive qualifier, the Log qualifier is
ignored.
33.9.4.15 – Logical Area
Logical_Area
Nological_Area
Specifies that you want the RMU Show Statistics command to
acquire the needed amounts of virtual memory to display logical
area statistics information. The Logical_Area qualifier is the
default.
By default, the RMU Show Statistics command consumes
approximately 13,000 bytes of virtual memory per logical area.
(The number of logical areas is determined by the largest logical
area identifier - not by the actual number of areas.) This can
result in the RMU Show Statistics command consuming large amounts
of virtual memory, even if you do not want to review logical area
statistics information.
Use the NoLogical_Area qualifier to indicate that you do not want
to display logical area statistics information. When you specify
the NoLogical_Area qualifier, the virtual memory for logical area
statistics information presentation is not acquired.
When you specify the NoLogical_Area qualifier, do not also
specify the Nolog qualifier, as this causes logical area
statistics information to still be collected.
The "Logical Area" statistics are not written to the binary
output file. Conversely, the "Logical Area" statistics screens
are not available during binary input file replay.
There is no corresponding configuration variable. This qualifier
cannot be modified at run time. See the RMU Show Statistic DBA
Handbook (available in MetaLink if you have a service contract)
for more information about interpreting logical area screens.
33.9.4.16 – Notify
Notify
Notify=All
Notify=Noall
Notify=operator-classes
Nonotify
Notifies the specified system operator or operators when a
stall process exceeds the specified alarm interval by issuing
a broadcast message and ringing a bell at the terminal receiving
the message.
The valid operator classes are: CENTRAL, CLUSTER, DISKS, OPCOM,
SECURITY, and OPER1 through OPER12.
The various forms of the Notify qualifier have the following
effects:
o If you specify the Notify qualifier without the operator-
classes parameter, the CENTRAL and CLUSTER operators are
notified by default.
o If you specify the Nonotify or Notify=Noall qualifiers,
operator notification is disabled.
o If you specify the Notify=All qualifier, all operator classes
are enabled.
o If you specify the Notify=operator-classes qualifier, the
specified classes are enabled. (If you specify more than one
operator class, enclose the list in parentheses and separate
each class name with a comma.)
For example, issuing the RMU Show Statistics command with the
Notify=(OPER1, OPER2) qualifier sends a notification message
to system operator classes OPER1 and OPER2 if the Alarm
threshold is exceeded while monitoring the Stall Messages
screen.
o When the Notify=OPCOM qualifier is specified with the RMU
Show Statistics command along with the Alarm and Cluster
qualifiers, Oracle RMU generates an OPCOM message and delivers
it to the OPCOM class associated with the Notify qualifier.
This message alerts the operator to the fact that the process
has stalled for more than n seconds, where n is the value
assigned to the Alarm qualifier. The process that has stalled
may be on any node that is included in the node name list
assigned to the Cluster qualifier.
The specified system operator(s) are notified only when the alarm
threshold is first exceeded. For instance, if three processes
exceed the alarm threshold, the specified operator(s) are
notified only once. If another process subsequently exceeds the
alarm threshold while the other processes are still displayed,
the specified system operator(s) are not notified.
However, if the longest-duration stall is resolved and a new
process then becomes the newest stall to exceed the alarm
threshold, then the specified system operator(s) will be notified
of the new process.
To receive operator notification messages, the following three
OpenVMS DCL commands must be issued:
1. $ SET TERM /BROADCAST
2. $ SET BROADCAST=OPCOM
3. $ REPLY /ENABLE=(operator-classes)
The operator-classes specified in the REPLY /ENABLE command must
match those specified in the Notify qualifier to the RMU Show
Statistics command.
The operator notification message will appear similar to the
following sample message:
%%%%%%%%%%% OPCOM 19-DEC-1994 08:56:39.27 %%%%%%%%%%%
(from node MYNODE at 19-DEC-1994 08:56:39.30)
Message from user SMITH on MYNODE
Rdb Database USER2:[SMITH.WORK.AIJ]MF_PERSONNEL.RDB;1 Event Notification
Process 2082005F:1 exceeded 5 second stall: waiting for record 51:60:2 (EX)
The system operator notification message contains four lines.
Line 1 contains the OPCOM broadcast header message. Line 2
identifies the process running the RMU Show Statistics command
that sent the message. Line 3 identifies the database being
monitored. Line 4 identifies the process that triggered the
alarm, including the alarm interval and the stall message.
To establish an alarm interval for the Stall Messages screen, use
the Alarm=Interval qualifier.
If you specify the Nointeractive qualifier, bell notification is
disabled, but the broadcast message remains enabled.
33.9.4.17 – Opcom Log
Opcom_Log=filename
Noopcom_Log
Specifies the name of the file where OPCOM messages broadcast by
attached database processes will be sent.
When recording OPCOM messages, it is possible to occasionally
miss a few messages for a specific process. When this occurs, the
message "n missed" will be displayed in the log file.
You can record specific operator classes of OPCOM messages if
you specify the Options=Verbose qualifier. The Options=Verbose
qualifier records only those messages that can be received by the
process executing the RMU Show Statistics utility. For example,
if the process is enabled to receive operator class Central,
specifying Opcom_Log=opcom.log with the Options=Verbose qualifier
records all Central operator messages. Conversely, specifying
only the Opcom_Log=opcom.log qualifier records all database-
specific OPCOM messages generated from this node. Because the
output is captured directly from OpenVMS, the operator-specific
log file output format is different from the database-specific
contents. The following example shows the operator-specific log
file contents for the Cluster and Central operator classes:
Oracle Rdb X7.1-00 Performance Monitor OPCOM Log
Database KODA_TEST:[R_ANDERSON.TCS_MASTER]TCS.RDB;2
OPCOM Log created 11-JUN-1999 10:52:07.53
11-JUN-1999 10:52:23.85) Message from user RDBVMS on ALPHA4 Oracle Rdb X7.1-00
Event Notification for Database _$111$DUA368:[BBENTON.TEST]MF_PERSONNEL.RDB;1
AIJ Log Server terminated
11-JUN-1999 10:52:25.49) Message from user RDBVMS on ALPHA4 Oracle Rdb X7.1-00
Event Notification for Database _$111$DUA368:[BBENTON.TEST]MF_PERSONNEL.RDB;1
AIJ Log Roll-Forward Server started
11-JUN-1999 10:52:26.06) Message from user RDBVMS on ALPHA4 Oracle Rdb X7.1-00
Event Notification for Database _$111$DUA368:[BBENTON.TEST]MF_PERSONNEL.RDB;1
AIJ Log Roll-Forward Server failed
.
.
.
11-JUN-1999 10:54:21.09) Message from user RDBVMS on ALPHA4 Oracle Rdb X7.1-00
Event Notification for Database _$111$DUA368:[BBENTON.TEST.JUNK]T_
PERSONNEL.RDB;1 AIJ Log Server started
11-JUN-1999 10:54:21.13) Message from user RDBVMS on ALPHA4 Oracle Rdb X7.1-00
Event Notification for Database _$111$DUA368:[BBENTON.TEST.JUNK]T_
PERSONNEL.RDB;1 Opening "$111$DUA368:[BBENTON.TEST.JUNK]TEST1.AIJ;2"
33.9.4.18 – Options
The following keywords may be used with the Options qualifier:
o [No]All
Indicates whether or not all collectible statistics (all
statistics for all areas) are to be collected. The All option
indicates that all statistics information is to be collected;
the Noall keyword indicates that only the base statistics
information is to be collected. You must also specify the
Output qualifier. Note: Logical Area information is not
written to the binary output file.
o [No]Area
Indicates whether or not the by-area statistics information
is to be collected in addition to the base statistics
information. When you specify the Area or Noarea option, the
Base statistics are implicitly selected. You must also specify
the Output qualifier.
When the Area option is specified, statistics for all existing
storage areas are written to the binary output file; you
cannot selectively choose specific storage areas for which
statistic information is to be collected.
The size of the by-area statistics output largely depends on
the total number of storage areas in the database, including
reserved storage areas. If the database contains a large
number of storage areas, it may not be advisable to use the
Options=Area qualifier.
Before you replay a binary output file that contains by-
area statistics, specify the following command to format the
display correctly:
$ SET TERM/NOTAB
You can then replay the statistics as follows:
$ RMU/SHOW STATISTICS/INPUT=main.stats
o Base (default)
Indicates that only the base set of statistics is to be
collected; this is the default for the Options qualifier. The base set of
statistics is identical to the one collected prior to Oracle
Rdb V6.1. You must also specify the Output qualifier. You
cannot specify Nobase.
o Compress
Compresses the statistics records written to the output
file specified by the Output qualifier. While replaying the
statistics, the RMU Show Statistics command determines if a
record was written using compression or not. If the record was
written using compression it is automatically decompressed.
If compression is used, the resultant binary file can be
read only by the RMU Show Statistics command. The format and
contents of a compressed file are not documented or accessible
to other applications.
o Confirm
Indicates that you wish to confirm before exiting from the
utility. You can also specify the Confirm option in the
configuration file using the CONFIRM_EXIT variable. A value
of TRUE indicates that you want to confirm before exiting the
utility and a value of FALSE (the default) indicates you do
not want to confirm before exiting the utility.
o Log_Stall_Alarm
If the Log_Stall_Alarm option is present when you use the
Stall_Log qualifier to write stall messages to a log file and
the Alarm qualifier to set an alarm interval, only those stalls
exceeding the duration specified by the Alarm qualifier are
written to the stall log output file.
o Log_Stall_Lock
If you use the Stall_Log qualifier to write stall messages to
a log file, use the Nolog_Stall_Lock option to prevent lock
information from being written to the log file. Lock information
is written to the log file whether you specify or omit the
Log_Stall_Lock option.
o [No]Row_Cache
Indicates that all row cache related screens and features of
the RMU Show Statistics facility are to be displayed. NoRow_
Cache indicates that these features are disabled.
o Screen_Name
Allows you to identify a screen capture by screen name. If you
issue an RMU Show Statistics command with the Options=Screen_
Name qualifier, the screen capture is written to a file that
has the name of the screen with all spaces, brackets, and
slashes replaced by underscores. The file has an extension of
.SCR. For example, if you use the Options=Screen_Name qualifier
and select the Write option on the Screen Transaction
Duration (Read/Write), the screen is written to a file named
TRANSACTION_DURATION_READ_WRITE.SCR.
o Update
Allows you to update fields in the Database Dashboard. See
the Performance Monitor Help or the Oracle Rdb7 Guide to
Database Performance and Tuning for information about using
and updating the Database Dashboard. You must have both the
OpenVMS WORLD and BYPASS privileges to update fields in the
Database Dashboard.
o Verbose
Causes the stall message logging facility to report a stall
message at each interval, even if the stall message has been
previously reported.
NOTE
Use of the Options=Verbose qualifier can result in an
enormous stall messages log file. Ensure that adequate
disk space exists for the log file when you use this
qualifier.
You can enable or disable the stall messages logging Verbose
option at run time by using the Tools menu and pressing the
exclamation point (!) key.
You can also specify the Verbose option in the configuration
file by using the STALL_LOG_VERBOSE variable. Valid keywords
are ENABLED or DISABLED.
Lock information is displayed only once per stall, even in
verbose mode, to minimize the output file size.
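For example, the following command collects all collectible
statistics (except logical area information, which is never written
to the binary output file) into a compressed binary output file; the
output file name shown is illustrative:
$ RMU/SHOW STATISTICS/NOINTERACTIVE/OUTPUT=MAIN.STATS -
_$ /OPTIONS=(ALL,COMPRESS) MF_PERSONNEL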
33.9.4.19 – Output
Output=file-name
Specifies a binary statistics file into which the statistics are
written. Information in the Stall Messages screen is not recorded
in this file, however. The information in the Stall Messages
screen is highly dynamic and thus cannot be replayed using the
Input qualifier.
NOTE
Statistics from the Stall Messages display are not collected
in the binary output file.
For information on the format of the binary output file (which
changed in Oracle Rdb V6.1), see the Oracle Rdb7 Guide to
Database Performance and Tuning.
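For example, the following command records one statistics sample per
minute into a binary file for later replay with the Input qualifier
(the file name and end time are illustrative):
$ RMU/SHOW STATISTICS/NOINTERACTIVE/TIME=60 -
_$ /OUTPUT=PERS.LOG/UNTIL="17:00:00" MF_PERSONNEL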
33.9.4.20 – Prompt Timeout
Prompt_Timeout=seconds
Noprompt_Timeout
Allows you to specify the user prompt timeout interval, in
seconds. The default value is 60 seconds.
If you specify the Noprompt_Timeout qualifier or the
Prompt_Timeout=0 qualifier, the RMU Show Statistics command does
not time out any
user prompts. Note that this can cause your database to hang.
NOTE
Oracle Corporation recommends that you do not use the
Noprompt_Timeout qualifier or the Prompt_Timeout=0
qualifier unless you are certain that prompts will always
be responded to in a timely manner.
If the Prompt_Timeout qualifier is specified with a value greater
than 0 but less than 10 seconds, the value 10 is used. The user
prompt timeout interval can also be specified using the PROMPT_
TIMEOUT configuration variable.
33.9.4.21 – Reopen Interval
Reopen_Interval=minutes
After the specified interval, closes the current output file and
opens a new output file without requiring you to exit from the
Performance Monitor. The new output file has the same name as the
previous output file, but the version number is incremented by 1.
This qualifier allows you to view data written to the output file
while the Performance Monitor is running.
If there has been no database activity at the end of the
specified interval, the current output file is not closed and
a new output file is not created.
Be careful not to use the DCL PURGE command inadvertently. Also
note that use of the DCL SET FILE/VERSION_LIMIT command causes
older versions of the output file to be deleted automatically.
Use of the Reopen_Interval qualifier is only valid when you also
specify the Output qualifier.
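For example, the following command closes the current output file and
opens a new version of it every 60 minutes (the file name and
interval are illustrative):
$ RMU/SHOW STATISTICS/NOINTERACTIVE/OUTPUT=PERS.LOG -
_$ /REOPEN_INTERVAL=60 MF_PERSONNEL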
33.9.4.22 – Reset
Specifies that you want the Performance Monitor to reset your
display to zero. The Reset qualifier has the same effect as
selecting the reset option from the interactive screen (except
when you specify the Reset qualifier, values are reset before
being initially displayed).
Note that this qualifier resets only the values being displayed
to your output device; it does not reset the values in the
database global section, nor does it affect the data collected
in an output file.
The default behavior of the Performance Monitor is to display
each change in values that has occurred since the database was
opened. To display only the value changes that have occurred
since the Performance Monitor was invoked, specify the Reset
qualifier, or immediately select the on-screen reset option when
statistics are first displayed.
The Reset qualifier does not affect the values that are written
to the binary output file (created when you specify the Output
qualifier). Specify the Reset qualifier when you replay the
output file if you want the replay to display only the change in
values that occurred between the time the Performance Monitor was
invoked (with the Output qualifier) and the monitoring session
ended.
33.9.4.23 – Screen
Screen=screen-name
Specifies the first screen to be displayed. This is particularly
useful when you are using the Performance Monitor to
interactively monitor stalled processes. For example, the
following command automatically warns the system operator of
excessive stalls:
$ RMU/SHOW STATISTICS/ALARM=5/NOTIFY=OPER12/SCREEN="Stall Messages" -
_$ MF_PERSONNEL
The following list describes the syntax of the screen-name
argument:
o You can use any unique portion of the desired screen name for
the screen-name argument. For example, the following has the
same results as the preceding example:
$ RMU/SHOW STATISTICS/ALARM=5/NOTIFY=OPER12/SCREEN="Stall" -
_$ MF_PERSONNEL.RDB
o Except with regard to case, the unique portion of the screen
name that you supply must exactly match the equivalent portion
of the actual screen name.
For example, Screen="Stall" is equivalent to Screen="STALL";
however, Screen="Stalled" does not match.
o If the specified screen-name does not match any known screen
name, the display starts with the Summary IO Statistics screen
(the default first screen). No error message is produced.
o If the screen name contains spaces, enclose the screen-name in
quotes.
o You cannot specify the "by-lock" or "by-area" screens.
If you specify the Nointeractive qualifier, the Screen qualifier
is ignored.
33.9.4.24 – Stall Log
Stall_Log=file-spec
Specifies that stall messages are to be written to the specified
file. This can be useful when you notice a great number of stall
messages being generated, but do not have the resources on hand
to immediately investigate and resolve the problem. The file
generated by the Stall_Log qualifier can be reviewed later so
that the problem can be traced and resolved.
The stall messages are written to the file in a format similar to
the Stall Messages screen. Stall messages are written to the file
at the same rate as the screen refresh rate. (The refresh rate
is set with the Time qualifier or from within the Performance
Monitor with the Set_rate on-screen menu option.) Specifying a
long refresh interval minimizes the size of the file, but causes
many stall messages to be missed. Specifying a short refresh
interval produces a large log file, but captures more of the
stall messages generated.
You do not need to be displaying the Stall Messages screen to
record the stall messages to the log file. The stall log is
maintained regardless of which screen, if any, is displayed.
By default, stall messages are not logged to a file.
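For example, the following command records stall messages
noninteractively so they can be reviewed later (the log file name
STALLS.LOG is illustrative):
$ RMU/SHOW STATISTICS/NOINTERACTIVE/STALL_LOG=STALLS.LOG -
_$ MF_PERSONNEL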
33.9.4.25 – Time
Time=integer
Specifies the statistics collection interval in seconds. If
you omit this qualifier, a sample collection is made every 3
seconds. The integer has a normal range of 1 to 180 (1 second
to 3 minutes). However, if you specify a negative number for the
Time qualifier, the RMU Show Statistics command interprets the
number as hundredths of a second. For example, Time=-20 specifies
an interval of 20/100 or 1/5 of a second.
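For example, the following command (shown for illustration)
samples statistics every half second:
$ RMU/SHOW STATISTICS/TIME=-50 MF_PERSONNEL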
If you are running the RMU Show Statistics command interactively,
it updates the screen display at the specified interval.
If you also use the Output qualifier, a binary statistics record
is written to the output file at the specified interval. A
statistics record is not written to this file if no database
activity has occurred since the last record was written.
33.9.4.26 – Until
Until="date-time"
Specifies the time the statistics collection ends. When this
point is reached, the RMU Show Statistics command terminates
and control returns to the system command level. When the
RMU Show Statistics command is executed in a batch job, the batch
job terminates at the time specified.
An example of using the Until qualifier follows:
$ DEFINE LIB$DT_INPUT_FORMAT "!MAU !DB, !Y4 !H04:!M0:!S0.!C2"
$ RMU/SHOW STATISTICS /UNTIL="JUNE 16, 1996 17:00:00.00" -
_$ MF_PERSONNEL
This stops execution of the RMU Show Statistics command at 5 P.M.
on June 16, 1996. You can omit the date if you wish to use the
default of today's date.
You can use either an absolute or a delta value to specify the
date and time.
If you do not use the Until qualifier, the RMU Show Statistics
command continues until you terminate it manually. In an
interactive session, terminate the command by pressing Ctrl/Z
or by selecting Exit from the menu. When you are running the RMU
Show Statistics command with the Nointeractive qualifier from a
terminal, terminate the command by pressing Ctrl/C or Ctrl/Y and
then selecting Exit. When you are running the RMU Show Statistics
command in a batch job, terminate the command by deleting the
batch job.
33.9.5 – Usage Notes
o Refer to the Oracle Rdb7 Guide to Database Performance and
Tuning for complete information about the RMU Show Statistics
command, including information about using formatted binary
output files from the RMU Show Statistics command.
o To use the RMU Show Statistics command for a database, you
must have the RMU$SHOW privilege in the root file ACL for the
database or the OpenVMS SYSPRV, BYPASS, or WORLD privilege.
To use the RMU Show Statistics command to display statistics
about other users, you must have the OpenVMS WORLD privilege.
To use the RMU Show Statistics command to update fields in
the Database Dashboard (specified with the Options=Update
qualifier), you must have both the OpenVMS WORLD and BYPASS
privileges.
o If a database recovery process is underway, you cannot
exit the Performance Monitor using Ctrl/Z or "E" from the
interactive display menu. You must use Ctrl/Y or wait for the
recovery process to complete. Exiting from the Performance
Monitor causes Oracle RMU to request several locks; however,
these locks cannot be granted because the recovery process
stalls all new lock requests until the recovery is complete.
o Since Oracle Rdb V4.1, a number of changes have been made to
the data structures used for the RMU Show Statistics command.
If you are having a problem with an application that accesses
the RMU Show Statistics field structures, recompile your
application with SYS$LIBRARY:RMU$SHOW_STATISTICS.CDO (or
RMU$SHOW_STATISTICSnn.CDO in a multiversion environment, where
nn is the version of Oracle Rdb you are using).
o The Oracle Rdb RMU Show Statistics command displays process
CPU times in excess of 1 day. Because the width of the CPU
time display is limited, the following CPU time display
formats are used:
- For CPU time values less than 1 day: "HH:MM:SS.CC"
- For CPU time values less than 100 days but more than 1 day:
"DD HH:MM"
- For CPU time values more than 100 days: "DDD HH:MM"
o The following caveats apply to the Cluster Statistics
Collection and Presentation feature:
- Up to 95 cluster nodes can be specified. However, use
cluster statistics collection prudently, as the system
overhead in collecting the remote statistics may be
substantial depending on the amount of information being
transmitted on the network.
- Cluster statistics are collected at the specified display
refresh rate. Therefore, set the display refresh rate to
a reasonable rate based on the number of cluster nodes
being collected. The default refresh rate of 3 seconds is
reasonable for most remote collection loads.
- If you specify the Cluster qualifier, the list of cluster
nodes applies to any database accessed during the Show
Statistics session. When you access additional databases
using the Switch Database option, the same cluster nodes
are automatically accessed. However, any nodes that you
added manually using the Cluster Statistics menu are
not automatically added to the new database's remote
collection.
In other words, manually adding and deleting cluster nodes
affects only the current database and does not apply to
any other database that you may have accessed during the
session. For example, when you run the Show Statistics
utility on node ALPHA3 with manually added node BONZAI,
subsequently switching to BONZAI as the current node will
not display cluster statistics from node ALPHA3 unless you
manually add that node. Furthermore, switching back to node
ALPHA3 as the current node loses the previous collection of
node BONZAI because it was manually added.
- Both DECnet and TCP/IP network protocols are supported.
By default, the DECnet protocol is used. To explicitly
specify which network protocol to use, define the RDM$BIND_
STT_NETWORK_TRANSPORT logical name as DECNET or TCPIP. The
RDM$BIND_STT_NETWORK_TRANSPORT logical name must be defined
to the same value on both the local and remote cluster nodes.
The RDM$BIND_STT_NETWORK_TRANSPORT logical name can be
specified in LNM$FILE_DEV on the local node but must be
specified in the LNM$SYSTEM_TABLE on all remote nodes.
NOTE
There is no command qualifier to specify the network
protocol.
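For example, the following command (shown for illustration)
selects the TCP/IP protocol by defining the logical name in the
system logical name table:
$ DEFINE/SYSTEM RDM$BIND_STT_NETWORK_TRANSPORT TCPIP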
- The Output qualifier continues to work as usual, but when
in cluster mode writes the cluster statistics information
to the binary output file.
- The Cluster qualifier cannot be specified with the Input
qualifier. Furthermore, the online selection of cluster
nodes is not available when you use the Input qualifier.
- While the collection and presentation feature is active,
all on-screen menu options continue to operate as usual.
This includes the time-plot, scatter-plot, screen pause,
and various other options.
- There is no way to exclude the current node from statistics
collection. Log in to another node if you want to do this.
- The cluster collection of per-process stall information
automatically detects the binding or unbinding of processes
to cluster databases. There is no need to manually refresh
the database information on the current node.
- If the database is not currently open on the specified
node, Oracle RMU still attempts to collect cluster
statistics. However, you must open the remote database
prior to regular process attaches.
- When you display any of the per-process screens that
support cluster statistics collection, such as the Stall
Messages screen, you can zoom in on any of the displayed
processes to show which node that process is using.
- Using the Cluster Statistics submenu from the Tools menu,
it is also possible to collect statistics from all open
database nodes using the Collect From Open Database Nodes
menu option. This option simplifies the DBA's job of
remembering where the database is currently open. However,
subsequently opened nodes are not automatically added to
the collection; these must be manually added.
- The cluster statistics collection is an intracluster
feature in that it works only on the same database, using
the same device and directory specification used to run the
initial RMU Show Statistics command (that is, on a shared
disk). The cluster statistics collection does not work
across clusters (intercluster).
- When you replay a binary output file, the screen header
region accurately reflects the number of cluster nodes
whose statistics are represented in the output file.
33.9.6 – Examples
Example 1
The following example directs the results of the RMU Show
Statistics command to an output file:
$ RMU/SHOW STATISTICS MF_PERSONNEL/OUTPUT=PERS.LOG
Example 2
The following example formats the binary results created in the
previous example and produces a readable display:
$ RMU/SHOW STATISTICS/INPUT=PERS.LOG
Example 3
The following DCL script shows a complete example of how to
create an excessive stall notification server using the operator
notification facility. To execute this script, submit it to any
queue on the node from which you want to run the script. Supply
the parameters as follows:
o P1 is the database pathname.
o P2 is the completion time.
o P3 is the set of operators to be notified. You must enclose
the list of operators in quotes.
$ VERIFY = F$VERIFY(0)
$ SET NOON
$!
$! Get the database name.
$!
$ IF P1 .EQS. "" THEN INQUIRE P1 "_database"
$!
$! Get the termination date/time.
$!
$ IF P2 .EQS. "" THEN INQUIRE P2 "_until"
$!
$! Get the operator classes.
$!
$ IF P3 .EQS. "" THEN INQUIRE P3 "_operators"
$!
$ RMU/SHOW STATISTICS/TIME=1/NOBROADCAST -
/NOINTERACTIVE /UNTIL="''P2'" /ALARM=5 /NOTIFY='P3 -
'P1
$ VERIFY = F$VERIFY(VERIFY)
$ EXIT
Example 4
You can use the Lock_Timeout or Deadlock qualifiers to construct
a Lock Event Logging server. The following OpenVMS DCL script
shows how to create a server that logs both lock timeout and
lock deadlock events on the MF_PERSONNEL database for the next 15
minutes:
$ RMU/SHOW STATISTICS /NOHISTOGRAM /TIME=1 /NOINTERACTIVE -
_$ /LOCK_TIMEOUT_LOG=TIMEOUT.LOG /DEADLOCK_LOG=DEADLOCK.LOG -
_$ /NOBROADCAST /UNTIL="+15:00" MF_PERSONNEL
Example 5
The following example shows stall log information first with and
then without the lock information:
$ RMU /SHOW STATISTICS /NOINTERACTIVE /STALL_LOG=SYS$OUTPUT: -
_$ DUA0:[DB]MFP.RDB
Oracle Rdb X7.1-00 Performance Monitor Stall Log
Database DPA500:[RDB_RANDOM.RDB_RANDOM_TST_247]RNDDB.RDB;1
Stall Log created 4-SEP-2001 11:27:03.96
11:27:03.96 0002B8A1:1 11:27:03.67 waiting for record 118:2:2 (PR)
State... Process.ID Process.name... Lock.ID. Rq Gr Queue "record 118:2:2"
Blocker: 000220A7 RND_TST_24716 0F019E52 EX Grant
Waiting: 0002B8A1 RND_TST_24715 4500C313 PR Wait
11:27:03.96 0002B8A8:1 11:27:02.32 waiting for record 101:3:0 (EX)
State... Process.ID Process.name... Lock.ID. Rq Gr Queue "record 101:3:0"
Blocker: 000220AD RND_TST_24710 0B00176A PR Grant
Blocker: 000220A7 RND_TST_24716 52018A3F PR Grant
Waiting: 0002B8A8 RND_TST_2474 3C00B5AF EX PR Cnvrt
11:27:03.96 0002B89C:1 11:27:00.15 waiting for record 114:4:1 (PR)
State... Process.ID Process.name... Lock.ID. Rq Gr Queue "record 114:4:1"
Blocker: 000220A7 RND_TST_24716 180033CC EX Grant
Waiting: 0002B89C RND_TST_2479 110066BA PR Wait
$ RMU /SHOW STATISTICS /NOINTERACTIVE /STALL_LOG=SYS$OUTPUT: -
_$ DUA0:[DB]MFP.RDB /OPTIONS=NOLOG_STALL_LOCK
Oracle Rdb X7.1-00 Performance Monitor Stall Log
Database DPA500:[RDB_RANDOM.RDB_RANDOM_TST_247]RNDDB.RDB;1
Stall Log created 4-SEP-2001 11:28:34.68
11:28:34.69 0002B8B8:1 11:28:33.69 waiting for logical area 146 (PR)
11:28:34.69 0002B8A8:1 11:28:32.76 waiting for record 114:4:2 (PR)
11:28:34.69 0002B8B3:1 11:28:33.06 waiting for record 114:4:2 (PR)
11:28:34.69 0002B8B0:1 11:28:31.96 waiting for record 111:7:7 (EX)
33.10 – System
Displays a summary of which databases are in use on a particular
node, the monitor log file specification, the number of monitor
buffers available, and whether after-image journal (AIJ) backup
operations have been suspended.
This command is the same as the RMU Show Users command, except
that it has no root-file-spec parameter. You can use it to see
systemwide user information only.
33.10.1 – Description
The RMU Show System command displays information about all active
database users on a particular node.
33.10.2 – Format
RMU/Show System

Command Qualifier                     Default

/Output[=file-name]                   /Output=SYS$OUTPUT
33.10.3 – Command Qualifiers
33.10.3.1 – Output
Output[=file-name]
Specifies the name of the file where output will be sent. The
default is SYS$OUTPUT. If you specify a file name without an
extension, the default output file extension is .lis.
33.10.4 – Usage Notes
o To use the RMU Show System command, you must have the OpenVMS
WORLD privilege.
o When the database monitor is completely idle, identified in
the output of the RMU Show Users command by the "no databases
accessed on this node" message, the number of available
monitor buffers should be 1 less than the maximum. During
periods of monitor activity, it is normal for the number
of available monitor buffers to be less than the maximum,
depending on how much work remains for the monitor to process.
33.10.5 – Examples
Example 1
The following command lists the file specification for the
monitor log file and databases currently in use.
$ RMU/SHOW SYSTEM
Oracle Rdb V7.0-64 on node NODEA 27-JUN-2002 16:23:43.92
- monitor started 26-JUN-2002 06:33:07.33 (uptime 1 09:50:36)
- monitor log filename is "$111$DUA366:[RDMMON_LOGS]RDMMON701_NODEA.LOG"
database $111$DUA619:[JONES.DATABASES.V70]MF_PERSONNEL.RDB;1
- first opened 27-JUN-2002 16:23:42.11 (elapsed 0 00:00:01)
* database is opened by an operator
database NODEB$DKB200:[RDB$TEST_SYSTEM.A70_RMU_4Z.SCRATCH]M_TESTDB.RDB;3
- first opened 26-JUN-2002 23:24:41.55 (elapsed 0 16:59:02)
* database is opened by an operator
* After-image backup operations temporarily suspended from this node
- current after-image journal file is DISK$RDBTEST8:[RDB$TEST_SYSTEM.A70_RMU
_4Z]TEST3.AIJ;2
- AIJ Log Server is active
- 1 active database user
33.11 – Users
Displays information about active database users, the monitor
log file specification, the number of monitor buffers available,
and whether after-image journal (AIJ) backup operations have
been suspended. It allows you to see the user activity of specified
databases on a specific node, and identifies the various nodes in
the VMScluster where the database is currently open and available
for use. In addition, if you are using Oracle Rdb for OpenVMS
Alpha, this command indicates whether or not system space global
sections are enabled.
If you are interested in information on users for a cluster, use
the RMU Dump command with the Users qualifier.
33.11.1 – Description
The RMU Show Users command displays information about all active
database users or users of a particular database, the file
specification for the monitor log file, the number of monitor
buffers available, and whether AIJ backup operations have been
suspended.
This command also displays global buffer information for the node
on which the RMU Show Users command is issued and displays global
buffer information for the specified database only if global
buffers are enabled for that database.
33.11.2 – Format
RMU/Show Users [root-file-spec]

Command Qualifier                     Default

/Output[=file-name]                   /Output=SYS$OUTPUT
33.11.3 – Parameters
33.11.3.1 – root-file-spec
The root file specification of the database for which you want
information. This parameter is optional. If you specify it, only
users of that database are shown. Otherwise, all users of all
active databases on your current node are shown.
33.11.4 – Command Qualifiers
33.11.4.1 – Output
Output[=file-name]
Specifies the name of the file where output will be sent. The
default is SYS$OUTPUT. If you specify a file name without an
extension, the default output file extension is .lis.
33.11.5 – Usage Notes
o To use the RMU Show Users command for a specified database,
you must have the RMU$SHOW, RMU$BACKUP, or RMU$OPEN privilege
in the root file access control list (ACL) of the database, or
the OpenVMS WORLD privilege.
To use the RMU Show Users command without specifying a
database, you must have the RMU$SHOW, RMU$BACKUP, or RMU$OPEN
privilege in the root file ACL of the database or databases,
and the OpenVMS WORLD privilege.
o When the database monitor is completely idle, identified in
the output of the RMU Show Users command by the "no databases
accessed on this node" message, the number of available
monitor buffers should be 1 less than the maximum. During
periods of monitor activity, it is normal for the number
of available monitor buffers to be less than the maximum,
depending on how much work remains for the monitor to process.
33.11.6 – Examples
Example 1
The following command lists current user information in the file
DBUSE.LIS:
$ RMU/SHOW USERS/OUTPUT=DBUSE
Example 2
The following example shows all active users:
$ RMU/SHOW USER
Oracle Rdb V7.0-64 on node NODEA 27-JUN-2002 16:25:49.64
- monitor started 26-JUN-2002 06:33:07.33 (uptime 1 09:52:42)
- monitor log filename is "$DISK1:[LOGS]MON701_NODEA.LOG;12"
database DISK2:[TEST]M_TESTDB.RDB;3
- first opened 26-JUN-2002 23:24:41.55 (elapsed 0 17:01:08)
* database is opened by an operator
* After-image backup operations temporarily suspended from this node
- current after-image journal file is DISK3:[TEST1]TEST3.AIJ;2
- AIJ Log Server is active
- 1 active database user
- database also open on these nodes:
NODEB
- 23225948:1 - RDM_4 - non-utility server, USER1 - active user
- image DISK4:[SYS1.SYSCOMMON.][SYSEXE]RDMALS701.EXE;567
33.12 – Version
Displays the currently executing Oracle Rdb software version
number and the version of Oracle Rdb required to access the
specified database.
33.12.1 – Description
This command is useful when you have multiple versions of Oracle
Rdb running on your system and perhaps multiple databases. If
the currently executing version of Oracle Rdb is not the version
required to access the database, change the current version of
Oracle Rdb to the required version. See Example 3 in the Examples
help entry under this command.
33.12.2 – Format
RMU/Show Version [root-file-spec]

Command Qualifier                     Default

/Output[=file-name]                   /Output=SYS$OUTPUT
33.12.3 – Parameters
33.12.3.1 – root-file-spec
A database root file specification. The default file extension
is .rdb. If you do not specify a database root file, RMU Show
Version displays only the version of Oracle Rdb under which
Oracle RMU is currently running.
33.12.4 – Command Qualifiers
33.12.4.1 – Output
Output[=file-name]
Specifies the name of the file where output will be sent. The
default is SYS$OUTPUT. If you specify a file name without an
extension, the default output file extension is .lis.
33.12.5 – Usage Notes
o You do not need any special privileges to use the RMU Show
Version command.
o When the RMU Show Version command executes, it sets the
following two DCL local symbols:
- RMU$RDB_VERSION
Set to the currently executing version of Oracle Rdb
- RMU$DATABASE_VERSION
Set to the version of Oracle Rdb required to access the
specified database
If you want to set the DCL symbols, RMU$RDB_VERSION and
RMU$DATABASE_VERSION, only and do not want the RMU Show
Version output, specify the null device as the file name with
the Output qualifier. For example:
$ RMU/SHOW VERSION MF_PERSONNEL /OUTPUT=NL:
$ SHOW SYMBOL RMU$RDB_VERSION
RMU$RDB_VERSION = "7.0"
$ SHOW SYMBOL RMU$DATABASE_VERSION
RMU$DATABASE_VERSION = "6.1"
33.12.6 – Examples
Example 1
The following command displays the current version of Oracle Rdb
software:
$ RMU/SHOW VERSION
Executing RMU for Oracle Rdb V7.0-64
Example 2
The following command displays the current version of Oracle Rdb
software and the version of Oracle Rdb required to access the mf_
personnel database:
$ RMU/SHOW VERSION MF_PERSONNEL
Executing RMU for Oracle Rdb V7.0-64
Database DISK:[MYDIR]MF_PERSONNEL.RDB;1 requires version 7.0
Example 3
The following example demonstrates how you might use the RMU Show
Version command to determine how to access a database that is
incompatible with the currently executing version of Oracle Rdb:
$ ! The RMU Show Version command tells you that the currently
$ ! executing version of Oracle Rdb is Version 7.0, but
$ ! that mf_personnel requires Version 6.1.
$
$ RMU/SHOW VERSION MF_PERSONNEL
Executing RMU for Oracle Rdb V7.0-00
Database DISK:[MYDIR]MF_PERSONNEL.RDB;1 requires version 6.1
$
$ ! If you ignore this information and attempt to attach to the
$ ! database, you receive an error.
$
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
%SQL-F-ERRATTDEC, Error attaching to database MF_PERSONNEL
-RDB-F-WRONG_ODS, the on-disk structure of database filename is
not supported by version of facility being used
-RDMS-F-ROOTMAJVER, database format 61.0 is not compatible
with software version 70.0
SQL> EXIT;
$ ! Assign the currently executing version of Oracle Rdb to
$ ! RMU$PREV_VERSION
$ !
$ rmu$prev_version := 'rmu$rdb_version'
$ !
$ ! Use the RDB$SETVER.COM command file to set the version of
$ ! Oracle Rdb to the version required by mf_personnel.
$ ! (For more information on the RDB$SETVER.COM command
$ ! file, see the Oracle Rdb Installation and Configuration Guide.)
$ !
$ @SYS$LIBRARY:RDB$SETVER 'RMU$DATABASE_VERSION'
$ !
$ ! Re-execute the RMU Show Version command to confirm that you have
$ ! the version of Oracle Rdb set correctly.
$ !
$ RMU/SHOW VERSION MF_PERSONNEL
Executing RMU for Oracle Rdb V6.1-00
Database DISK:[MYDIR]MF_PERSONNEL.RDB;1 requires version 6.1
$ ! Invoke SQL and attach to the mf_personnel database.
$ !
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> SHOW TABLES
User tables in database with filename MF_PERSONNEL
CANDIDATES
COLLEGES
CURRENT_INFO A view.
CURRENT_JOB A view.
CURRENT_SALARY A view.
DEGREES
DEPARTMENTS
EMPLOYEES
JOBS
JOB_HISTORY
RESUMES
SALARY_HISTORY
WORK_STATUS
SQL> EXIT
$ !
$ !Reset the executing version of Oracle Rdb to the original setting.
$ !
$ @SYS$LIBRARY:RDB$SETVER 'RMU$PREV_VERSION'
34 – Unload
There are two RMU Unload commands, as follows:
o An RMU Unload command without the After_Journal qualifier
copies the data from a specified table or view of the database
into either a specially structured file that contains both the
data and the metadata or into an RMS file that contains data
only.
o An RMU Unload command with the After_Journal qualifier
extracts added, modified, and deleted record contents from
committed transactions from specified tables in one or more
after-image journal files.
34.1 – Database
Copies the data from a specified table or view of the database
into one of the following:
o A specially structured file that contains both the data and
the metadata (.unl).
o An RMS file that contains data only (.unl). This file is
created when you specify the Record_Definition qualifier.
(The Record_Definition qualifier also creates a second file,
with file extension .rrd, that contains the metadata.)
Data from the specially structured file can be reloaded by using
the RMU Load command only. Data from the RMS file can be reloaded
using the RMU Load command or by using an alternative utility
such as DATATRIEVE.
34.1.1 – Description
The RMU Unload command copies data from a specified table or view
and places it in a specially structured file or in an RMS file.
Be aware that the RMU Unload command does not remove data from
the specified table; it merely makes a copy of the data.
The RMU Unload command can be used to do the following:
o Extract data for an application that cannot access the Oracle
Rdb database directly.
o Create an archival copy of data.
o Perform restructuring operations.
o Sort data by defining a view with a sorted-by clause, then
unloading that view.
The specially structured files created by the RMU Unload command
contain metadata for the table that was unloaded. The RMS files
created by the RMU Unload command contain only data; the metadata
can be found either in the data dictionary or in the .rrd file
created using the Record_Definition qualifier. Specify the
Record_Definition qualifier to exchange data with an application
that uses RMS files.
The LIST OF BYTE VARYING (segmented string) data type cannot be
unloaded into an RMS file; however, it can be unloaded into the
specially structured file type.
Data type conversions are valid only if Oracle Rdb supports the
conversion.
The RMU Unload command executes a read-only transaction to gather
the metadata and user data to be unloaded. It is compatible with
all operations that do not require exclusive access.
34.1.2 – Format
RMU/Unload root-file-spec table-name output-file-name

Command Qualifiers                                    Defaults

/Allocation=n                                         /Allocation=2048
/Buffers=n                                            See description
/Commit_Every=n                                       None
/[No]Compression[=options]                            /Nocompression
/Debug_Options={options}                              See description
/Delete_Rows                                          None
/[No]Error_Delete                                     See description
/Extend_Quantity=number-blocks                        /Extend_Quantity=2048
/Fields=(column-name-list)                            See description
/Flush={Buffer_End|On_Commit}                         See description
/[No]Limit_To=n                                       /Nolimit_To
/Optimize={options}                                   None
/Record_Definition={([No]File|Path)=name,options}     See description
/Reopen_Count=n                                       None
/Row_Count=n                                          See description
/Statistics_Interval=seconds                          See description
/Transaction_Type[=(transaction_mode,options...)]     See description
/[No]Virtual_Fields[=[No]Automatic,[No]Computed_By]   /Novirtual_Fields
34.1.3 – Parameters
34.1.3.1 – root-file-spec
The root file specification of the database from which tables or
views will be unloaded. The default file extension is .rdb.
34.1.3.2 – table-name
The name of the table or view to be unloaded, or its synonym.
34.1.3.3 – output-file-name
The destination file name. The default file extension is .unl.
34.1.4 – Command Qualifiers
34.1.4.1 – Allocation
Allocation=n
Enables you to preallocate the generated output file. The
default allocation is 2048 blocks; when the file is closed it
is truncated to the actual length used.
If the value specified for the Allocation qualifier is less
than 65535, it becomes the new maximum for the Extend_Quantity
qualifier.
34.1.4.2 – Buffers
Buffers=n
Specifies the number of database buffers used for the unload
operation. If no value is specified, the default value for
the database is used. Although this qualifier might affect
the performance of the unload operation, the default number of
buffers for the database usually allows adequate performance.
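For example, the following command (the buffer count and file
names are illustrative) unloads a table using 500 database
buffers:
$ RMU/UNLOAD/BUFFERS=500 MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL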
34.1.4.3 – Commit Every
Commit_Every=n
Turns the selection query into a WITH HOLD cursor so that the
data stream is not closed by a commit. Refer to the Oracle Rdb7
SQL Reference Manual for more information about the WITH HOLD
clause.
34.1.4.4 – Compression
Compression[=options]
NoCompression
Data compression is applied to the user data unloaded to the
internal (interchange) format file. Table rows, null byte
vectors, and LIST OF BYTE VARYING data are compressed using
either the LZW (Lempel-Ziv-Welch) technique or the ZLIB algorithm
developed by Jean-loup Gailly and Mark Adler. Table metadata
(column names and attributes) is never compressed, and the
resulting file remains a structured interchange file. Compression
makes the resulting data file more compact, using less disk space
and permitting faster transmission over communication lines. This
file can also be processed using the RMU Dump Export command.
The default is Nocompression.
This qualifier accepts the following optional keywords (ZLIB is
the default if no compression algorithm is specified):
o LZW
Selects the LZW compression technique.
o ZLIB
Selects the ZLIB compression technique. This can be modified
using the LEVEL option.
o LEVEL=number
ZLIB allows further tuning with the LEVEL option, which accepts
a numeric level between 1 and 9. The default of 6 is usually
a good trade-off between result file size and the CPU cost of
the compression.
o EXCLUDE_LIST[=(column-name,...)]
It is possible that data in LIST OF BYTE VARYING columns is
already in a compressed format (for instance, images stored as
JPG data) and therefore need not be compressed by RMU Unload.
In fact, compression in such cases might actually cause the
output to grow. The EXCLUDE_LIST option disables compression
for LIST OF BYTE VARYING columns. Specific column names can be
listed; if the list is omitted, all LIST OF BYTE VARYING columns
are excluded from compression.
Only the user data is compressed. Therefore, additional
compression may be applied using various third party compression
tools, such as ZIP. It is not the goal of RMU to replace such
tools.
The Record_Definition (or Rms_Record_Def) qualifier is not
compatible with the Compression qualifier. Note that the TRIM
option for DELIMITED format output can be used to trim trailing
spaces from VARCHAR data.
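For example, the following command (file names are illustrative)
unloads with ZLIB compression at level 9, excluding all LIST OF
BYTE VARYING columns from compression:
$ RMU/UNLOAD/COMPRESSION=(ZLIB,LEVEL=9,EXCLUDE_LIST) -
_$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL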
34.1.4.5 – Debug Options
Debug_Options={options}
The Debug_Options qualifier allows you to turn on certain debug
functions. The Debug_Options qualifier accepts the following
options:
o [NO]TRACE
Traces the qualifier and parameter processing performed by
RMU Unload. In addition, the query executed to read the table
data is annotated with the TRACE statement at each Commit
(controlled by Commit_Every qualifier). When the logical name
RDMS$SET_FLAGS is defined as "TRACE", then a line similar to
the following is output after each commit is performed.
~Xt: 2009-04-23 15:16:16.95: Commit executed.
The default is NOTRACE.
$RMU/UNLOAD/REC=(FILE=WS,FORMAT=CONTROL) SQL$DATABASE WORK_STATUS WS/DEBUG=TRACE
Debug = TRACE
* Synonyms are not enabled
Row_Count = 500
Message buffer: Len: 13524
Message buffer: Sze: 27, Cnt: 500, Use: 4 Flg: 00000000
%RMU-I-DATRECUNL, 3 data records unloaded.
o [NO]FILENAME_ONLY
When the qualifier Record_Definition=Format:CONTROL is used,
the name of the created unload file is written to the control
file (.CTL). When the keyword FILENAME_ONLY is specified, RMU
Unload will prune the output file specification to show only
the file name and type. The default is NOFILENAME_ONLY.
$RMU/UNLOAD/REC=(FILE=TT:,FORMAT=CONTROL) SQL$DATABASE WORK_STATUS WS/DEBUG=
FILENAME
--
-- SQL*Loader Control File
-- Generated by: RMU/UNLOAD
-- Version: Oracle Rdb X7.2-00
-- On: 23-APR-2009 11:12:46.29
--
LOAD DATA
INFILE 'WS.UNL'
APPEND
INTO TABLE "WORK_STATUS"
(
STATUS_CODE POSITION(1:1) CHAR NULLIF (RDB$UL_NB1 = '1')
,STATUS_NAME POSITION(2:9) CHAR NULLIF (RDB$UL_NB2 = '1')
,STATUS_TYPE POSITION(10:23) CHAR NULLIF (RDB$UL_NB3 = '1')
-- NULL indicators
,RDB$UL_NB1 FILLER POSITION(24:24) CHAR -- indicator for
STATUS_CODE
,RDB$UL_NB2 FILLER POSITION(25:25) CHAR -- indicator for
STATUS_NAME
,RDB$UL_NB3 FILLER POSITION(26:26) CHAR -- indicator for
STATUS_TYPE
)
%RMU-I-DATRECUNL, 3 data records unloaded.
o [NO]HEADER
This keyword controls the output of the header in the control
file. To suppress the header use NOHEADER. The default is
HEADER.
o APPEND, INSERT, REPLACE, TRUNCATE
These keywords control the text that is output prior to the
INTO TABLE clause in the control file. The default is APPEND,
and only one of these options can be specified.
34.1.4.6 – Delete Rows
Specifies that Oracle Rdb delete rows after they have been
unloaded from the database. You can use this qualifier with the
Commit_Every qualifier to process small batches of rows.
If constraints, triggers, or table protection prevent the
deletion of rows, the RMU Unload operation will fail. The Delete_
Rows qualifier cannot be used with non-updatable views, such as
views containing joins or aggregates (UNION or GROUP BY).
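For example, the following command sketch unloads rows from the
JOB_HISTORY table of the mf_personnel sample database and deletes
them in batches of 100 rows (the table name and batch size are
illustrative):
$ RMU/UNLOAD/DELETE_ROWS/COMMIT_EVERY=100 -
_$ MF_PERSONNEL JOB_HISTORY JOB_HISTORY.UNL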
34.1.4.7 – Error Delete
Noerror_Delete
Specifies whether the unload and record definition files should
be deleted on error. By default, the RMU Unload command deletes
the unload and record definition files if an unrecoverable error
occurs that causes an abnormal termination of the unload command
execution. Use the Noerror_Delete qualifier to retain the files.
If the Delete_Rows qualifier is specified, the default for this
qualifier is Noerror_Delete. This default is necessary to allow
you to use the unload and record definition files to reload the
data if an unrecoverable error has occurred after the delete of
some of the unloaded rows has been committed. Even if the unload
file is retained, you may not be able to reload the data using the
RMU Load command if the error is severe enough to prevent the RMU
error handler from continuing to access the unload file once the
error is detected.
If the Delete_Rows qualifier is not specified, the default is
Error_Delete.
34.1.4.8 – Extend Quantity
Extend_Quantity=number-blocks
Sets the size, in blocks, by which the unload file (.unl) can
be extended. The minimum value for the number-blocks parameter
is 1; the maximum value is 65535. If you provide a value for the
Allocation qualifier that is less than 65535, that value becomes
the maximum you can specify.
If you do not specify the Extend_Quantity qualifier, the default
extension size for .unl files is 2048 blocks.
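For example, the following command sketch extends the unload file
in 4096-block increments to reduce file-extension overhead during
a large unload (the value and file names are illustrative):
$ RMU/UNLOAD/EXTEND_QUANTITY=4096 -
_$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL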
34.1.4.9 – Fields
Fields=(column-name-list)
Specifies the column or columns of the table or view to be
unloaded from the database. If you list multiple columns,
separate the column names with a comma, and enclose the list
of column names within parentheses. This qualifier also specifies
the order in which the columns should be unloaded if that order
differs from what is defined for the table or view. Changing the
structure of the table or view could be useful when restructuring
a database or when migrating data between two databases with
different metadata definitions. The default is all the columns
defined for the table or view in the order defined.
34.1.4.10 – Flush
Flush=Buffer_End
Flush=On_Commit
Controls when internal RMS buffers are flushed to the unload
file. By default, the RMU Unload command flushes any data left
in the internal RMS file buffers only when the unload file is
closed. The Flush qualifier changes that behavior. You must use
one of the following options with the Flush qualifier:
o Buffer_End
The Buffer_End option specifies that the internal RMS buffers
be flushed to the unload file after each unload buffer has
been written to the unload file.
o On_Commit
The On_Commit option specifies that the internal RMS buffers
be flushed to the unload file just before the current unload
transaction is committed.
If the Delete_Rows qualifier is specified, the default for this
qualifier is Flush=On_Commit. This default is necessary to allow
you to use the unload and record definition files to reload the
data if an unrecoverable error has occurred after the delete of
some of the unloaded rows has been committed.
If the Delete_Rows qualifier is not specified, the default is to
flush the record definition buffers only when the unload files
are closed.
More frequent flushing of the internal RMS buffers avoids the
possible loss of some unload file data if an error occurs and the
Noerror_Delete qualifier has been specified. However, additional
flushing of the RMS internal buffers to the unload file can cause
the RMU Unload command to take longer to complete.
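For example, the following command sketch flushes the RMS buffers
after each unload buffer is written, at some cost in elapsed time
(the file names are illustrative):
$ RMU/UNLOAD/FLUSH=BUFFER_END -
_$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL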
34.1.4.11 – Limit To
Limit_To=n
Nolimit_To
Limits the number of rows unloaded from a table or view. The
primary use of the Limit_To qualifier is to unload a data sample
for loading into test databases. The default is the Nolimit_To
qualifier.
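For example, the following command sketch unloads a 100-row
sample from the EMPLOYEES table for loading into a test database
(the row limit is illustrative):
$ RMU/UNLOAD/LIMIT_TO=100 MF_PERSONNEL EMPLOYEES SAMPLE.UNL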
34.1.4.12 – Optimize
Optimize={options}
Controls the query optimization of the RMU Unload command. You
must use one or more of the following options with the Optimize
qualifier:
o Conformance={Optional|Mandatory}
This option accepts two keywords, Optional or Mandatory, which
can be used to override the settings in the specified query
outline.
If the matching query outline is invalid, the
Conformance=Mandatory option causes the query compilation, and
hence the RMU Unload operation, to stop. The query outline used
is the one that matches either the string provided by the
Using_Outline or Name_As option or the query identification.
The default behavior is to use the setting within the query
outline. If no query outline is found, or query outline usage
is disabled, then this option is ignored.
o Fast_First
This option asks the query optimizer to favor strategies that
return the first rows quickly, possibly at the expense of
longer overall retrieval time. This option does not override
the setting if any query outline is used.
This option cannot be specified at the same time as the Total_
Time option.
NOTE
Oracle Corporation does not recommend this optimization
option for the RMU Unload process. It is provided only
for backward compatibility with prior Rdb releases when
it was the default behavior.
o Name_As=query_name
This option supplies the name of the query. It is used to
annotate output from the Rdb debug flags (enabled using the
logical RDMS$SET_FLAGS) and is also logged by Oracle TRACE.
If the Using_Outline option is not used, this name is also
used as the query outline name.
o Selectivity=selectivity-value
This option allows you to influence the Oracle Rdb query
optimizer to use different selectivity values.
The Selectivity option accepts the following keywords:
- Aggressive - assumes a smaller number of rows is selected
compared to the default Oracle Rdb selectivity
- Sampled - uses literals in the query to perform preliminary
estimation on indices
- Default - uses default selectivity rules
The following example shows a use of the Selectivity option:
$RMU/UNLOAD/OPTIMIZE=(TOTAL_TIME,SELECTIVITY=SAMPLED) -
_$ SALES_DB CUSTOMER_TOP10 TOP10.UNL
This option is most useful when the RMU Unload command
references a view definition with a complex predicate.
o Sequential_Access
This option requests that index access be disabled for this
query. This is particularly useful for RMU Unload from views
against strictly partitioned tables. Strict partitioning is
enabled by the PARTITIONING IS NOT UPDATABLE clause on the
CREATE or ALTER STORAGE MAP statements. Retrieval queries
only use this type of partition optimization during sequential
table access.
This option cannot be specified at the same time as the Using_
Outline option.
o Total_Time
This option requests that total time optimization be applied
to the unload query. It does not override the setting if any
query outline is used.
In some cases, total time optimization may improve performance
of the RMU Unload command when the query optimizer favors
overall performance instead of faster retrieval of the first
row. Since the RMU Unload process is unloading the entire set,
there is no need to require fast delivery of the first few
rows.
This option may not be specified at the same time as the Fast_
First option. The Optimize=Total_Time behavior is the default
behavior for the RMU Unload command if the Optimize qualifier
is not specified.
o Using_Outline=outline_name
This option supplies the name of the query outline to be
used by the RMU Unload command. If the query outline does
not exist, the name is ignored.
This option may not be specified at the same time as the
Sequential_Access option.
34.1.4.13 – Record Definition
Record_Definition=(File=name[,options])
Record_Definition=(Path=name[,options])
Record_Definition=Nofile
Creates an RMS file containing the record structure definition
for the output file. The record description uses the CDO record
and field definition format. The default file extension is .rrd.
If you omit the File=name or Path=name option, you must specify
at least one option.
The date-time syntax in .rrd files generated by this qualifier
changed in Oracle Rdb V6.0 to make the .rrd file compatible with
the date-time syntax support for Oracle CDD/Repository V6.1. The
RMU Unload command accepts both the date-time syntax generated
by the Record_Definition qualifier in previous versions of Oracle
Rdb and the syntax generated in Oracle Rdb V6.0 and later.
See the help entry for RRD_File_Syntax for more information on
.rrd files and details on the date-time syntax generated by this
qualifier.
The options are:
o Format=(Text)
If you specify the Format=(Text) option, Oracle RMU converts
all data to printable text before unloading it.
o Format=Control
The Format=Control option provides support for SQL*Loader
control files and portable data files. The output file
defaults to type .CTL.
FORMAT=CONTROL implicitly unloads the data in a portable text
format rather than as binary values. The unloaded data file is
similar to that generated by FORMAT=TEXT but includes a NULL
vector to represent NULL values ('1') and non-NULL values ('0').
The SQL*Loader control file uses this NULL vector to set NULL
for the data upon loading.
When FORMAT=CONTROL is used, the output control file and
associated data file are intended to be used with the Oracle
RDBMS SQL*Loader (sqlldr) command to load the data into an
Oracle RDBMS database table. LIST OF BYTE VARYING (SEGMENTED
STRING) columns are not unloaded.
The keywords NULL, PREFIX, SEPARATOR, SUFFIX, and TERMINATOR
only apply to DELIMITED_TEXT format and may not be used in
conjunction with the CONTROL keyword.
DATE VMS data is unloaded including the fractional seconds
precision. However, when mapped to Oracle DATE type in the
control file, the fractional seconds value is ignored. It
is possible to modify the generated control file to use the
TIMESTAMP type and add FF to the date edit mask.
NOTE
The RMU Load command does not support loading data using
FORMAT=Control.
o Format=XML
The Format=XML option causes the output Record_Definition file
type to default to .DTD (Document Type Definition). The output
file defaults to type .XML. The contents of the data file are
in XML format suitable for processing with a Web browser or
XML application.
If you use the Nofile option or do not specify the File or
Path keyword, the DTD is included in the XML output file
(internal DTD). If you specify a name with the File or Path
keyword to identify an output file, the file is referenced as
an external DTD from within the XML file.
The XML file contains a single table that has the name of the
database and multiple rows named <RMU_ROW>. Each row contains
the values for each column in printable text. If a value is
NULL, then the tag <NULL/> is displayed. Example 16 shows this
behavior.
NOTE
The RMU Load command does not support loading data using
FORMAT=XML.
o Format=(Delimited_Text [,delimiter-options])
If you specify the Format=Delimited_Text option, Oracle RMU
applies delimiters to all data before unloading it.
Note that DATE VMS dates are output in the collatable time
format, which is yyyymmddhhmmsscc. For example, March 20, 1993
is output as: 1993032000000000.
If the Format option is not used, Oracle RMU outputs data to
a fixed-length binary flat file. If the Format=Delimited_Text
option is not used, VARCHAR(n) strings are padded with blanks
when the specified string has fewer characters than n so that
the resulting string is n characters long.
Delimiter options (and their default values if you do not
specify delimiter options) are:
- Prefix=string
Specifies a prefix string that begins any column value in
the ASCII output file. If you omit this option, the column
prefix will be a quotation mark (").
- Separator=string
Specifies a string that separates column values of a row.
If you omit this option, the column separator will be a
single comma (,).
- Suffix=string
Specifies a suffix string that ends any column value in
the ASCII output file. If you omit this option, the column
suffix will be a quotation mark (").
- Terminator=string
Specifies the row terminator that completes all the column
values corresponding to a row. If you omit this option, the
row terminator will be the end of the line.
- Null=string
Specifies a string, which when found in the database
column, is unloaded as NULL in the output file.
The Null option can be specified on the command line as any
one of the following:
* A quoted string
* An empty set of double quotes ("")
* No string
The string that represents the null character must be
quoted on the Oracle RMU command line. You cannot specify a
blank space or spaces as the null character. You cannot use
the same character for the Null value and other Delimited_
Text options.
NOTE
The values of each of the strings specified in the
delimiter options must be enclosed within quotation
marks. Oracle RMU strips these quotation marks while
interpreting the values. If you want to specify a
quotation mark (") as a delimiter, specify a string
of four quotation marks. Oracle RMU interprets four
quotation marks as your request to use one quotation
mark as a delimiter. For example, Suffix = """".
Oracle RMU reads these quotation marks as follows:
o The first quotation mark is stripped from the string.
o The second and third quotation mark are interpreted
as your request for one quotation mark (") as a
delimiter.
o The fourth quotation mark is stripped.
This results in one quotation mark being used as a
delimiter.
Furthermore, if you want to specify a quotation mark as
part of the delimited string, you must use two quotation
marks for each quotation mark that you want to appear in
the string. For example, Suffix = "**""**" causes Oracle
RMU to use a delimiter of **"**.
o Trim=option
If you specify the Trim=option keyword, leading and/or
trailing spaces are removed from each output field. The
option supports three keywords:
- TRAILING - trailing spaces will be trimmed from CHARACTER
and CHARACTER VARYING (VARCHAR) data that is unloaded.
This is the default setting if only the TRIM option is
specified.
- LEADING - leading spaces will be trimmed from CHARACTER and
CHARACTER VARYING (VARCHAR) data that is unloaded.
- BOTH - both leading and trailing spaces will be trimmed.
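For example, the following command sketch unloads delimited text
with both leading and trailing spaces trimmed (the file names are
illustrative):
$ RMU/UNLOAD -
_$ /RECORD_DEFINITION=(FILE=NAMES, FORMAT=DELIMITED_TEXT, TRIM=BOTH) -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL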
When the Record_Definition qualifier is used with load or unload
operations, and the Null option to the Delimited_Text option
is not specified, any null values stored in the rows of the
tables being loaded or unloaded are not preserved. Therefore,
if you want to preserve null values stored in tables and you are
moving data within the database or between databases, specify the
Null option with Delimited_Text option of the Record_Definition
qualifier.
34.1.4.14 – Reopen Count
Reopen_Count=n
The Reopen_Count=n qualifier allows you to specify how many
records are written to an output file. The output file will
be re-created (that is, a new version of the file will be
created) when the record count reaches the specified value.
The Reopen_Count=n qualifier is only valid when used with the
Record_Definition or Rms_Record_Def qualifiers.
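For example, the following command sketch creates a new version
of the output file after every 10000 records are written (the
count and file names are illustrative):
$ RMU/UNLOAD/REOPEN_COUNT=10000 -
_$ /RECORD_DEFINITION=FILE=NAMES.RRD -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL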
34.1.4.15 – Rms Record Def
Rms_Record_Def=(File=name[,options])
Rms_Record_Def=(Path=name[,options])
Synonymous with the Record_Definition qualifier. See the
description of the Record_Definition qualifier.
34.1.4.16 – Row Count
Row_Count=n
Specifies that Oracle Rdb buffer multiple rows between the Oracle
Rdb server and the RMU Unload process. The default value for n
is 500 rows; however, this value should be adjusted based on
working set size and length of unloaded data. Increasing the row
count may reduce the CPU cost of the unload operation. For remote
databases, this may significantly reduce network traffic for
large volumes of data because the buffered data can be packaged
into larger network packets.
The minimum value you can specify for n is 1. The default row
count is the value specified for the Commit_Every qualifier or
500, whichever is smaller.
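For example, the following command sketch increases the buffer to
2000 rows to reduce per-row overhead during a large unload (the
value is illustrative):
$ RMU/UNLOAD/ROW_COUNT=2000 MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL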
34.1.4.17 – Statistics Interval
Statistics_Interval=seconds
Specifies that statistics are to be displayed at regular
intervals so that you can evaluate the progress of the unload
operation.
The displayed statistics include:
o Elapsed time
o CPU time
o Buffered I/O
o Direct I/O
o Page faults
o Number of records unloaded since the last transaction was
committed
o Number of records unloaded so far in the current transaction
If the Statistics_Interval qualifier is specified, the seconds
parameter is required. The minimum value is 1. If the unload
operation completes successfully before the first time interval
has passed, you receive only an informational message on the
number of records unloaded. If the unload operation is unsuccessful
before the first time interval has passed, you receive error
messages and statistics on the number of records unloaded.
At any time during the unload operation, you can press Ctrl/T to
display the current statistics.
34.1.4.18 – Transaction Type
Transaction_Type[=(transaction_mode,options,...)]
Allows you to specify the transaction mode, isolation level, and
wait behavior for transactions.
Use one of the following keywords to control the transaction
mode:
o Automatic
When Transaction_Type=Automatic is specified, the transaction
type depends on the current database settings for snapshots
(enabled, deferred, or disabled), transaction modes available
to this user, and the standby status of the database.
Automatic mode is the default.
o Read_Only
Starts a Read_Only transaction.
o Exclusive
Starts a Read_Write transaction and reserves the table for
Exclusive_Read.
o Protected
Starts a Read_Write transaction and reserves the table for
Protected_Read.
o Shared
Starts a Read_Write transaction and reserves the table for
Shared_Read.
Use one of the following options with the keyword Isolation_
Level=[option] to specify the transaction isolation level:
o Read_Committed
o Repeatable_Read
o Serializable. Serializable is the default setting.
Refer to the SET TRANSACTION statement in the Oracle Rdb SQL
Reference Manual for a complete description of the transaction
isolation levels.
Specify the wait setting by using one of the following keywords:
o Wait
Waits indefinitely for a locked resource to become available.
Wait is the default behavior.
o Wait=n
The value you supply for n is the transaction lock timeout
interval. When you supply this value, Oracle Rdb waits n
seconds before aborting the wait and the RMU Unload session.
Specifying a wait timeout interval of zero is equivalent to
specifying Nowait.
o Nowait
Does not wait for a locked resource to become available.
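For example, the following command sketch starts a shared
read/write transaction with read-committed isolation and does not
wait for locked resources (the option combination is
illustrative):
$ RMU/UNLOAD -
_$ /TRANSACTION_TYPE=(SHARED,ISOLATION_LEVEL=READ_COMMITTED,NOWAIT) -
_$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL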
34.1.4.19 – Virtual Fields
Virtual_Fields=([No]Automatic,[No]Computed_By)
Novirtual_Fields
The Virtual_Fields qualifier unloads any AUTOMATIC or COMPUTED
BY fields as real data. This qualifier permits the transfer of
computed values to another application. It also permits unloading
through a view that is a union of tables or that is composed
of columns from multiple tables. For example, if there are two
tables, EMPLOYEES and RETIRED_EMPLOYEES, the view ALL_EMPLOYEES
(a union of EMPLOYEES and RETIRED_EMPLOYEES tables) can be
unloaded.
The Novirtual_Fields qualifier is the default, which is
equivalent to the Virtual_Fields=(Noautomatic,Nocomputed_By)
qualifier.
If you specify the Virtual_Fields qualifier without a keyword,
all fields are unloaded, including COMPUTED BY and AUTOMATIC
table columns, and calculated VIEW columns.
If you specify the Virtual_Fields=(Automatic,Nocomputed_By)
qualifier or the Virtual_Fields=Nocomputed_By qualifier, data
is only unloaded from Automatic fields. If you specify the
Virtual_Fields=(Noautomatic,Computed_By) qualifier or the
Virtual_Fields=Noautomatic qualifier, data is only unloaded from
Computed_By fields.
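For example, the following command sketch unloads the
ALL_EMPLOYEES view described above, including all AUTOMATIC and
COMPUTED BY values as real data (the view and file names are
illustrative):
$ RMU/UNLOAD/VIRTUAL_FIELDS -
_$ MF_PERSONNEL ALL_EMPLOYEES ALL_EMPLOYEES.UNL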
34.1.5 – Usage Notes
o To use the RMU Unload command for a database, you must have
the RMU$UNLOAD privilege in the root file access control
list (ACL) for the database or the OpenVMS SYSPRV or BYPASS
privilege. You must also have the SQL SELECT privilege to the
table or view being unloaded.
o For tutorial information on the RMU Unload command, refer to
the Oracle Rdb Guide to Database Design and Definition.
o Detected asynchronous prefetch should be enabled to achieve
the best performance of this command. Beginning with Oracle
Rdb V7.0, by default, detected asynchronous prefetch is
enabled. You can determine the setting for your database by
issuing the RMU Dump command with the Header qualifier.
If detected asynchronous prefetch is disabled, and you do not
want to enable it for the database, you can enable it for your
Oracle RMU operations by defining the following logicals at
the process level:
$ DEFINE RDM$BIND_DAPF_ENABLED 1
$ DEFINE RDM$BIND_DAPF_DEPTH_BUF_CNT P1
P1 is a value between 10 and 20 percent of the user buffer
count.
o You can unload a table from a database structured under
one version of Oracle Rdb and load it into the same table
of a database structured under another version of Oracle
Rdb. For example, if you unload the EMPLOYEES table from
a mf_personnel database created under Oracle Rdb V6.0, you
can load the generated .unl file into an Oracle Rdb V7.0
database. Likewise, if you unload the EMPLOYEES table from
a mf_personnel database created under Oracle Rdb V7.0, you
can load the generated .unl file into an Oracle Rdb V6.1
database. This is true even for specially formatted binary
files (created with the RMU Unload command without the Record_
Definition qualifier). The earliest version into which you can
load a .unl file from another version is Oracle Rdb V6.0.
o The Fields qualifier can be used with indirect file
references. When you use the Fields qualifier with an indirect
file reference in the field list, the referenced file is
written to SYS$OUTPUT if you have used the DCL SET VERIFY
command. See the Indirect-Command-Files help entry for more
information.
o To view the contents of the specially structured .unl file
created by the RMU Unload command, use the RMU Dump Export
command.
o To preserve the null indicator in a load or unload operation,
use the Null option with the Record_Definition qualifier.
Using the Record_Definition qualifier without the Null option
replaces all null values with zeros; this can cause unexpected
results with computed-by columns.
o Oracle RMU does not allow you to unload a system table.
o The RMU Unload command recognizes character set information.
When you unload a table, RMU Unload transfers information
about the character set to the record definition file.
o When it creates the record definition file, the RMU Unload
command preserves any lowercase characters in table and column
names by allowing delimited identifiers. Delimited identifiers
are user-supplied names enclosed within quotation marks ("").
By default, RMU Unload changes any table or column (field)
names that you specify to uppercase. To preserve lowercase
characters, use delimited identifiers. That is, enclose the
names within quotation marks. In the following example, RMU
Unload preserves the uppercase and lowercase characters in
"Last_Name" and "Employees":
$ RMU/UNLOAD/FIELDS=("Last_name",FIRST_NAME) TEST "Employees" -
_$ TEST.UNL
NOTE
The data dictionary does not preserve the distinction
between uppercase and lowercase identifiers. If you use
delimited identifiers, you must be careful to ensure that
the record definition does not include objects with names
that are duplicates except for the case. For example,
the data dictionary considers the delimited identifiers
"Employee_ID" and "EMPLOYEE_ID" to be the same name.
o Oracle RMU does not support the multischema naming convention
and returns an error if you specify one. For example:
$ RMU/UNLOAD CORPORATE_DATA ADMINISTRATION.PERSONNEL.EMPLOYEES -
_$ OUTPUT.UNL
%RMU-E-OUTFILDEL, Fatal error, output file deleted
-RMU-F-RELNOTFND, Relation (ADMINISTRATION.PERSONNEL.EMPLOYEES) not found
When using a multischema database, you must specify the SQL
stored name for the database object.
For example, to find the stored name that corresponds to the
ADMINISTRATION.PERSONNEL.EMPLOYEES table in the corporate_data
database, issue an SQL SHOW TABLE command, as follows:
SQL> SHOW TABLE ADMINISTRATION.PERSONNEL.EMPLOYEES
Information for table ADMINISTRATION.PERSONNEL.EMPLOYEES
Stored name is EMPLOYEES
.
.
.
Then to unload the table, issue the following RMU Unload
command:
$ RMU/UNLOAD CORPORATE_DATA EMPLOYEES OUTPUT.UNL
o If the Transaction_Type qualifier is omitted, a Read_Only
transaction is started against the database. This behavior is
provided for backward compatibility with prior Rdb releases.
If the Transaction_Type qualifier is specified without a
transaction mode, the default value Automatic is used.
o If the database has snapshots disabled, Oracle Rdb defaults to
a READ WRITE ISOLATION LEVEL SERIALIZABLE transaction. Locking
may be reduced by specifying Transaction_Type=(Automatic), or
Transaction_Type=(Shared,Isolation_Level=Read_Committed).
o If you use a synonym to represent a table or a view, the RMU
Unload command translates the synonym to the base object
and processes the data as though the base table or view had
been named. This implies that the unload interchange files
(.UNL) or record definition files (.RRD) that contain the
table metadata will name the base table or view and not use
the synonym name. If the metadata is used against a different
database, you may need to use the Match_Name qualifier to
override this name during the RMU load process.
34.1.6 – Examples
Example 1
The following command unloads the EMPLOYEE_ID and LAST_NAME
column values from the EMPLOYEES table of the mf_personnel
database. The data is stored in names.unl.
$ RMU/UNLOAD -
_$ /FIELDS=(EMPLOYEE_ID, LAST_NAME) -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL
%RMU-I-DATRECUNL, 100 data records unloaded.
Example 2
The following command unloads the EMPLOYEES table from the
mf_personnel database and places the data in the RMS file,
names.unl. The names.rrd file contains the record structure
definitions for the data in names.unl.
$ RMU/UNLOAD/RECORD_DEFINITION=FILE=NAMES.RRD MF_PERSONNEL -
_$ EMPLOYEES NAMES.UNL
%RMU-I-DATRECUNL, 100 data records unloaded.
Example 3
The following command unloads the EMPLOYEE_ID and LAST_NAME
column values from the EMPLOYEES table of the mf_personnel
database and accepts the default values for delimiters, as shown
by viewing the names.unl file:
$ RMU/UNLOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME) -
-$ /RECORD_DEFINITION=(FILE=NAMES, FORMAT=DELIMITED_TEXT) -
-$ MF_PERSONNEL EMPLOYEES NAMES.UNL
%RMU-I-DATRECUNL, 100 data records unloaded.
$ !
$ ! TYPE the names.unl file to see the effect of the RMU Unload
$ ! command.
$ !
$ TYPE NAMES.UNL
"00164","Toliver "
"00165","Smith "
"00166","Dietrich "
"00167","Kilpatrick "
"00168","Nash "
.
.
.
Example 4
The following command unloads the EMPLOYEE_ID and LAST_NAME
column values from the EMPLOYEES table of the mf_personnel
database and specifies the asterisk (*) character as the string
to mark the beginning and end of each column (the prefix and
suffix string):
$ RMU/UNLOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME) -
_$ /RECORD_DEFINITION=(FILE=NAMES, -
_$ FORMAT=DELIMITED_TEXT, SUFFIX="*", -
_$ PREFIX="*") -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL
%RMU-I-DATRECUNL, 100 data records unloaded.
$ !
$ ! TYPE the names.unl file to see the effect of the RMU Unload
$ ! command.
$ !
$ TYPE NAMES.UNL
*00164*,*Toliver *
*00165*,*Smith *
*00166*,*Dietrich *
*00167*,*Kilpatrick *
*00168*,*Nash *
*00169*,*Gray *
*00170*,*Wood *
*00171*,*D'Amico *
.
.
.
Example 5
The following command unloads all column values from the
EMPLOYEES table of the mf_personnel database, and specifies the
Format=Text option of the Record_Definition qualifier. Oracle RMU
will convert all the data to printable text, as can be seen by
viewing the text_output.unl file:
$ RMU/UNLOAD/RECORD_DEFINITION=(FILE=TEXT_RECORD,FORMAT=TEXT) -
_$ MF_PERSONNEL EMPLOYEES TEXT_OUTPUT
%RMU-I-DATRECUNL, 100 data records unloaded.
$ !
$ ! TYPE the text_output.unl file to see the effect of the RMU Unload
$ ! command.
$ !
$ TYPE TEXT_OUTPUT.UNL
00164Toliver Alvin A146 Parnell Place
Chocorua NH03817M19470328000000001
00165Smith Terry D120 Tenby Dr.
Chocorua NH03817M19540515000000002
00166Dietrich Rick 19 Union Square
Boscawen NH03301M19540320000000001
.
.
.
Example 6
The following command unloads the EMPLOYEE_ID and LAST_NAME
column values from the EMPLOYEES table of the mf_personnel
database and requests that statistics be displayed on the
terminal at 2-second intervals:
$ RMU/UNLOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME) -
_$ /STATISTICS_INTERVAL=2 -
_$ MF_PERSONNEL EMPLOYEES NAMES.UNL
Example 7
The following example unloads a subset of data from the EMPLOYEES
table, using the following steps:
1. Create a temporary view on the EMPLOYEES table that includes
only employees who live in Massachusetts.
2. Use an RMU Unload command to unload the data from this view.
3. Delete the temporary view.
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> CREATE VIEW MA_EMPLOYEES
cont> (EMPLOYEE_ID,
cont> LAST_NAME,
cont> FIRST_NAME,
cont> MIDDLE_INITIAL,
cont> STATE,
cont> STATUS_CODE)
cont> AS SELECT
cont> E.EMPLOYEE_ID,
cont> E.LAST_NAME,
cont> E.FIRST_NAME,
cont> E.MIDDLE_INITIAL,
cont> E.STATE,
cont> E.STATUS_CODE
cont> FROM EMPLOYEES E
cont> WHERE E.STATE='MA';
SQL> COMMIT;
SQL> EXIT;
$ RMU/UNLOAD/RECORD_DEFINITION=(FILE=MA_EMPLOYEES,FORMAT=DELIMITED_TEXT) -
_$ MF_PERSONNEL MA_EMPLOYEES MA_EMPLOYEES.UNL
%RMU-I-DATRECUNL, 9 data records unloaded.
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> DROP VIEW MA_EMPLOYEES;
SQL> COMMIT;
Example 8
The following example shows that null values in blank columns
are not preserved unless the Null option is specified with the
Delimited_Text option of the Record_Definition qualifier:
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> --
SQL> -- Create the NULL_DATE table:
SQL> CREATE TABLE NULL_DATE
cont> (COL1 VARCHAR(5),
cont> DATE1 DATE,
cont> COL2 VARCHAR(5));
SQL> --
SQL> -- Store a row that does not include a value for the DATE1
SQL> -- column of the NULL_DATE table:
SQL> INSERT INTO NULL_DATE
cont> (COL1, COL2)
cont> VALUES ('first','last');
1 row inserted
SQL> --
SQL> COMMIT;
SQL> --
SQL> -- The previous SQL INSERT statement causes a null value to
SQL> -- be stored in NULL_DATE:
SQL> SELECT * FROM NULL_DATE;
COL1 DATE1 COL2
first NULL last
1 row selected
SQL> --
SQL> DISCONNECT DEFAULT;
SQL> EXIT;
$ !
$ ! In the following RMU Unload command, the Record_Definition
$ ! qualifier is used to unload the row with the NULL value, but
$ ! the Null option is not specified:
$ RMU/UNLOAD/RECORD_DEFINITION=(FILE=NULL_DATE,FORMAT=DELIMITED_TEXT) -
_$ MF_PERSONNEL NULL_DATE NULL_DATE
%RMU-I-DATRECUNL, 1 data records unloaded.
$ !
$ ! The null_date.unl file created by the previous unload
$ ! operation does not preserve the NULL value in the DATE1 column.
$ ! Instead, the Oracle Rdb default date value is used:
$ TYPE NULL_DATE.UNL
"first","1858111700000000","last"
$ !
$ ! This time, unload the row in NULL_DATE with the Null option to
$ ! the Record_Definition qualifier:
$ RMU/UNLOAD MF_PERSONNEL NULL_DATE NULL_DATE -
_$ /RECORD_DEFINITION=(FILE=NULL_DATE.RRD, FORMAT=DELIMITED_TEXT, NULL="*")
%RMU-I-DATRECUNL, 1 data records unloaded.
$ !
$ TYPE NULL_DATE.UNL
"first",*,"last "
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> --
SQL> -- Delete the existing row from NULL_DATE:
SQL> DELETE FROM NULL_DATE;
1 row deleted
SQL> --
SQL> COMMIT;
SQL> EXIT;
$ !
$ ! Load the row that was unloaded back into the table,
$ ! using the null_date.unl file created by the
$ ! previous RMU Unload command:
$ RMU/LOAD MF_PERSONNEL /RECORD_DEFINITION=(FILE=NULL_DATE.RRD, -
_$ FORMAT=DELIMITED_TEXT, NULL="*") NULL_DATE NULL_DATE
%RMU-I-DATRECREAD, 1 data records read from input file.
%RMU-I-DATRECSTO, 1 data records stored.
$ !
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> --
SQL> -- Display the row stored in NULL_DATE.
SQL> -- The NULL value stored in the data row
SQL> -- was preserved by the load and unload operations:
SQL> SELECT * FROM NULL_DATE;
COL1 DATE1 COL2
first NULL last
1 row selected
Example 9
The following example demonstrates the use of the Null="" option
of the Record_Definition qualifier to signal to Oracle RMU that
any data that is an empty string in the .unl file (as represented
by two commas with no space separating them) should have the
corresponding column in the database flagged as NULL.
The first part of this example shows the contents of the .unl
file and the RMU Load command used to load the .unl file. The
terminator for each record in the .unl file is the number sign
(#). The second part of this example unloads the data and
specifies that any columns that are flagged as NULL should be
represented in the output file with an asterisk.
"90021","ABUSHAKRA","CAROLINE","A","5 CIRCLE STREET",,
"CHELMSFORD", "MA", "02184", "1960061400000000"#
"90015","BRADFORD","LEO","B","4 PLACE STREET",, "NASHUA","NH",
"03030", "1949051800000000"#
$ !
$ RMU/LOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, -
_$ MIDDLE_INITIAL, ADDRESS_DATA_1, ADDRESS_DATA_2, -
_$ CITY, STATE, POSTAL_CODE, BIRTHDAY) -
_$ /RECORD_DEFINITION=(FILE= EMPLOYEES.RRD, -
_$ FORMAT=DELIMITED_TEXT, -
_$ TERMINATOR="#", -
_$ NULL="") -
_$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL
%RMU-I-DATRECREAD, 2 data records read from input file.
%RMU-I-DATRECSTO, 2 data records stored.
$ !
$ ! Unload this data first without specifying the Null option:
$ RMU/UNLOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, -
_$ MIDDLE_INITIAL, ADDRESS_DATA_1, ADDRESS_DATA_2, -
_$ CITY, STATE, POSTAL_CODE, BIRTHDAY) -
_$ /RECORD_DEFINITION=(FILE= EMPLOYEES.RRD, -
_$ FORMAT=DELIMITED_TEXT, -
_$ TERMINATOR="#") -
_$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL
%RMU-I-DATRECUNL, 102 data records unloaded.
$ !
$ ! The ADDRESS_DATA_2 field appears as a quoted string:
$ TYPE EMPLOYEES.UNL
.
.
.
"90021","ABUSHAKRA ","CAROLINE ","A","5 CIRCLE STREET ","
","CHELMSFORD ","MA","02184","1960061400000000"#
$ !
$ ! Now unload the data with the Null option specified:
$ RMU/UNLOAD/FIELDS=(EMPLOYEE_ID, LAST_NAME, FIRST_NAME, -
_$ MIDDLE_INITIAL, ADDRESS_DATA_1, ADDRESS_DATA_2, -
_$ CITY, STATE, POSTAL_CODE, BIRTHDAY) -
_$ /RECORD_DEFINITION=(FILE= EMPLOYEES.RRD, -
_$ FORMAT=DELIMITED_TEXT, -
_$ TERMINATOR="#", -
_$ NULL="*") -
_$ MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL
%RMU-I-DATRECUNL, 102 data records unloaded.
$ !
$ ! The value for ADDRESS_DATA_2 appears as an asterisk:
$ !
$ TYPE EMPLOYEES.UNL
.
.
.
"90021","ABUSHAKRA ","CAROLINE ","A","5 CIRCLE STREET ",*,
"CHELMSFORD ","MA","02184","1960061400000000"#
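The delimited-text conventions shown in Examples 8 and 9 can be modeled with a short, illustrative Python routine (not part of Oracle RMU) that shows how a consumer of the .unl file might distinguish the Null marker from quoted data. The default prefix, suffix, and separator values mirror the Record_Definition defaults; the naive split is a simplification and does not handle separators embedded inside quoted values.

```python
def parse_unl_line(line, prefix='"', suffix='"', separator=',', null_marker='*'):
    """Split one DELIMITED_TEXT record into column values.

    Values wrapped in the prefix/suffix strings are literal data; a bare
    null_marker token between separators stands for SQL NULL.
    Illustrative sketch only -- not RMU source code.
    """
    values = []
    for token in line.split(separator):
        if token == null_marker:
            values.append(None)                 # column flagged as NULL
        elif token.startswith(prefix) and token.endswith(suffix):
            values.append(token[len(prefix):-len(suffix)])
        else:
            values.append(token)
    return values

# The row unloaded with NULL="*" in Example 8:
print(parse_unl_line('"first",*,"last "'))      # ['first', None, 'last ']
```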
Example 10
The following example specifies a transaction for the RMU Unload
command equivalent to the SQL command SET TRANSACTION READ WRITE
WAIT 36 RESERVING table1 FOR SHARED READ;
$ RMU/UNLOAD-
/TRANSACTION_TYPE=(SHARED,ISOLATION=REPEAT,WAIT=36)-
SAMPLE.RDB-
TABLE1-
TABLE.DAT
Example 11
The following example specifies the options that were the default
transaction style in prior releases.
$ RMU/UNLOAD-
/TRANSACTION_TYPE=(READ_ONLY,ISOLATION_LEVEL=SERIALIZABLE)-
SAMPLE.RDB-
TABLE1-
TABLE1.DAT
Example 12
If the database currently has snapshots deferred, it may be more
efficient to start a read-write transaction with isolation level
read committed. This allows the transaction to start immediately
(a read-only transaction may stall), and the selected isolation
level keeps row locking to a minimum.
$ RMU/UNLOAD-
/TRANSACTION_TYPE=(SHARED_READ,ISOLATION=READ_COMMITTED)-
SAMPLE.RDB-
TABLE1-
TABLE1.DAT
Using a transaction type of Automatic allows Oracle RMU to adapt
to different database settings.
$ RMU/UNLOAD-
/TRANSACTION_TYPE=(AUTOMATIC)-
SAMPLE.RDB-
TABLE1-
TABLE1.DAT
Example 13
The following example shows the output from the STRATEGY and
ITEM_LIST flags, which indicates that the Optimize qualifier
requested sequential access and that Total_Time is used as the
default optimizer preference.
$ DEFINE RDMS$SET_FLAGS "STRATEGY,ITEM_LIST"
$ RMU/UNLOAD/OPTIMIZE=SEQUENTIAL_ACCESS PERSONNEL EMPLOYEES E.DAT
.
.
.
~H Request Information Item List: (len=11)
0000 (00000) RDB$K_SET_REQ_OPT_PREF "0"
0005 (00005) RDB$K_SET_REQ_OPT_SEQ "1"
000A (00010) RDB$K_INFO_END
Get Retrieval sequentially of relation EMPLOYEES
%RMU-I-DATRECUNL, 100 data records unloaded.
Example 14
AUTOMATIC columns are evaluated during INSERT and UPDATE
operations for a table; for instance, they may record the
timestamp for the last operation. If the table is being
reorganized, it may be necessary to unload the data and reload it
after the storage map and indexes for the table are re-created,
yet the old auditing data must remain the same.
Normally, the RMU Unload command does not unload columns marked
as AUTOMATIC; you must use the Virtual_Fields qualifier with the
keyword Automatic to request this action.
$ rmu/unload/virtual_fields=(automatic) payroll_db people people.unl
Following the restructure of the database, the data can be
reloaded. If the target columns are also defined as AUTOMATIC,
then the RMU Load process will not write to those columns. You
must use the Virtual_Fields qualifier with the keyword Automatic
to request this action.
$ rmu/load/virtual_fields=(automatic) payroll_db people people.unl
Example 15
This example shows the action of the Delete_Rows qualifier.
First, SQL is used to display the count of the rows in the table.
The contents of the file PEOPLE.COLUMNS are echoed (written to
SYS$OUTPUT) by the RMU Unload command.
$ define sql$database db$:scratch
$ sql$ select count (*) from people;
100
1 row selected
$ rmu/unload/fields="@people.columns" -
sql$database -
/record_definition=(file:people,format:delimited) -
/delete_rows -
people -
people2.dat
EMPLOYEE_ID
LAST_NAME
FIRST_NAME
MIDDLE_INITIAL
SEX
BIRTHDAY
%RMU-I-DATRECERA, 100 data records erased.
%RMU-I-DATRECUNL, 100 data records unloaded.
A subsequent query shows that the rows have been deleted.
$ sql$ select count (*) from people;
0
1 row selected
Example 16
The following example shows the output from the RMU Unload
command options for XML support. The two files shown in the
example are created by this RMU Unload command:
$ rmu/unload -
/record_def=(format=xml,file=work_status) -
mf_personnel -
work_status -
work_status.xml
Output WORK_STATUS.DTD file
<?xml version="1.0"?>
<!-- RMU Unload for Oracle Rdb V7.1-00 -->
<!-- Generated: 16-MAR-2001 22:26:47.30 -->
<!ELEMENT WORK_STATUS (RMU_ROW*)>
<!ELEMENT RMU_ROW (
STATUS_CODE,
STATUS_NAME,
STATUS_TYPE
)>
<!ELEMENT STATUS_CODE (#PCDATA)>
<!ELEMENT STATUS_NAME (#PCDATA)>
<!ELEMENT STATUS_TYPE (#PCDATA)>
<!ELEMENT NULL (EMPTY)>
Output WORK_STATUS.XML file
<?xml version="1.0"?>
<!-- RMU Unload for Oracle Rdb V7.1-00 -->
<!-- Generated: 16-MAR-2001 22:26:47.85 -->
<!DOCTYPE WORK_STATUS SYSTEM "work_status.dtd">
<WORK_STATUS>
<RMU_ROW>
<STATUS_CODE>0</STATUS_CODE>
<STATUS_NAME>INACTIVE</STATUS_NAME>
<STATUS_TYPE>RECORD EXPIRED</STATUS_TYPE>
</RMU_ROW>
<RMU_ROW>
<STATUS_CODE>1</STATUS_CODE>
<STATUS_NAME>ACTIVE </STATUS_NAME>
<STATUS_TYPE>FULL TIME </STATUS_TYPE>
</RMU_ROW>
<RMU_ROW>
<STATUS_CODE>2</STATUS_CODE>
<STATUS_NAME>ACTIVE </STATUS_NAME>
<STATUS_TYPE>PART TIME </STATUS_TYPE>
</RMU_ROW>
</WORK_STATUS>
<!-- 3 rows unloaded -->
Example 17
The following example shows that if the Flush=On_Commit qualifier
is specified, the value for the Commit_Every qualifier must be
equal to or a multiple of the Row_Count value so the commits
of unload transactions occur after the internal RMS buffers are
flushed to the unload file. This prevents loss of data if an
error occurs.
$RMU/UNLOAD/ROW_COUNT=5/COMMIT_EVERY=2/FLUSH=ON_COMMIT MF_PERSONNEL -
_$ EMPLOYEES EMPLOYEES
%RMU-F-DELROWCOM, For DELETE_ROWS or FLUSH=ON_COMMIT the COMMIT_EVERY value must
equal or be a multiple of the ROW_COUNT value.
The COMMIT_EVERY value of 2 is not equal to or a multiple of the ROW_COUNT value
of 5.
%RMU-F-FTL_UNL, Fatal error for UNLOAD operation at 27-Oct-2005 08:55:14.06
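The rule that the error message describes can be stated as a small, illustrative predicate (a Python sketch of the documented check, not RMU source code): Commit_Every must be equal to or a whole multiple of Row_Count, so that a commit never falls in the middle of an unflushed buffer.

```python
def commit_every_is_valid(row_count, commit_every):
    """Model the documented DELETE_ROWS / FLUSH=ON_COMMIT constraint:
    COMMIT_EVERY must equal or be a multiple of ROW_COUNT so that
    commits occur only after whole RMS buffers have been flushed."""
    return commit_every >= row_count and commit_every % row_count == 0

# The failing combination from the example above:
print(commit_every_is_valid(5, 2))      # False -> %RMU-F-DELROWCOM
print(commit_every_is_valid(5, 10))     # True
```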
Example 18
The following examples show that the unload file and record
definition files are not deleted on error if the Noerror_Delete
qualifier is specified and that these files are deleted on error
if the Error_Delete qualifier is specified. If the unload file is
empty when the error occurs, it will be deleted.
$RMU/UNLOAD/NOERROR_DELETE/ROW_COUNT=50/COMMIT_EVERY=50 MF_PERSONNEL -
_$ EMPLOYEES EMPLOYEES.UNL
%RMU-E-OUTFILNOTDEL, Fatal error, the output file is not deleted but may not
be useable,
50 records have been unloaded.
-COSI-F-WRITERR, write error
-RMS-F-FUL, device full (insufficient space for allocation)
$RMU/UNLOAD/ERROR_DELETE/ROW_COUNT=50/COMMIT_EVERY=50 MF_PERSONNEL -
_$ EMPLOYEES EMPLOYEES.UNL
%RMU-E-OUTFILDEL, Fatal error, output file deleted
-COSI-F-WRITERR, write error
-RMS-F-FUL, device full (insufficient space for allocation)
Example 19
The following example shows the FORMAT=CONTROL option. This
command creates the file EMP.CTL (the SQL*Loader control file)
and unloads the data to EMPLOYEES.DAT in a portable format
suitable for loading with SQL*Loader.
$ RMU/UNLOAD/RECORD_DEFINITION=(FORMAT=CONTROL,FILE=EMP) -
SQL$DATABASE -
EMPLOYEES -
EMPLOYEES
Example 20
The following shows an example of using the COMPRESSION qualifier
with the RMU Unload command.
$ RMU/UNLOAD/COMPRESS=LZW/DEBUG=TRACE COMPLETE_WORKS COMPLETE_WORKS -
_$ COMPLETE_WORKS
Debug = TRACE
Compression = LZW
* Synonyms are not enabled
Unloading Blob columns.
Row_Count = 500
Message buffer: Len: 54524
Message buffer: Sze: 109, Cnt: 500, Use: 31 Flg: 00000000
** compress data: input 2700 output 981 deflate 64%
** compress TEXT_VERSION : input 4454499 output 1892097 deflate 58%
** compress PDF_VERSION : input 274975 output 317560 deflate -15%
%RMU-I-DATRECUNL, 30 data records unloaded.
Example 21
The following shows an example of using the COMPRESSION qualifier
with RMU Unload and using the EXCLUDE_LIST option to avoid
attempting to compress data that does not compress.
$ RMU/UNLOAD/COMPRESS=(LZW,EXCLUDE_LIST:PDF_VERSION)/DEBUG=TRACE -
_$ COMPLETE_WORKS COMPLETE_WORKS COMPLETE_WORKS
Debug = TRACE
Compression = LZW
Exclude_List:
Exclude column PDF_VERSION
* Synonyms are not enabled
Unloading Blob columns.
Row_Count = 500
Message buffer: Len: 54524
Message buffer: Sze: 109, Cnt: 500, Use: 31 Flg: 00000000
** compress data: input 2700 output 981 deflate 64%
** compress TEXT_VERSION : input 4454499 output 1892097 deflate 58%
%RMU-I-DATRECUNL, 30 data records unloaded.
34.2 – After Journal
Allows you to extract added, modified, and deleted record
contents from committed transactions for specified tables in one
or more after-image journal files.
34.2.1 – Description
The RMU Unload After_Journal command translates the binary data
record contents of an after-image journal (.aij) file into an
output file. Data records for the specified tables for committed
transactions are extracted to an output stream (file, device,
or application callback) in the order that the transactions were
committed.
Before you use the RMU Unload After_Journal command, you must
enable the database for LogMiner extraction. Use the RMU Set
Logminer command to enable the LogMiner for Rdb feature for the
database. Before you use the RMU Unload After_Journal command
with the Continuous qualifier, you must enable the database for
Continuous LogMiner extraction. See the Set Logminer help topic
for more information.
Data records extracted from the .aij file are those records that
transactions added, modified, or deleted in base database tables.
Index nodes, database metadata, segmented strings (BLOB), views,
COMPUTED BY columns, system relations, and temporary tables
cannot be unloaded from after-image journal files.
For each transaction, only the final content of a record
is extracted. Multiple changes to a single record within a
transaction are condensed so that only the last revision of the
record appears in the output stream. You cannot determine which
columns were changed in a data record directly from the after-
image journal file. In order to determine which columns were
changed, you must compare the record in the after-image journal
file with a previous record.
The database used to create the after-image journal files being
extracted must be available during the RMU Unload After_Journal
command execution. The database is used to obtain metadata
information (such as table names, column counts, record version,
and record compression) needed to extract data records from the
.aij file. The database is read solely to load the metadata
and is then detached. Database metadata information can also
be saved and used in a later session. See the Save_MetaData and
Restore_MetaData qualifiers for more information.
If you use the Continuous qualifier, the database must be opened
on the node where the Continuous LogMiner process is running. The
database is always used and must be available for both metadata
information and for access to the online after-image journal
files. The Save_MetaData and Restore_MetaData qualifiers are not
permitted with the Continuous qualifier.
When one or more .aij files and the Continuous qualifier are
both specified on the RMU Unload After_Journal command line,
it is important that no .aij backup operations occur until the
Continuous LogMiner process has transitioned to online mode
(where the active online .aij files are being extracted). If you
are using automatic .aij backups and wish to use the Continuous
LogMiner feature, Oracle recommends that you consider disabling
the automatic backup feature (ABS) and use manual .aij backups
so that you can explicitly control when .aij backup operations
occur.
The after-image journal file or files are processed sequentially.
All specified tables are extracted in one pass through the
after-image journal file.
As each transaction commit record is processed, all modified and
deleted records for the specified tables are sorted to remove
duplicates. The modified and deleted records are then written
to the output streams. Transactions that were rolled back are
ignored. Data records for tables that are not being extracted are
ignored. The actual order of output records within a transaction
is not predictable.
In the extracted output, records that were modified or added are
returned as being modified. It is not possible to distinguish
between inserted and updated records in the output stream.
Deleted (erased) records are returned as being deleted. A
transaction that modifies and deletes a record generates only
a deleted record. A transaction that adds a new record to
the database and then deletes it within the same transaction
generates only a deleted record.
The LogMiner process signals that a row has been deleted by
placing a D in the RDB$LM_ACTION field. The contents of the
row at the instant before the delete operation are recorded
in the user fields of the output record. If a row was modified
several times within a transaction before being deleted, the
output record contains only the delete indicator and the results
of the last modify operation. If a row is inserted and deleted
in the same transaction, only the delete record appears in the
output.
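The condensing rules described above can be modeled in an illustrative Python sketch (a model of the documented behavior, not RMU source code): within one committed transaction, only the final state of each record survives, inserts and updates both surface as "M", and any record that ends deleted yields a single "D" action carrying the last pre-delete contents.

```python
def condense(transaction_changes):
    """Condense one committed transaction's changes the way LogMiner
    does. Input: (dbkey, action, row) tuples in order, where action is
    'M' (insert or modify) or 'D' (delete). Output: dbkey -> (action,
    row), with at most one record per dbkey."""
    final = {}
    for dbkey, action, row in transaction_changes:
        if action == 'D':
            prev = final.get(dbkey)
            # the output 'D' record carries the contents at the
            # instant before the delete operation
            final[dbkey] = ('D', prev[1] if prev else row)
        else:
            final[dbkey] = ('M', row)       # insert and update look alike
    return final

# Modified twice then deleted -> a single 'D' with the last contents:
print(condense([(1, 'M', 'v1'), (1, 'M', 'v2'), (1, 'D', None),
                (2, 'M', 'v9')]))
```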
Records from multiple tables can be output to the same or to
different destination streams. Possible output destination
streams include the following:
o File
o OpenVMS Mailbox
o OpenVMS Pipe
o Direct callback to an application through a run-time activated
shareable image
Refer to the Using_LogMiner_for_Rdb help topic for more
information about using the LogMiner for Rdb feature.
34.2.2 – Format
RMU/Unload/After_Journal root-file-spec aij-file-name
Command Qualifiers x Defaults
x
/Before=date-time x None
/Continuous x /NoContinuous
/Extend_Size=integer x /Extend_Size=1000
/Format=options x See description
/Ignore=Old_Version[=table-list] x /Ignore=Old_Version=all
/Include=Action=(include-type) x /Include=Action=
x (NoCommit,Modify,Delete)
/IO_Buffers=integer x /IO_Buffers=2
/[No]Log x Current DCL verify value
/Options=options-list x See description
/Order_AIJ_files x /NoOrder_aij_files
/Output=file-spec x /Output=SYS$OUTPUT
/Parameter=character-strings x None
/Quick_Sort_Limit=integer x /Quick_Sort_Limit=5000
/Restart=(restart-point) x None
/Restore_Metadata=file-spec x None
/Save_Metadata=file-spec x None
/Select=selection-type x /Select=Commit_Transaction
/Since=date-time x None
/Sort_Workfiles=integer x /Sort_Workfiles=2
/Statistics_Interval=integer x See description
/[No]Symbols x /Symbols
/Table=(Name=table-name, x See description
[table-options ...]) x None
/[No]Trace x /Notrace
34.2.3 – Parameters
34.2.3.1 – root-file-spec
The root file specification of the database for the after-image
journal file from which tables will be unloaded. The default file
extension is .rdb.
The database must be the same actual database that was used to
create the after-image journal files. The database is required
so that the table metadata (information about data) is available
to the RMU Unload After_Journal command. In particular, the names
and relation identification of valid tables within the database
are required along with the number of columns in the table and
the compression information for the table in various storage
areas.
The RMU Unload After_Journal process attaches to the database
briefly at the beginning of the extraction operation in order to
read the metadata. Once the metadata has been read, the process
disconnects from the database for the remainder of the operation
unless the Continuous qualifier is specified. The Continuous
qualifier indicates that the extraction operation is to run non-
stop, and the process remains attached to the database.
34.2.3.2 – aij-file-name
One or more input after-image journal backup files to be used
as the source of the extraction operation. Multiple journal
files can be extracted by specifying a comma-separated list
of file specifications. Oracle RMU supports OpenVMS wildcard
specifications (using the * and % characters) to extract a
group of files. A file specification beginning with the at
(@) character refers to an options file containing a list of
after-image journal files (rather than the file specification
of an after-image journal itself). If you use the at character
syntax, you must enclose the at character and the file name in
double quotation marks (for example, specify aij-file-name as
"@files.opt"). The default file extension is .aij.
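The aij-file-name argument forms described above can be sketched in illustrative Python (an assumption for illustration, not RMU source code): a comma-separated list is split into items, and an item beginning with the at (@) character names an options file whose lines list journal files. Wildcard items would be handed to the file system and are not modeled here.

```python
def expand_aij_args(args):
    """Expand an RMU-style aij-file-name argument string into a list of
    journal file names. '@file' items are read line-by-line from the
    named options file; blank lines are ignored."""
    files = []
    for item in args.split(','):
        item = item.strip()
        if item.startswith('@'):
            with open(item[1:]) as f:
                files.extend(line.strip() for line in f if line.strip())
        else:
            files.append(item)
    return files

# Simple comma-separated list, no options file:
print(expand_aij_args('A1.AIJ, A2.AIJ'))    # ['A1.AIJ', 'A2.AIJ']
```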
34.2.4 – Command Qualifiers
34.2.4.1 – Before
Before=date-time
Specifies the ending time and date for transactions to be
extracted. Based on the Select qualifier, transactions that
committed or started prior to the specified Before date are
selected. Information changed due to transactions that committed
or started after the Before date is not included in the output.
34.2.4.2 – Continuous
Continuous
Nocontinuous
Causes the LogMiner process to attach to the database and begin
extracting records in "near-real" time. When the Continuous
qualifier is specified, the RMU Unload After_Journal command
extracts records from the online after-image journal files of the
database until it is stopped via an external source (for example,
Ctrl/y, STOP/ID, $FORCEX, or database shutdown).
A database must be explicitly enabled for the Continuous LogMiner
feature. To enable the Continuous LogMiner feature, use the RMU
Set Logminer command with the Enable and Continuous qualifiers;
to disable use of the Continuous LogMiner feature, use the RMU
Set Logminer command with the Enable and Nocontinuous qualifiers.
The output from the Continuous LogMiner process is a continuous
stream of information. The intended use of the Continuous
LogMiner feature is to write the changes into an OpenVMS
mailbox or pipe, or to call a user-supplied callback routine.
Writing output to a disk file is completely functional with the
Continuous LogMiner feature, however, no built-in functionality
exists to prevent the files from growing indefinitely.
It is important that the callback routine or processing of
the mailbox be very responsive. If the user-supplied callback
routine blocks, or if the mailbox is not being read fast enough
and fills, the RMU Unload After_Journal command will stall. The
Continuous LogMiner process prevents backing up the after-image
journal that it is currently extracting along with all subsequent
journals. If the Continuous LogMiner process is blocked from
executing for long enough, it is possible that all available
journals will fill and will not be backed up.
When a database is enabled for the Continuous LogMiner feature,
an AIJ "High Water" lock (AIJHWM) is utilized to help coordinate
and maintain the current .aij end-of-file location. The lock
value block for the AIJHWM lock contains the location of the
highest written .aij block. The RMU Unload After_Journal command
with the Continuous qualifier polls the AIJHWM lock to determine
if data has been written to the .aij file and to find the highest
written block. If a database is not enabled for the Continuous
LogMiner feature, there is no change in locking behavior; the
AIJHWM lock is not maintained and thus the Continuous qualifier
of the RMU Unload After_Journal command is not allowed.
In order to maintain the .aij end-of-file location lock,
processes that write to the after-image journal file must use
the lock to serialize writing to the journal. When the Continuous
LogMiner feature is not enabled, processes instead coordinate
allocating space in the after-image journal file and can write
to the file without holding a lock. The Continuous LogMiner
process requires that the AIJHWM lock be held during the .aij
I/O operation. In some cases, this can reduce overall throughput
to the .aij file because it prevents multiple processes from
issuing overlapped I/O write operations.
The Save_Metadata and Restore_Metadata qualifiers are
incompatible with the Continuous qualifier.
34.2.4.3 – Extend Size
Extend_size=integer
Specifies the file allocation and extension quantity for output
data files. The default extension size is 1000 blocks. Using a
larger value can help reduce output file fragmentation and can
improve performance when large amounts of data are extracted.
34.2.4.4 – Format
Format=options
If the Format qualifier is not specified, Oracle RMU outputs data
to a fixed-length binary flat file.
The format options are:
o Format=Binary
If you specify the Format=Binary option, Oracle RMU does not
perform any data conversion; data is output in a flat file
format with all data in the original binary state.
Output Fields describes the output fields and data types of an
output record in Binary format.
Table 19 Output Fields
Field Name   Data Type    Byte Length   Description
ACTION CHAR (1) 1 Indicates record state.
"M" indicates an insert or
modify action. "D" indicates a
delete action. "E" indicates
stream end-of-file (EOF)
when a callback routine is
being used. "P" indicates
a value from the command
line Parameter qualifier
when a callback routine is
being used (see Parameter
qualifier). "C" indicates
transaction commit information
when the Include=Action=Commit
qualifier is specified.
RELATION_ CHAR (31) 31 Table name. Space padded to 31
NAME characters.
RECORD_TYPE INTEGER 4 The Oracle Rdb internal
(Longword) relation identifier.
DATA_LEN SMALLINT 2 Length, in bytes, of the data
(Word) record content.
NBV_LEN SMALLINT 2 Length, in bits, of the null
(Word) bit vector content.
DBK BIGINT 8 Record's logical database key.
(Quadword) The database key is a 3-field
structure containing a 16-
bit line number, a 32-bit
page number and a 16-bit area
number.
START_TAD DATE VMS 8 Date/time of the start of the
(Quadword) transaction.
COMMIT_TAD DATE VMS 8 Date/time of the commitment of
(Quadword) the transaction.
TSN BIGINT 8 Transaction sequence number of
(Quadword) the transaction that performed
the record operation.
RECORD_ SMALLINT 2 Record version.
VERSION (Word)
Record Data Varies Actual data record field
contents.
Record NBV BIT VECTOR Null bit vector. There is
(array of one bit for each field in the
bits) data record. If a bit value
is 1, the corresponding field
is NULL; if a bit value is
0, the corresponding field
is not NULL and contains an
actual data value. The null
bit vector begins on a byte
boundary. Any extra bits in
the final byte of the vector
after the final null bit are
unused.
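The null bit vector layout can be illustrated with a small Python sketch. The low-to-high bit order within each byte is an assumption made for this illustration; the authoritative on-disk layout is defined by Oracle Rdb.

```python
def null_fields(nbv_bytes, field_count):
    """Return the set of field indexes flagged NULL by a null bit
    vector: one bit per field, a 1 bit meaning the field is NULL.
    Bits are taken low-to-high within each byte (an assumption for
    illustration). Extra trailing bits are ignored."""
    nulls = set()
    for i in range(field_count):
        byte, bit = divmod(i, 8)
        if nbv_bytes[byte] >> bit & 1:
            nulls.add(i)
    return nulls

# Five fields, only field index 2 NULL: bit pattern 00100 -> byte 0x04
print(null_fields(b'\x04', 5))          # {2}
```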
o Format=Dump
If you specify the Format=Dump option, Oracle RMU produces an
output format suitable for viewing. Each line of Dump format
output contains the column name (including LogMiner prefix
columns) and up to 200 bytes of the column data. Unprintable
characters are replaced with periods (.), and numbers and
dates are converted to text. NULL columns are indicated
with the string "NULL". This format is intended to assist
in debugging; the actual output contents and formatting may
change in future releases.
o Format=Text
If you specify the Format=Text option, Oracle RMU converts
all data to printable text in fixed-length columns before
unloading it. VARCHAR(n) strings are padded with blanks when
the specified string has fewer characters than n so that the
resulting string is n characters long.
o Format=(Delimited_Text [,delimiter-options])
If you specify the Format=Delimited_Text option, Oracle RMU
applies delimiters to all data before unloading it.
DATE VMS dates are output in the collatable time format, which
is yyyymmddhhmmsscc. For example, March 20, 1993 is output as:
1993032000000000.
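The collatable time format can be sketched with Python's standard datetime module (an illustration of the documented yyyymmddhhmmsscc layout, where "cc" is hundredths of a second):

```python
from datetime import datetime

def to_collatable(dt):
    """Render a timestamp in the yyyymmddhhmmsscc collatable format
    used for DATE VMS values in DELIMITED_TEXT output."""
    return dt.strftime('%Y%m%d%H%M%S') + '%02d' % (dt.microsecond // 10000)

# March 20, 1993 (midnight), as in the text above:
print(to_collatable(datetime(1993, 3, 20)))     # 1993032000000000
```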
Delimiter options are:
- Prefix=string
Specifies a prefix string that begins any column value in
the ASCII output file. If you omit this option, the column
prefix is a quotation mark (").
- Separator=string
Specifies a string that separates column values of a row.
If you omit this option, the column separator is a single
comma (,).
- Suffix=string
Specifies a suffix string that ends any column value in
the ASCII output file. If you omit this option, the column
suffix is a quotation mark (").
- Terminator=string
Specifies the row terminator that completes all the column
values corresponding to a row. If you omit this option, the
row terminator is the end of the line.
- Null=string
Specifies a string that, when found in the database column,
is unloaded as "NULL" in the output file.
The Null option can be specified on the command line as any
one of the following:
* A quoted string
* An empty set of double quotes ("")
* No string
The string that represents the null character must be
quoted on the Oracle RMU command line. You cannot specify a
blank space or spaces as the null character. You cannot use
the same character for the Null value and other Delimited_
Text options.
NOTE
The values for each of the strings specified in the
delimiter options must be enclosed within quotation
marks. Oracle RMU strips these quotation marks while
interpreting the values. If you want to specify a
quotation mark (") as a delimiter, specify a string
of four quotation marks. Oracle RMU interprets four
quotation marks as your request to use one quotation
mark as a delimiter. For example, Suffix = """".
Oracle RMU reads these quotation marks as follows:
o The first quotation mark is stripped from the string.
o The second and third quotation marks are interpreted
as your request for one quotation mark (") as a
delimiter.
o The fourth quotation mark is stripped.
This results in one quotation mark being used as a
delimiter.
Furthermore, if you want to specify a quotation mark as
part of the delimited string, you must use two quotation
marks for each quotation mark that you want to appear in
the string. For example, Suffix = "**""**" causes Oracle
RMU to use a delimiter of **"**.
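The quotation-mark rules in the note above amount to a simple unquoting step, sketched here in illustrative Python (a model of how the value is interpreted, not RMU source code): strip the outer pair of quotation marks, then collapse each doubled "" inside to a single ".

```python
def parse_delimiter(cli_string):
    """Interpret a quoted delimiter value as described in the note:
    the outer quotation marks are stripped, and each embedded pair of
    quotation marks stands for one literal quotation mark."""
    if not (cli_string.startswith('"') and cli_string.endswith('"')):
        raise ValueError('delimiter strings must be enclosed in quotes')
    return cli_string[1:-1].replace('""', '"')

print(parse_delimiter('""""'))        # a single quotation mark: "
print(parse_delimiter('"**""**"'))    # **"**
```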
34.2.4.5 – Ignore
Ignore=Old_Version[=table-list]
Specifies optional conditions or items to ignore.
The RMU Unload After_Journal command treats non-current record
versions in the AIJ file as a fatal error condition. That is,
attempting to extract a record that has a record version not the
same as the table's current maximum version results in a fatal
error.
There are, however, some very rare cases where a verb rollback
of a modification of a record may result in an old version of a
record being written to the after-image journal even though the
transaction did not actually complete a successful modification
to the record. The RMU Unload After_Journal command detects the
old record version and aborts with a fatal error in this unlikely
case.
When the Ignore=Old_Version qualifier is present, the RMU Unload
After_Journal command displays a warning message for each
record that has a non-current record version and the record
is not written to the output stream. The Old_Version qualifier
accepts an optional list of table names to indicate that only the
specified tables are permitted to have non-current record version
errors ignored.
34.2.4.6 – Include
Include=Action=include-type
Specifies if deleted or modified records or transaction commit
information is to be extracted from the after-image journal. The
following keywords can be specified:
o Commit
NoCommit
If you specify Commit, a transaction commit record is
written to each output stream as the final record for each
transaction. The commit information record is written to
output streams after all other records for the transaction
have been written. The default is NoCommit.
Because output streams are created with a default file name
of the table being extracted, it is important to specify a
unique file name on each occurrence of the output stream.
The definition of "unique" is such that when you write to a
non-file-oriented output device (such as a pipe or mailbox),
you must be certain to specify a specific file name on each
output destination. This means that rather than specifying
Output=MBA1234: for each output stream, you should use
Output=MBA1234:MBX, or any file name that is the same on all
occurrences of MBA1234:.
Failure to use a specific file name can result in additional,
and unexpected, commit records being returned. However, this
is generally a restriction only when using a stream-oriented
output device (as opposed to a disk file).
The binary record format is based on the standard LogMiner
output format. However, some fields are not used in the commit
action record. The binary format and contents of this record
are shown in Commit Record Contents. This record type is
written for all output data formats.
Table 20 Commit Record Contents
Field          Length (in bytes)   Contents
ACTION 1 "C"
RELATION 31 Zero
RECORD_TYPE 4 Zero
DATA_LEN 2 Length of RM_TID_LEN, AERCP_LEN, RM_
TID, AERCP
NBV_LEN 2 Zero
TID 4 Transaction (Attach) ID
PID 4 Process ID
START_TAD 8 Transaction Start Time/Date
COMMIT_TAD 8 Transaction Commit Time/Date
TSN 8 Transaction ID
RM_TID_LEN 4 Length of the Global TID
AERCP_LEN 4 Length of the AERCP information
RM_TID RM_TID_LEN Global TID
AERCP AERCP_LEN Restart Control Information
RDB$LM_ 12 USERNAME
USERNAME
When the original transaction took part in a distributed,
two-phase transaction, the RM_TID component is the Global
transaction manager (XA or DDTM) unique transaction ID.
Otherwise, this field contains binary zeroes.
The AIJ Extract Recovery Control Point (AERCP) information is
used to uniquely identify this transaction within the scope
of the database and after-image journal files. It contains
the .aij sequence number, VBN and TSN of the last "Micro Quiet
Point", and is used by the Continuous LogMiner process to
restart at a particular point in the journal sequence.
o Delete
NoDelete
If you specify Delete, pre-deletion record contents are
extracted from the aij file. If you specify NoDelete, no
pre-deletion record contents are extracted. The default is
Delete.
o Modify
NoModify
If you specify Modify, modified or added record contents are
extracted from the .aij file. If you specify NoModify, then no
modified or added record contents are extracted. The default
is Modify.
34.2.4.7 – IO Buffers
IO_Buffers=integer
Specifies the number of I/O buffers used for output data files.
The default number of buffers is two, which is generally
adequate. With sufficiently fast I/O subsystem hardware,
additional buffers may improve performance. However, using a
larger number of buffers will also consume additional virtual
memory and process working set.
34.2.4.8 – Log
Log
Nolog
Specifies that the extraction of the .aij file is reported
to SYS$OUTPUT or the destination specified with the Output
qualifier. When activity is logged, the output from the Log
qualifier provides the number of transactions committed or rolled
back. The default is the setting of the DCL VERIFY flag, which is
controlled by the DCL SET VERIFY command.
34.2.4.9 – Options
Options=options-list
The following options can be specified:
o File=file-spec
An options file contains a list of tables and output
destinations. The options file can be used instead of, or
along with, the Table qualifier to specify the tables to be
extracted. Each line of the options file must specify a table
name prefixed with "Table=". After the table name, the output
destination is specified as either "Output=", or "Callback_
Module=" and "Callback_Routine=", for example:
TABLE=tblname,OUTPUT=outfile
TABLE=tblname,CALLBACK_MODULE=image,CALLBACK_ROUTINE=routine
You can use the Record_Definition=file-spec option from the
Table qualifier to create a record definition file for the
output data. The default file type is .rrd; the default file
name is the name of the table.
You can use the Table_Definition=file-spec option from
the Table qualifier to create a file that contains an SQL
statement that creates a table to hold transaction data. The
default file type is .sql; the default file name is the name
of the table.
Each option in the Options=File qualifier must be fully
specified (no abbreviations are allowed) and followed with
an equal sign (=) and a value string. The value string must
be followed by a comma or the end of a line. Continuation
lines can be specified by using a trailing dash. Comments are
indicated by using the exclamation point (!) character.
You can use the asterisk (*) and the percent sign (%)
wildcard characters in the table name specification to select
all tables that satisfy the components you specify. The
asterisk matches zero or more characters; the percent sign
matches a single character.
For table name specifications that contain wildcard
characters, if the first character of the string is a pound
sign (#), the wildcard specification is changed to a "not
matching" comparison. This allows exclusion of tables based
on a wildcard specification. The pound sign designation is
only evaluated when the table name specification contains an
asterisk or percent sign.
For example, a table name specification of "#FOO%" indicates
that all table names that are four characters long and do not
start with the string "FOO" are to be selected.
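The preceding rules can be combined in a single options file;
for example (table, image, and file names are illustrative):
$ TYPE TABLES.OPTIONS
! Options file for RMU Unload After_Journal
TABLE=EMP*, -          ! trailing dash continues the line
OUTPUT=EMP_CHANGES.DAT
TABLE=SALES, CALLBACK_MODULE=MYIMAGE, CALLBACK_ROUTINE=MYROUTINE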
o Shared_Read
Specifies that the input after-image journal backup files are
to be opened with an RMS shared locking specification.
o Dump
Specifies that the contents of an input metadata file are to
be formatted and displayed. Typically, this information is
used as a debugging tool.
34.2.4.10 – Order AIJ Files
Order_AIJ_Files
NoOrder_AIJ_Files
By default, after-image journal files are processed in the order
that they are presented to the RMU Unload After_Journal command.
The Order_AIJ_Files qualifier specifies that the input after-
image journal files are to be processed in increasing order by
sequence number. This can be of benefit when you use wildcard (*
or %) processing of a number of input files. The .aij files are
each opened, the first block is read (to determine the sequence
number), and the files are closed prior to the sorting operation.
34.2.4.11 – Output
Output=file-spec
Redirects the log and trace output (selected with the Log and
Trace qualifiers) to the named file. If this qualifier is not
specified, the output generated by the Log and Trace qualifiers,
which can be voluminous, is displayed to SYS$OUTPUT.
34.2.4.12 – Parameter
Parameter=character-strings
Specifies one or more character strings that are concatenated
together and passed to the callback routine upon startup.
For each table that is associated with a user-supplied callback
routine, the callback routine is called with two parameters: the
length of the Parameter record and a pointer to the Parameter
record. The binary format and contents of this record are shown
in Parameter Record Contents.
Table 21 Parameter Record Contents
Field            Length (in bytes)   Contents
ACTION 1 "P"
RELATION 31 Relation name
RECORD_TYPE 4 Zero
DATA_LEN 2 Length of parameter string
NBV_LEN 2 Zero
LDBK 8 Zero
START_TAD 8 Zero
COMMIT_TAD 8 Zero
TSN 8 Zero
DATA ? Variable length parameter string
content
34.2.4.13 – Quick Sort Limit
Quick_Sort_Limit=integer
Specifies the maximum number of records that will be sorted with
the in-memory "quick sort" algorithm.
The default value is 5000 records. The minimum value that can be
specified is 10 and the maximum value is 100,000.
Larger values for the Quick_Sort_Limit qualifier may reduce
sort work file I/O at the expense of additional CPU time and
memory consumption. A value that is too small may result in
additional disk file I/O. In general, the default value should
be accepted.
34.2.4.14 – Restart
Restart=restart-point
Specifies an AIJ Extract Restart Control Point (AERCP) that
indicates the location to begin the extraction. The AERCP
indicates the transaction sequence number (TSN) of the last
extracted transaction along with a location in the .aij file
where a known "Micro-quiet point" exists.
When the Restart qualifier is not specified and no input after-
image journal files are specified on the command line, the
Continuous LogMiner process starts extracting at the beginning
of the earliest modified online after-image journal file.
When formatted for text display, the AERCP structure consists of
the six fields (the MBZ field is excluded) displayed as unsigned
integers separated by dashes; for example, "1-28-12-7-3202-3202".
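For example, to restart a Continuous LogMiner session from a
previously recorded AERCP value (a sketch; the restart value and
the database, table, and device names are illustrative):
$ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
     /RESTART = "1-28-12-7-3202-3202" -
     /TABLE = (NAME = EMPLOYEES, OUTPUT = MBA145:)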
34.2.4.15 – Restore Metadata
Restore_Metadata=file-spec
Specifies that the RMU Unload After_Journal command is to read
database metadata information from the specified file. The
Database parameter is required but the database itself is not
accessed when the Restore_Metadata qualifier is specified. The
default file type is .metadata. The Continuous qualifier is not
allowed when the Restore_Metadata qualifier is present.
Because the database is not available when the Restore_Metadata
qualifier is specified, certain database-specific actions cannot
be taken. For example, checks for after-image journaling are
disabled. Because the static copy of the metadata information is
not updated as database structure and table changes are made, it
is important to make sure that the metadata file is saved after
database DML operations.
When the Restore_Metadata qualifier is specified, additional
checks are made to ensure that the after-image journal files
were created using the same database that was used to create the
metadata file. These checks provide additional security and help
prevent accidental mismatching of files.
34.2.4.16 – Save Metadata
Save_Metadata=file-spec
Specifies that the RMU Unload After_Journal command is to
write metadata information to the named file. The Continuous,
Restore_Metadata, Table, and Options=file qualifiers and the
aij-file-name parameter are not allowed when the Save_Metadata
qualifier is present. The default file type is .metadata.
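For example, the following sketch first saves the metadata from
the database, then later extracts from a backup journal without
accessing the database itself (file names are illustrative):
$ RMU /UNLOAD /AFTER_JOURNAL /SAVE_METADATA = MFP.METADATA MFP.RDB
$ RMU /UNLOAD /AFTER_JOURNAL /RESTORE_METADATA = MFP.METADATA -
     MFP.RDB MFP.AIJBCK -
     /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)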
34.2.4.17 – Select
Select=selection-type
Specifies if the date and time of the Before and Since qualifiers
refer to transaction start time or transaction commit time.
The following options can be specified as the selection-type of
the Select qualifier:
o Commit_Transaction
Specifies that the Before and Since qualifiers select
transactions based on the time of the transaction commit.
o Start_Transaction
Specifies that the Before and Since qualifiers select
transactions based on the time of the transaction start.
The default for date selection is Commit_Transaction.
34.2.4.18 – Since
Since=date-time
Specifies the starting time for transactions to be extracted.
Depending on the value specified in the Select qualifier,
transactions that committed or started on or after the specified
Since date are selected. Information from transactions that
committed or started prior to the specified Since date is not
included in the output.
34.2.4.19 – Sort Workfiles
Sort_Workfiles=integer
Specifies the number of sort work files. The default number
of sort work files is two. When large transactions are being
extracted, using additional sort work files may improve
performance by distributing I/O loads over multiple disk devices.
Use the SORTWORKn (where n is a number from 0 to 9) logical names
to specify the location of the sort work files.
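For example (device and directory names are illustrative):
$ DEFINE SORTWORK0 DISK1:[SORTWORK]
$ DEFINE SORTWORK1 DISK2:[SORTWORK]
$ RMU /UNLOAD /AFTER_JOURNAL /SORT_WORKFILES = 2 MFP.RDB MFP.AIJBCK -
     /TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)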
34.2.4.20 – Statistics Interval
Statistics_Interval=integer
Specifies that statistics are to be displayed at regular
intervals so that you can evaluate the progress of the unload
operation.
The displayed statistics include:
o Elapsed time
o CPU time
o Buffered I/O
o Direct I/O
o Page faults
o Number of records unloaded for a table
o Total number of records extracted for all tables
If the Statistics_Interval qualifier is specified, the default
interval is 60 seconds. The minimum value is one second. If the
unload operation completes successfully before the first time
interval has passed, you will receive an informational message
on the number of files unloaded. If the unload operation is
unsuccessful before the first time interval has passed, you will
receive error messages and statistics on the number of records
unloaded.
At any time during the unload operation, you can press Ctrl/T to
display the current statistics.
34.2.4.21 – Symbols
Symbols
Nosymbols
Specifies whether DCL symbols are to be created, indicating
information about records extracted for each table.
If a large number of tables is being unloaded, the many
associated symbols that are created can exhaust the CLI symbol
table space. In this case, the error message "LIB-F-INSCLIMEM,
insufficient CLI memory" is returned. Specify the
Nosymbols qualifier to prevent creation of the symbols.
The default is Symbols, which causes the symbols to be created.
34.2.4.22 – Table
Table=(Name=table-name, table-options)
Specifies the name of a table to be unloaded and an output
destination. The table-name must be a table within the database.
Views, indexes, and system relations may not be unloaded from the
after-image journal file.
The asterisk (*) and the percent sign (%) wildcard characters
can be used in the table name specification to select all tables
that satisfy the components you specify. The asterisk matches
zero or more characters and the percent sign matches a single
character.
For table name specifications that contain wildcard characters,
if the first character of the string is a pound sign (#),
the wildcard specification is changed to a "not matching"
comparison. This allows exclusion of tables based on a wildcard
specification. The pound sign designation is only evaluated when
the table name specification contains an asterisk or percent
sign.
For example, a table name specification of "#FOO%" indicates that
all table names that are four characters long and do not start
with the string "FOO" are to be selected.
The following table-options can be specified with the Table
qualifier:
o Callback_Module=image-name, Callback_Routine=routine-name
The LogMiner process uses the OpenVMS library routine
LIB$FIND_IMAGE_SYMBOL to activate the specified shareable
image and locate the specified entry point routine name. This
routine is called with each extracted record. A final call is
made with the Action field set to "E" to indicate the end of
the output stream. These options must be specified together.
o Control
Use the Control table option to produce output files that
can be used by SQL*Loader to load the extracted data into an
Oracle database. This option must be used in conjunction with
fixed text format for the data file. The Control table option
can be specified on either the command line or in an options
file.
o Output=file-spec
If an Output file specification is present, unloaded records
are written to the specified location.
o Record_Definition=file-spec
The Record_Definition=file-spec option can be used to create a
record definition file for the output data. The default file
type is .rrd; the default file name is the name of the table.
o Table_Definition=file-spec
You can use the Table_Definition=file-spec option to create
a file that contains an SQL statement that creates a table
to hold transaction data. The default file type is .sql; the
default file name is the name of the table.
Unlike other qualifiers, for which only the final occurrence
on the command line is used, the Table qualifier can be
specified multiple times for the RMU Unload After_Journal
command. Each occurrence of the Table qualifier must specify a
different table.
34.2.4.23 – Trace
Trace
Notrace
Specifies that the unloading of the .aij file be traced. The
default is Notrace. When the unload operation is traced, the
output from the Trace qualifier identifies transactions in the
.aij file by TSNs and describes what Oracle RMU did with each
transaction during the unload process. You can specify the Log
qualifier with the Trace qualifier.
34.2.5 – Usage Notes
o To use the RMU Unload After_Journal command for a database,
you must have the RMU$DUMP privilege in the root file access
control list (ACL) for the database or the OpenVMS SYSPRV or
BYPASS privilege.
o Oracle Rdb after-image journaling protects the integrity
of your data by recording all changes made by committed
transactions to a database in a sequential log or journal
file. Oracle Corporation recommends that you enable after-
image journaling to record your database transaction activity
between full backup operations as part of your database
restore and recovery strategy. In addition to LogMiner for
Rdb, the after-image journal file is used to enable several
database performance enhancements such as the fast commit, row
cache, and hot standby features.
o When the Continuous qualifier is not specified, you can only
extract changed records from a backup copy of the after-image
journal files. You create this file using the RMU Backup
After_Journal command.
You cannot extract from an .aij file that has been optimized
with the RMU Optimize After_Journal command.
o As part of the extraction process, Oracle RMU sorts extracted
journal records to remove duplicate record updates. Because
.aij file extraction uses the OpenVMS Sort/Merge Utility
(SORT/MERGE) to sort journal records for large transactions,
you can improve the efficiency of the sort operation by
changing the number and location of the work files used by
SORT/MERGE. The number of work files is controlled by the
Sort_Workfiles qualifier of the RMU Unload After_Journal
command. The allowed values are 1 through 10 inclusive, with
a default value of 2. The location of these work files can be
specified with device specifications, using the SORTWORKn
logical name (where n is a number from 0 to 9). See the
OpenVMS documentation set for more information on using
SORT/MERGE. See the Oracle Rdb7 Guide to Database Performance
and Tuning for more information on using these Oracle Rdb
logical names.
o When extracting large transactions, the RMU Unload After_
Journal command may create temporary work files. You can
redirect the .aij rollforward temporary work files to a
different disk and directory location than the current default
directory by assigning a different directory to the RDM$BIND_
AIJ_WORK_FILE logical name in the LNM$FILE_DEV name table.
This can help to alleviate I/O bottlenecks that might occur on
the default disk.
o You can specify a search list by defining logicals
RDM$BIND_AIJ_WORK_FILEn, with each logical pointing to
a different device or directory. The numbers must start
with 1 and increase sequentially without any gaps. When an
AIJ file cannot be created due to a "device full" error,
Oracle Rdb looks for the next device in the search list
by translating the next sequential work file logical. If
RDM$BIND_AIJ_WORK_FILE is defined, it is used first.
o The RMU Unload After_Journal command can read either a backed
up .aij file on disk or a backed up .aij file on tape that is
in the Old_File format.
o You can select one or more tables to be extracted from an
after-image journal file. All tables specified by the Table
qualifier and all those specified in the Options file are
combined to produce a single list of output streams. A
particular table can be specified only once. Multiple tables
can be written to the same output destination by specifying
the exact same output stream specification (that is, by using
an identical file specification).
o At the completion of the unload operation, RMU creates a
number of DCL symbols that contain information about the
extraction statistics. For each table extracted, RMU creates
the following symbols:
- RMU$UNLOAD_DELETE_COUNT_tablename
- RMU$UNLOAD_MODIFY_COUNT_tablename
- RMU$UNLOAD_OUTPUT_tablename
The tablename component of the symbol is the name of the
table. When multiple tables are extracted in one operation,
multiple sets of symbols are created. The value for the
symbols RMU$UNLOAD_MODIFY_COUNT_tablename and RMU$UNLOAD_
DELETE_COUNT_tablename is a character string containing
the number of records returned for modified and deleted
rows. The RMU$UNLOAD_OUTPUT_tablename symbol is a character
string indicating the full file specification for the output
destination, or the shareable image name and routine name when
the output destination is an application callback routine.
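For example, after unloading the EMPLOYEES table you might
examine the resulting symbols as follows (a sketch):
$ SHOW SYMBOL RMU$UNLOAD_MODIFY_COUNT_EMPLOYEES
$ SHOW SYMBOL RMU$UNLOAD_DELETE_COUNT_EMPLOYEES
$ SHOW SYMBOL RMU$UNLOAD_OUTPUT_EMPLOYEES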
o When you use the Callback_Module and Callback_Routine option,
you must supply a shareable image with a universal symbol or
entry point for the LogMiner process to be able to call your
routine. See the OpenVMS documentation discussing the Linker
utility for more information about creating shareable images.
o Your Callback_Routine is called once for each output record.
The Callback_Routine is passed two parameters:
- The length of the output record, by longword value
- A pointer to the record buffer
The record buffer is a data structure of the same fields and
lengths written to an output destination.
o Because the Oracle RMU image is installed as a known image,
your shareable image must also be a known image. Use the
OpenVMS Install Utility to make your shareable image known.
You may wish to establish an exit handler to perform any
required cleanup processing at the end of the extraction.
o Segmented string data (BLOB) cannot be extracted using the
LogMiner process. Because the segmented string data is
related to the base table row by means of a database key,
there is no convenient way to determine what data to extract.
Additionally, the data type of an extracted column is changed
from LIST OF BYTE VARYING to BIGINT. This column contains
the DBKEY of the original BLOB data. Therefore, the contents
of this column should be considered unreliable. However, the
field definition itself is extracted as a quadword integer
representing the database key of the original segmented string
data. In generated table definition or record definition
files, a comment is added indicating that the segmented string
data type is not supported by the LogMiner for Rdb feature.
o Records removed from tables using the SQL TRUNCATE TABLE
statement are not extracted. The SQL TRUNCATE TABLE statement
does not journal each individual data record being removed
from the database.
o Records removed from tables using the SQL ALTER DATABASE
command with the DROP STORAGE AREA clause and CASCADE keyword
are not extracted. Any data deleted by this process is not
journalled.
o Records removed by dropping tables using the SQL DROP TABLE
statement are not extracted. The SQL DROP TABLE statement does
not journal each individual data record being removed from the
database.
o When the RDMS$CREATE_LAREA_NOLOGGING logical is defined, DML
operations are not available for extraction between the time
the table is created and when the transaction is committed.
o Tables that use the vertical record partitioning (VRP) feature
cannot be extracted using the LogMiner feature. LogMiner
software currently does not detect these tables. A future
release of Oracle Rdb will detect and reject access to
vertically partitioned tables.
o In binary format output, VARCHAR fields are not padded with
spaces in the output file. The VARCHAR data type is extracted
as a 2-byte count field and a fixed-length data field. The 2-
byte count field indicates the number of valid characters in
the fixed-length data field. Any additional contents in the
data field are unpredictable.
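For illustration only, a minimal Python sketch of decoding such a
binary VARCHAR field (Python, the little-endian unsigned count
word, and the ASCII payload are assumptions; this is not part of
Oracle RMU):

```python
import struct

def decode_varchar(field_bytes):
    """Decode a binary-format VARCHAR field: a 2-byte count
    followed by a fixed-length data area whose bytes beyond the
    count are unpredictable and must be ignored."""
    # Assumption: little-endian unsigned word for the count field.
    (count,) = struct.unpack_from('<H', field_bytes, 0)
    return field_bytes[2:2 + count].decode('ascii')

# A 10-byte data area holding the 5 valid characters "HELLO".
field = struct.pack('<H', 5) + b'HELLO' + b'\x00' * 5
print(decode_varchar(field))  # prints "HELLO"
```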
o You cannot extract changes to a table when the table
definition is changed within an after-image journal file.
Data definition language (DDL) changes to a table are not
allowed within an .aij file being extracted. All records in an
.aij file must be the current record version. If you are going
to perform DDL operations on tables that you wish to extract
using the LogMiner for Rdb, you should:
1. Back up your after-image journal files.
2. Extract the .aij files using the RMU Unload After_Journal
command.
3. Make the DDL changes.
o Do not use the OpenVMS Alpha High Performance Sort/Merge
utility (selected by defining the logical name SORTSHR
to SYS$SHARE:HYPERSORT) when using the LogMiner feature.
HYPERSORT supports only a subset of the library sort routines
that LogMiner requires. Make sure that the SORTSHR logical
name is not defined to HYPERSORT.
o The metadata information file used by the RMU Unload After_
Journal command is in an internal binary format. The contents
and format are not documented and are not directly accessible
by other utilities. The content and format of the metadata
information file is specific to a version of the RMU Unload
After_Journal utility. As new versions and updates of Oracle
Rdb are released, you will probably have to re-create the
metadata information file. The same version of Oracle Rdb must
be used to both write and read a metadata information file.
The RMU Unload After_Journal command verifies the format and
version of the metadata information file and issues an error
message in the case of a version mismatch.
o For debugging purposes, you can format and display the
contents of a metadata information file by using the
Options=Dump qualifier with the Restore_Metadata qualifier.
This dump may be helpful to Oracle Support engineers during
problem analysis. The contents and format of the metadata
information file are subject to change.
o If you use both the Output and Statistics_Interval qualifiers,
the output stream used for the log, trace, and statistics
information is flushed to disk (via the RMS $FLUSH service) at
each statistics interval. This makes sure that an output file
of trace and log information is written to disk periodically.
o You can specify input backup after-image journal files along
with the Continuous qualifier from the command line. The
specified after-image journal backup files are processed in
an offline mode. Once they have been processed, the RMU Unload
After_Journal command switches to "online" mode and the active
online journals are processed.
o When no input after-image journal files are specified on the
command line, the Continuous LogMiner starts extracting at the
beginning of the earliest modified online after-image journal
file. The Restart= qualifier can be used to control the first
transaction to be extracted.
o The Continuous LogMiner requires fixed-size circular after-
image journals.
o An after-image journal file cannot be backed up if there
are any Continuous LogMiner checkpoints in the .aij file.
The Continuous LogMiner moves its checkpoint to the physical
end-of-file for the online .aij file that it is extracting.
o To ensure that all records have been written by all database
users, a Continuous LogMiner process does not switch to the
next live journal file until that journal has been written to
by another process. Live journals SHOULD NOT be backed up while
the Continuous LogMiner process is processing a list of .aij
backup files. This is an unsupported activity and could lead
to the LogMiner losing data.
o If backed up after-image journal files are specified on the
command line and the Continuous qualifier is specified, the
journal sequence numbers must ascend directly from the backed
up journal files to the online journal files.
In order to preserve the after-image journal file sequencing
as processed by the RMU Unload After_Journal /Continuous
command, it is important that no after-image journal backup
operations are attempted between the start of the command and
when the Continuous LogMiner process reaches the live online
after-image journals.
o You can run multiple Continuous LogMiner processes at one
time on a database. Each Continuous LogMiner process acts
independently.
o The Continuous LogMiner reads the live after-image journal
file just behind writers to the journal. This will likely
increase the I/O load on the disk devices where the journals
are located. The Continuous LogMiner attempts to minimize
unneeded journal I/O by checking a "High Water Mark" lock to
determine if the journal has been written to and where the
highest written block location is located.
o Vertically partitioned tables cannot be extracted.
34.2.6 – Examples
Example 1
The following example unloads the EMPLOYEES table from the .aij
backup file MFP.AIJBCK.
RMU /UNLOAD /AFTER_JOURNAL MFP.RDB MFP.AIJBCK -
/TABLE = (NAME = EMPLOYEES, OUTPUT = EMPLOYEES.DAT)
Example 2
The following example simultaneously unloads the SALES,
STOCK, SHIPPING, and ORDERS tables from the .aij backup files
MFS.AIJBCK_1-JUL-1999 through MFS.AIJBCK_3-JUL-1999. Note that
the input .aij backup files are processed sequentially in the
order specified.
$ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
MFS.AIJBCK_1-JUL-1999, -
MFS.AIJBCK_2-JUL-1999, -
MFS.AIJBCK_3-JUL-1999 -
/TABLE = (NAME = SALES, OUTPUT = SALES.DAT) -
/TABLE = (NAME = STOCK, OUTPUT = STOCK.DAT) -
/TABLE = (NAME = SHIPPING, OUTPUT = SHIPPING.DAT) -
/TABLE = (NAME = ORDER, OUTPUT = ORDER.DAT)
Example 3
Use the Before and Since qualifiers to unload data based on a
time range. The following example extracts changes made to the
PLANETS table by transactions that committed between 1-SEP-1999
at 14:30 and 3-SEP-1999 at 16:00.
$ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB MFS.AIJBCK -
/TABLE = (NAME = PLANETS, OUTPUT = PLANETS.DAT) -
/BEFORE = "3-SEP-1999 16:00:00.00" -
/SINCE = "1-SEP-1999 14:30:00.00"
Example 4
The following example simultaneously unloads the SALES and
STOCK tables from all .aij backup files that match the wildcard
specification MFS.AIJBCK_1999-07-*. The input .aij backup files
are processed sequentially in the order returned from the file
system.
$ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
MFS.AIJBCK_1999-07-* -
/TABLE = (NAME = SALES, OUTPUT = SALES.DAT) -
/TABLE = (NAME = STOCK, OUTPUT = STOCK.DAT)
Example 5
The following example unloads the TICKER table from the .aij
backup files listed in the file called AIJ_BACKUP_FILES.DAT
(note the double quotation marks surrounding the at (@) character
and the file specification). The input .aij backup files are
processed sequentially. The output records are written to the
mailbox device called MBA127:. A separate program is already
running on the system, and it reads and processes the data
written to the mailbox.
$ RMU /UNLOAD /AFTER_JOURNAL MFS.RDB -
"@AIJ_BACKUP_FILES.DAT" -
/TABLE = (NAME = TICKER, OUTPUT = MBA127:)
Example 6
You can use the RMU Unload After_Journal command followed by RMU
Load commands to move transaction data from one database into
a change table in another database. You must create a record
definition (.rrd) file for each table being loaded into the
target database. The record definition files can be created by
specifying the Record_Definition option on the Table qualifier.
$ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB MYAIJ.AIJBCK -
/TABLE = ( NAME = MYTBL, -
OUTPUT = MYTBL.DAT, -
RECORD_DEFINITION=MYLOGTBL) -
/TABLE = ( NAME = SALE, -
OUTPUT=SALE.DAT, -
RECORD_DEFINITION=SALELOGTBL)
$ RMU /LOAD WAREHOUSE.RDB MYLOGTBL MYTBL.DAT -
/RECORD_DEFINITION = FILE = MYLOGTBL.RRD
$ RMU /LOAD WAREHOUSE.RDB SALELOGTBL SALE.DAT -
/RECORD_DEFINITION = FILE = SALELOGTBL.RRD
Example 7
You can use an RMS file containing the record structure
definition for the output file as an input file to the RMU Load
command. The record description uses the CDO record and field
definition format. This is the same format used by the RMU Load
and RMU Unload commands when the Record_Definition qualifier is
used. The default file extension is .rrd.
The record definitions for the fields that the LogMiner process
writes to the output .rrd file are shown in the following table.
These fields can be manually appended to a record definition file
for the actual user data fields being unloaded. The file can be
used to load a transaction table within a database. A transaction
table is the output that the LogMiner process writes to a table
consisting of sequential transactions performed in a database.
DEFINE FIELD RDB$LM_ACTION DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD RDB$LM_RELATION_NAME DATATYPE IS TEXT SIZE IS 31.
DEFINE FIELD RDB$LM_RECORD_TYPE DATATYPE IS SIGNED LONGWORD.
DEFINE FIELD RDB$LM_DATA_LEN DATATYPE IS SIGNED WORD.
DEFINE FIELD RDB$LM_NBV_LEN DATATYPE IS SIGNED WORD.
DEFINE FIELD RDB$LM_DBK DATATYPE IS SIGNED QUADWORD.
DEFINE FIELD RDB$LM_START_TAD DATATYPE IS DATE.
DEFINE FIELD RDB$LM_COMMIT_TAD DATATYPE IS DATE.
DEFINE FIELD RDB$LM_TSN DATATYPE IS SIGNED QUADWORD.
DEFINE FIELD RDB$LM_RECORD_VERSION DATATYPE IS SIGNED WORD.
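For illustration only, the fixed header described by these
definitions can be decoded with a small Python sketch (Python,
the little-endian byte order, and the unpadded packing are
assumptions; this is not part of Oracle RMU):

```python
import struct

# Field sizes follow the record definition above:
# ACTION (1 text), RELATION_NAME (31 text), RECORD_TYPE (longword),
# DATA_LEN (word), NBV_LEN (word), DBK (quadword), START_TAD and
# COMMIT_TAD (8-byte dates), TSN (quadword), RECORD_VERSION (word).
HEADER_FMT = '<1s31slhhqqqqh'
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 74 bytes

def parse_header(buf):
    """Split the fixed LogMiner header fields out of an output record."""
    (action, relation, record_type, data_len, nbv_len,
     dbk, start_tad, commit_tad, tsn, version) = struct.unpack_from(HEADER_FMT, buf)
    return {
        'action': action.decode('ascii'),
        'relation': relation.decode('ascii').rstrip(),  # blank-padded name
        'record_type': record_type,
        'data_len': data_len,
        'nbv_len': nbv_len,
        'dbk': dbk,
        'start_tad': start_tad,
        'commit_tad': commit_tad,
        'tsn': tsn,
        'record_version': version,
    }
```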
Example 8
Instead of using the Table qualifier, you can use an Options file
to specify the table or tables to be extracted, as shown in the
following example.
$ TYPE TABLES.OPTIONS
TABLE=MYTBL, OUTPUT=MYTBL.DAT
TABLE=SALES, OUTPUT=SALES.DAT
$ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB MYAIJ.AIJBCK -
/OPTIONS = FILE = TABLES.OPTIONS
Example 9
The following example unloads the EMPLOYEES table from the live
database and writes all change records to the MBA145 device. A
separate program is presumed to be reading the mailbox at all
times and processing the records.
$ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
/TABLE = (NAME = EMPLOYEES, OUTPUT = MBA145:)
Example 10
This example demonstrates unloading three tables (EMPLOYEES,
SALES, and CUSTOMERS) to a single mailbox. Even though the
mailbox is not a file-oriented device, the same file name is
specified for each. This is required because the LogMiner process
defaults the file name to the table name. If the same file name
is not explicitly specified for each output stream destination,
the LogMiner process assigns one mailbox channel for each table.
When the file name is the same for all tables, the LogMiner
process detects this and assigns only a single channel for all
input tables.
$ DEFINE MBX$ LOADER_MBX:X
$ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
/TABLE = (NAME = EMPLOYEES, OUTPUT = MBX$:) -
/TABLE = (NAME = SALES, OUTPUT = MBX$:) -
/TABLE = (NAME = CUSTOMERS, OUTPUT = MBX$:)
Example 11
In order to include transaction commit information, the
/Include=Action=Commit qualifier is specified in this example.
Additionally, the EMPLOYEES and SALES tables are extracted to two
different mailbox devices (read by separate processes). A commit
record is written to each mailbox after all changed records for
each transaction have been extracted.
$ RMU /UNLOAD /AFTER_JOURNAL /CONTINUOUS MFP.RDB -
/INCLUDE = ACTION = COMMIT -
/TABLE = (NAME = EMPLOYEES, OUTPUT = LOADER_EMP_MBX:X) -
/TABLE = (NAME = SALES, OUTPUT = LOADER_SAL_MBX:X)
Example 12
In this example, multiple input backup after-image journal
files are supplied. The Order_AIJ_Files qualifier specifies
that the .aij files are to be processed in ascending order of
.aij sequence number (regardless of file name). Prior to the
extraction operation, each input file is opened and the .aij Open
record is read. The .aij files are then opened and extracted, one
at a time, by ascending .aij sequence number.
$ RMU /UNLOAD /AFTER_JOURNAL /LOG /ORDER_AIJ_FILES -
MFP.RDB *.AIJBCK -
/TABLE = (NAME = C1, OUTPUT=C1.DAT)
%RMU-I-UNLAIJFL, Unloading table C1 to DGA0:[DB]C1.DAT;1
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]ABLE.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "5"
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]BAKER.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "4"
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]CHARLIE.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "6"
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]BAKER.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "4"
%RMU-I-AIJMODSEQ, next AIJ file sequence number will be 5
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]ABLE.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "5"
%RMU-I-AIJMODSEQ, next AIJ file sequence number will be 6
%RMU-I-LOGOPNAIJ, opened journal file DGA0:[DB]CHARLIE.AIJBCK;1
%RMU-I-AIJRSTSEQ, journal sequence number is "6"
%RMU-I-AIJMODSEQ, next AIJ file sequence number will be 7
%RMU-I-LOGSUMMARY, total 7 transactions committed
%RMU-I-LOGSUMMARY, total 0 transactions rolled back
---------------------------------------------------------------------
ELAPSED: 0 00:00:00.15 CPU: 0:00:00.08 BUFIO: 62 DIRIO: 19 FAULTS: 73
Table "C1" : 3 records written (3 modify, 0 delete)
Total : 3 records written (3 modify, 0 delete)
Example 13
The SQL record definitions for the fields that the LogMiner
process writes to the output are shown in the following
example. These fields can be manually appended to the table
creation command for the actual user data fields being unloaded.
Alternately, the Table_Definition qualifier can be used with the
Table qualifier or within an Options file to automatically create
the SQL definition file. This can be used to create a transaction
table of changed data.
SQL> CREATE TABLE MYLOGTABLE (
cont> RDB$LM_ACTION CHAR,
cont> RDB$LM_RELATION_NAME CHAR (31),
cont> RDB$LM_RECORD_TYPE INTEGER,
cont> RDB$LM_DATA_LEN SMALLINT,
cont> RDB$LM_NBV_LEN SMALLINT,
cont> RDB$LM_DBK BIGINT,
cont> RDB$LM_START_TAD DATE VMS,
cont> RDB$LM_COMMIT_TAD DATE VMS,
cont> RDB$LM_TSN BIGINT,
cont> RDB$LM_RECORD_VERSION SMALLINT ...);
Example 14
The following example is the transaction table record definition
(.rrd) file for the EMPLOYEES table from the PERSONNEL database:
DEFINE FIELD RDB$LM_ACTION DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD RDB$LM_RELATION_NAME DATATYPE IS TEXT SIZE IS 31.
DEFINE FIELD RDB$LM_RECORD_TYPE DATATYPE IS SIGNED LONGWORD.
DEFINE FIELD RDB$LM_DATA_LEN DATATYPE IS SIGNED WORD.
DEFINE FIELD RDB$LM_NBV_LEN DATATYPE IS SIGNED WORD.
DEFINE FIELD RDB$LM_DBK DATATYPE IS SIGNED QUADWORD.
DEFINE FIELD RDB$LM_START_TAD DATATYPE IS DATE.
DEFINE FIELD RDB$LM_COMMIT_TAD DATATYPE IS DATE.
DEFINE FIELD RDB$LM_TSN DATATYPE IS SIGNED QUADWORD.
DEFINE FIELD RDB$LM_RECORD_VERSION DATATYPE IS SIGNED WORD.
DEFINE FIELD EMPLOYEE_ID DATATYPE IS TEXT SIZE IS 5.
DEFINE FIELD LAST_NAME DATATYPE IS TEXT SIZE IS 14.
DEFINE FIELD FIRST_NAME DATATYPE IS TEXT SIZE IS 10.
DEFINE FIELD MIDDLE_INITIAL DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD ADDRESS_DATA_1 DATATYPE IS TEXT SIZE IS 25.
DEFINE FIELD ADDRESS_DATA_2 DATATYPE IS TEXT SIZE IS 20.
DEFINE FIELD CITY DATATYPE IS TEXT SIZE IS 20.
DEFINE FIELD STATE DATATYPE IS TEXT SIZE IS 2.
DEFINE FIELD POSTAL_CODE DATATYPE IS TEXT SIZE IS 5.
DEFINE FIELD SEX DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD BIRTHDAY DATATYPE IS DATE.
DEFINE FIELD STATUS_CODE DATATYPE IS TEXT SIZE IS 1.
DEFINE RECORD EMPLOYEES.
RDB$LM_ACTION .
RDB$LM_RELATION_NAME .
RDB$LM_RECORD_TYPE .
RDB$LM_DATA_LEN .
RDB$LM_NBV_LEN .
RDB$LM_DBK .
RDB$LM_START_TAD .
RDB$LM_COMMIT_TAD .
RDB$LM_TSN .
RDB$LM_RECORD_VERSION .
EMPLOYEE_ID .
LAST_NAME .
FIRST_NAME .
MIDDLE_INITIAL .
ADDRESS_DATA_1 .
ADDRESS_DATA_2 .
CITY .
STATE .
POSTAL_CODE .
SEX .
BIRTHDAY .
STATUS_CODE .
END EMPLOYEES RECORD.
Example 15
The following C source code segment demonstrates the structure
of a module that can be used as a callback module and routine
to process employee transaction information from the LogMiner
process. The routine, Employees_Callback, would be called by the
LogMiner process for each extracted record. The final time the
callback routine is called, the RDB$LM_ACTION field will be set
to "E" to indicate the end of the output stream.
#include <stdio.h>
typedef unsigned char date_type[8];
typedef unsigned char dbkey_type[8];
typedef unsigned char tsn_type[8];
typedef struct {
unsigned char rdb$lm_action;
char rdb$lm_relation_name[31];
unsigned int rdb$lm_record_type;
unsigned short int rdb$lm_data_len;
unsigned short int rdb$lm_nbv_len;
dbkey_type rdb$lm_dbk;
date_type rdb$lm_start_tad;
date_type rdb$lm_commit_tad;
tsn_type rdb$lm_tsn;
unsigned short int rdb$lm_record_version;
char employee_id[5];
char last_name[14];
char first_name[10];
char middle_initial[1];
char address_data_1[25];
char address_data_2[20];
char city[20];
char state[2];
char postal_code[5];
char sex[1];
date_type birthday;
char status_code[1];
} transaction_data;
void employees_callback (unsigned int data_len,
                         transaction_data data_buf)
{
    .
    .
    .
    return;
}
Use the C compiler (either VAX C or DEC C) to compile this
module. When linking this module, the symbol EMPLOYEES_CALLBACK
needs to be externalized in the shareable image. Refer to the
OpenVMS manual discussing the Linker utility for more information
about creating shareable images.
On OpenVMS Alpha systems, you can use a LINK command similar to
the following:
$ LINK /SHAREABLE = EXAMPLE.EXE EXAMPLE.OBJ + SYS$INPUT: /OPTIONS
SYMBOL_VECTOR = (EMPLOYEES_CALLBACK = PROCEDURE)
<Ctrl/Z>
On OpenVMS VAX systems, you can use a LINK command similar to the
following:
$ LINK /SHAREABLE = EXAMPLE.EXE EXAMPLE.OBJ + SYS$INPUT: /OPTIONS
UNIVERSAL = EMPLOYEES_CALLBACK
<Ctrl/Z>
Example 16
You can use triggers and a transaction table to construct a
method to replicate table data from one database to another
using RMU Unload After_Journal and RMU Load commands. This
data replication method is based on transactional changes
to the source table and requires no programming. Instead,
existing features of Oracle Rdb can be combined to provide this
functionality.
For this example, consider a simple customer information table
called CUST with a unique customer ID value, customer name,
address, and postal code. Changes to this table are to be
moved from an OLTP database to a reporting database system on
a periodic (perhaps nightly) basis.
First, in the reporting database, a customer table of the same
structure as the OLTP customer table is created. In this example,
this table is called RPT_CUST. It contains the same fields as the
OLTP customer table called CUST.
SQL> CREATE TABLE RPT_CUST (
cont> CUST_ID INTEGER,
cont> CUST_NAME CHAR (50),
cont> CUST_ADDRESS CHAR (50),
cont> CUST_POSTAL_CODE INTEGER);
Next, a temporary table is created in the reporting database for
the LogMiner-extracted transaction data from the CUST table. This
temporary table definition specifies ON COMMIT DELETE ROWS so
that data in the temporary table is deleted from memory at each
transaction commit. A temporary table is used because there is no
need to journal changes to the table.
SQL> CREATE GLOBAL TEMPORARY TABLE RDB_LM_RPT_CUST (
cont> RDB$LM_ACTION CHAR,
cont> RDB$LM_RELATION_NAME CHAR (31),
cont> RDB$LM_RECORD_TYPE INTEGER,
cont> RDB$LM_DATA_LEN SMALLINT,
cont> RDB$LM_NBV_LEN SMALLINT,
cont> RDB$LM_DBK BIGINT,
cont> RDB$LM_START_TAD DATE VMS,
cont> RDB$LM_COMMIT_TAD DATE VMS,
cont> RDB$LM_TSN BIGINT,
cont> RDB$LM_RECORD_VERSION SMALLINT,
cont> CUST_ID INTEGER,
cont> CUST_NAME CHAR (50),
cont> CUST_ADDRESS CHAR (50),
cont> CUST_POSTAL_CODE INTEGER) ON COMMIT DELETE ROWS;
For data to be populated in the RPT_CUST table in the reporting
database, a trigger is created for the RDB_LM_RPT_CUST
transaction table. This trigger is used to insert, update,
or delete rows in the RPT_CUST table based on the transaction
information from the OLTP database for the CUST table. The unique
CUST_ID field is used to determine if customer records are to be
modified or added.
SQL> CREATE TRIGGER RDB_LM_RPT_CUST_TRIG
cont> AFTER INSERT ON RDB_LM_RPT_CUST
cont>
cont> -- Modify an existing customer record
cont>
cont> WHEN (RDB$LM_ACTION = 'M' AND
cont> EXISTS (SELECT RPT_CUST.CUST_ID FROM RPT_CUST
cont> WHERE RPT_CUST.CUST_ID =
cont> RDB_LM_RPT_CUST.CUST_ID))
cont> (UPDATE RPT_CUST SET
cont> RPT_CUST.CUST_NAME = RDB_LM_RPT_CUST.CUST_NAME,
cont> RPT_CUST.CUST_ADDRESS =
cont> RDB_LM_RPT_CUST.CUST_ADDRESS,
cont> RPT_CUST.CUST_POSTAL_CODE =
cont> RDB_LM_RPT_CUST.CUST_POSTAL_CODE
cont> WHERE RPT_CUST.CUST_ID = RDB_LM_RPT_CUST.CUST_ID)
cont> FOR EACH ROW
cont>
cont> -- Add a new customer record
cont>
cont> WHEN (RDB$LM_ACTION = 'M' AND NOT
cont> EXISTS (SELECT RPT_CUST.CUST_ID FROM RPT_CUST
cont> WHERE RPT_CUST.CUST_ID =
cont> RDB_LM_RPT_CUST.CUST_ID))
cont> (INSERT INTO RPT_CUST VALUES
cont> (RDB_LM_RPT_CUST.CUST_ID,
cont> RDB_LM_RPT_CUST.CUST_NAME,
cont> RDB_LM_RPT_CUST.CUST_ADDRESS,
cont> RDB_LM_RPT_CUST.CUST_POSTAL_CODE))
cont> FOR EACH ROW
cont>
cont> -- Delete an existing customer record
cont>
cont> WHEN (RDB$LM_ACTION = 'D')
cont> (DELETE FROM RPT_CUST
cont> WHERE RPT_CUST.CUST_ID = RDB_LM_RPT_CUST.CUST_ID)
cont> FOR EACH ROW;
Within the trigger, the action to take (for example, to add,
update, or delete a customer record) is based on the RDB$LM_
ACTION field (defined as D or M) and the existence of the
customer record in the reporting database. For modifications,
if the customer record does not exist, it is added; if it does
exist, it is updated. For a deletion on the OLTP database, the
customer record is deleted from the reporting database.
The RMU Load command is used to read the output from the LogMiner
process and load the data into the temporary table where each
insert causes the trigger to execute. The Commit_Every qualifier
is used to avoid filling memory with the customer records in
the temporary table because as soon as the trigger executes, the
record in the temporary table is no longer needed.
$ RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB OLTP.AIJBCK -
/TABLE = (NAME = CUST, -
OUTPUT = CUST.DAT, -
RECORD_DEFINITION = RDB_LM_RPT_CUST.RRD)
$ RMU /LOAD REPORT_DATABASE.RDB RDB_LM_RPT_CUST CUST.DAT -
/RECORD_DEFINITION = FILE = RDB_LM_RPT_CUST.RRD -
/COMMIT_EVERY = 1000
Example 17
The following example shows how to produce a control file that
can be used by SQL*Loader to load the extracted data into an
Oracle database.
$ RMU/UNLOAD/AFTER TEST_DB TEST_DB_AIJ1_BCK -
/FORMAT=TEXT -
/TABLE=(NAME=TEST_TBL, -
OUTPUT=LOGMINER_TEXT.TXT, -
CONTROL=LOGMINER_CONTROL.CTL, -
TABLE_DEFINITION=TEST_TBL.SQL)
This example produces the following control file. The control
file is specific to a fixed-length record text file. NULLs are
handled by using the NULLIF clause for the column definition that
references a corresponding null byte filler column. There is a
null byte filler column for each column in the underlying table
but not for the LogMiner specific columns at the beginning of
the record. If a column is NULL, the corresponding RDB$LM_NBn
filler column is set to 1. VARCHAR columns are padded with blanks
but the blanks are ignored by default when the file is loaded by
SQL*Loader. If you wish to preserve the blanks, you can update
the control file and add the "PRESERVE BLANKS" clause.
-- Control file for LogMiner transaction data 25-AUG-2000 12:15:50.47
-- From database table "TEST_DB"
LOAD DATA
INFILE 'DISK:[DIRECTORY]LOGMINER_TEXT.TXT;'
APPEND INTO TABLE 'RDB_LM_TEST_TBL'
(
RDB$LM_ACTION POSITION(1:1) CHAR,
RDB$LM_RELATION_NAME POSITION(2:32) CHAR,
RDB$LM_RECORD_TYPE POSITION(33:44) INTEGER EXTERNAL,
RDB$LM_DATA_LEN POSITION(45:50) INTEGER EXTERNAL,
RDB$LM_NBV_LEN POSITION(51:56) INTEGER EXTERNAL,
RDB$LM_DBK POSITION(57:76) INTEGER EXTERNAL,
RDB$LM_START_TAD POSITION(77:90) DATE "YYYYMMDDHHMISS",
RDB$LM_COMMIT_TAD POSITION(91:104) DATE "YYYYMMDDHHMISS",
RDB$LM_TSN POSITION(105:124) INTEGER EXTERNAL,
RDB$LM_RECORD_VERSION POSITION(125:130) INTEGER EXTERNAL,
TEST_COL POSITION(131:150) CHAR NULLIF RDB$LM_NB1 = 1,
RDB$LM_NB1 FILLER POSITION(151:151) INTEGER EXTERNAL
)
Example 18
The following example creates a metadata file for the database
MFP. This metadata file can be used as input to a later RMU
Unload After_Journal command.
$ RMU /UNLOAD /AFTER_JOURNAL MFP /SAVE_METADATA=MF_MFP.METADATA /LOG
%RMU-I-LMMFWRTCNT, Wrote 107 objects to metadata file
"DUA0:[DB]MF_MFP.METADATA;1"
Example 19
This example uses a previously created metadata information file
for the database MFP. The database is not accessed during the
unload operation; the database metadata information is read from
the file. As the extract operation no longer directly relies on
the source database, the AIJ and METADATA files can be moved to
another system and extracted there.
$ RMU /UNLOAD /AFTER_JOURNAL /RESTORE_METADATA=MF_MFP.METADATA -
MFP AIJ_BACKUP1 /TABLE=(NAME=TAB1, OUTPUT=TAB1) /LOG
%RMU-I-LMMFRDCNT, Read 107 objects from metadata file
"DUA0:[DB]MF_MFP.METADATA;1"
%RMU-I-UNLAIJFL, Unloading table TAB1 to DUA0:[DB]TAB1.DAT;1
%RMU-I-LOGOPNAIJ, opened journal file DUA0:[DB]AIJ_BACKUP1.AIJ;1
%RMU-I-AIJRSTSEQ, journal sequence number is "7216321"
%RMU-I-AIJMODSEQ, next AIJ file sequence number will be 7216322
%RMU-I-LOGSUMMARY, total 2 transactions committed
%RMU-I-LOGSUMMARY, total 0 transactions rolled back
----------------------------------------------------------------------
ELAPSED: 0 00:00:00.15 CPU: 0:00:00.01 BUFIO: 11 DIRIO: 5 FAULTS: 28
Table "TAB1" : 1 record written (1 modify, 0 delete)
Total : 1 record written (1 modify, 0 delete)
35 – Verify
Checks the internal integrity of database data structures.
The RMU Verify command does not verify the data itself. You
can verify specific portions of a database or the integrity of
routines stored in the database by using qualifiers.
If you specify the RMU Verify command without any qualifiers, a
database root file verification and full page verification of the
area inventory page (AIP) and the area bit map (ABM) pages in the
default RDB$SYSTEM storage area are performed. Also, the snapshot
files are validated and, if after-image journaling is enabled, the
after-image journal files are validated.
The RMU Verify command checks space area management (SPAM) pages
for proper format. The contents of the individual entries are
verified as the individual data pages are verified. The command
does not attempt to determine if data within rows is reasonable
or plausible.
35.1 – Description
The RMU Verify command checks the internal integrity of database
data structures. Oracle Corporation strongly recommends that
you verify your database following any kind of serious system
malfunction. You should also verify your database as part of
routine maintenance, perhaps before performing backup operations.
You can use the various qualifiers to perform verification of the
maximum number of database areas in the time available.
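For example, a routine maintenance verification of an entire
database, with progress logged, might be entered as follows (the
database name shown is illustrative):
$ RMU/VERIFY/ALL/LOG MF_PERSONNEL.RDB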
NOTE
If you use the RMU Convert command with the Nocommit
qualifier to convert a database created prior to Oracle Rdb
Version 6.1, and then use the RMU Convert command with the
Rollback qualifier to revert to the prior database structure
level, subsequent verify operations might return an RMU-W-
PAGTADINV warning message. See the Usage_Notes help entry
under this command for details.
35.2 – Format
RMU/Verify root-file-spec

Command Qualifiers                     Defaults

/All                                   See description
/Areas[=storage-area-list]             No area checking performed
/Checksum_Only                         Full page verification
/[No]Constraints[=(options)]           /Noconstraints
/[No]Data                              /Data when /Indexes is used
/End=page-number                       /End=last-page
/[No]Functions                         /Nofunctions
/Incremental                           See description
/Indexes[=index-list]                  No index checking performed
/Lareas[=logical-area-list]            No LAREA checking performed
/[No]Log                               Current DCL verify value
/Output=file-spec                      SYS$OUTPUT
/[No]Root                              /Root
/[No]Routines                          /Noroutines
/[No]Segmented_Strings                 See description
/Snapshots                             No snapshot verification
/Start=page-number                     /Start=1
/Transaction_Type=option               /Transaction_Type=Protected
35.3 – Parameters
35.3.1 – root-file-spec
The Oracle Rdb database to verify. The default file extension is
.rdb.
35.4 – Command Qualifiers
35.4.1 – All
All
When you specify the All qualifier, the entire database is
checked, including any external routines. Specifying the All
qualifier is equivalent to issuing the list of qualifiers shown
in the following command:
$ RMU/VERIFY/ROOT/CONSTRAINTS/INDEXES/DATA/AREAS -
_$ /SNAPSHOTS/LAREAS/ROUTINES MF_PERSONNEL.RDB
If you do not specify the All qualifier, the verification
requested by the other qualifiers you specify is performed.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
35.4.2 – Areas
Areas[=storage-area-list]
Specifies the storage areas of the database to verify. You can
specify storage areas by name or by the area's ID number. When
you specify the storage area by name, each storage area name must
be the name defined in the SQL CREATE STORAGE AREA statement for
the storage area, not the storage area file name. If you list
multiple storage areas, separate the storage area names or ID
numbers with a comma, and enclose the storage area list within
parentheses. The Areas qualifier with no arguments (or Areas=*)
directs Oracle RMU to verify all storage areas of the database.
With a single-file database, if you do not specify a storage area
name, the RDB$SYSTEM storage area is verified.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
The Areas qualifier can be used with indirect file references.
See the Indirect-Command-Files Help entry for more information.
When the Areas qualifier is not specified, Oracle RMU does not
verify any storage areas.
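For example, the following command verifies two storage areas by
name (the area names shown are illustrative):
$ RMU/VERIFY/AREAS=(EMPIDS_LOW,EMPIDS_MID) MF_PERSONNEL.RDB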
35.4.3 – Checksum Only
Checksum_Only
Specify with the Areas qualifier to perform only checksum
verification of pages. This reduces the degree of verification
done on a database page. While the RMU Verify command executes
faster with the Checksum_Only qualifier than without it, it does
not verify pages completely. This qualifier allows you to make
trade-offs between speed of verification and thoroughness of
verification. For more information on these trade-offs, see the
Oracle Rdb Guide to Database Maintenance.
If this command finds a problem with a certain page, then that
page can be verified in depth by using other qualifiers, such as
Indexes, Areas, or Lareas.
Note that you can accomplish the same degree of verification
during a backup operation by specifying the Checksum qualifier
with the RMU Backup command. The advantage of specifying the
Checksum qualifier with the RMU Backup command is that the
checksum operation takes place concurrently with the backup
operation.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
The default is for full verification of pages.
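For example, a quick checksum-only pass over all storage areas
might be entered as follows (the database name is illustrative):
$ RMU/VERIFY/AREAS/CHECKSUM_ONLY MF_PERSONNEL.RDB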
35.4.4 – Constraints
Constraints
Constraints[=(Constraints=(list))]
Constraints[=(Tables=(list))]
Constraints[=(Tables=(list), Constraints=(list))]
Noconstraints
Specifies which constraints Oracle RMU is to load and execute
to check the integrity of data in the database. In addition,
external routines (procedures and functions) referenced by
constraints are activated and executed. Any exceptions produced
cause the verify operation to report a failure. See the
description of the routines qualifier for information on how
routines are activated and executed.
The options are as follows:
o Tables=(list)
Specifies the table for which constraints are to be checked.
If you specify more than one table, separate each table name
with a comma and enclose the list in parentheses. You can
specify the wildcard character, the asterisk (*), instead of
a table list to indicate that you want constraints checked
for all tables in the database. This option is useful if you
issued an RMU Load command with the Noconstraints qualifier.
o Constraints=(list)
Specifies the constraints which you want Oracle RMU to load
and execute. If you specify more than one constraint, separate
each constraint name with a comma and enclose the list in
parentheses. You can specify the wildcard character, the
asterisk (*), instead of a constraint list to indicate that
you want all constraints checked for the database.
o (Tables=(list), Constraints=(list))
You can specify both the Tables and Constraints options to
specify which combination of tables and constraints you
want Oracle RMU to verify. If you specify the wildcard
character, the asterisk (*), for the Tables option and a
named constraint or constraints for the Constraint option
within the same Oracle RMU command line, Oracle RMU verifies
all constraints.
See the Oracle Rdb Guide to Database Maintenance for more
information on verifying constraints.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
The default is the Noconstraints qualifier. When you specify
the Noconstraints qualifier, Oracle RMU does not verify any
constraints.
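For example, the following command checks all constraints defined
on one table (the table name shown is illustrative):
$ RMU/VERIFY/CONSTRAINTS=(TABLES=(EMPLOYEES)) MF_PERSONNEL.RDB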
35.4.5 – Data
Data
Nodata
Specifies whether consistency checks are made between indexes and
tables. When you specify the Data qualifier, Oracle RMU checks
that every row to which an index points is a valid row for the
table and it checks that every row in a table is pointed to by
every index defined on the table. See the description of the
Indexes qualifier for more information on how these comparisons
are made.
The Data qualifier is valid only when it is used with the Indexes
qualifier.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
The default is the Data qualifier.
35.4.6 – End
End=page-number
Specifies the last page to be verified. This qualifier is used in
conjunction with the Areas and Lareas qualifiers. If you do not
use the End qualifier, Oracle RMU verifies all pages between the
first page (or the page specified in the Start qualifier) and the
last page of the storage area.
The End qualifier is valid only when you specify the Areas or
Lareas qualifier.
See the Usage Notes entry in this command for the rules that
determine which other qualifiers can be used in combination on
the same RMU Verify command line.
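For example, to verify only pages 1 through 100 of a single
storage area (the area name shown is illustrative):
$ RMU/VERIFY/AREAS=EMPIDS_LOW/START=1/END=100 MF_PERSONNEL.RDB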
35.4.7 – Functions
Functions
Nofunctions
This qualifier is synonymous with the Routines qualifier. See the
description of the Routines qualifier.
35.4.8 – Incremental
Incremental
Directs Oracle RMU to verify database pages that have changed
since the last full or incremental verification. Oracle RMU
stores timestamps in the database root file for both full
and incremental verifications. To determine which pages
have changed since the last verify operation, Oracle RMU
compares these timestamps with the page timestamps. The page
timestamps are updated whenever pages are updated. An incremental
verification performs the same number of I/O operations as a
full verification, but the incremental verification takes fewer
CPU cycles than a full verification, allowing you to perform
incremental verifications more frequently than you would perform
full ones. The default is to perform a full verification.
NOTE
If you use the Incremental qualifier with the RMU Verify
command, Oracle Corporation recommends that you use it only
with the All qualifier and not with any other qualifiers.
The timestamps in the database root file are updated during
full and incremental verifications only when the All
qualifier is specified. Therefore, if you do not specify
the All qualifier, two successive incremental verifications
of the same storage area of the database perform the same
verifications. This means that the second incremental
verification does not pass over pages verified by the first
incremental verification, contrary to what you might expect.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
If the Incremental qualifier is not specified, all requested
pages are verified, regardless of the timestamp.
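For example, an incremental verification of the entire database,
used with the All qualifier as recommended in the note above,
might be entered as follows (the database name is illustrative):
$ RMU/VERIFY/ALL/INCREMENTAL MF_PERSONNEL.RDB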
35.4.9 – Indexes
Indexes[=index-list]
Verifies the integrity of all indexes in the database (except
disabled indexes) if you specify the Indexes or Indexes=* qualifier;
verifies the integrity of a specific index, or of multiple
indexes if you provide an index list. If you list multiple
indexes, separate the index names with a comma, and enclose the
index list within parentheses.
Beginning with Oracle Rdb V7.0, Oracle RMU uses a new method to
verify indexes. In prior versions, the verify operation tried
to retrieve the table row to which the index pointed. Beginning
with Oracle Rdb V7.0, the verify operation creates a sorted list
of all dbkeys for a table and a sorted list of all dbkeys in an
index. By comparing these two lists, the verify operation can
detect any cases of an index missing an entry for a data row. In
addition, the verify operation runs faster. This comparison of
dbkeys occurs at the end of the verify operation. If you specify
the Log qualifier, you see messages similar to the following to
indicate that the comparison is occurring:
%RMU-I-IDXVERSTR, Index data verification of logical area
60 (DEGREES) started.
%RMU-I-IDXVEREND, Index data verification of logical area
60 finished.
In addition, beginning in Oracle Rdb V7.0, when you verify an
index with the Data qualifier (the default), Oracle RMU also
verifies the logical areas referenced by the indexes. See Example
5 in the Examples help entry under this command.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
By default, Oracle RMU does not verify indexes.
The Indexes qualifier can be used with indirect file references.
See the Indirect-Command-Files Help entry for more information.
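For example, the following command verifies a single index
together with the table data it references (the index name shown
is illustrative):
$ RMU/VERIFY/INDEXES=EMP_EMPLOYEE_ID/DATA MF_PERSONNEL.RDB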
35.4.10 – Lareas
Lareas[=logical-area-list]
Specifies the storage area pages allocated to a logical area
or logical areas that you want verified. If you list multiple
logical areas, separate the logical area names with a comma,
and enclose the logical area list within parentheses. The Lareas
qualifier with no arguments (or Lareas=*) directs Oracle RMU to
verify all logical areas of the database. When a logical area is
verified, each page in the area is read and verified sequentially
starting at the first page.
If an index name is specified with the Lareas qualifier, the
index is verified, but it is not verified as a logical area.
In this case, the first index record is fetched (which could be
on any page) and the verification follows the structure of the
index. (For example, if the index record points to other index
records, then those records are fetched and verified. If the
index node is a leaf node, then the data record is fetched and
verified. These data pages might reside in different logical
areas.)
Use this qualifier to verify one or more tables.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
The Lareas qualifier can be used with indirect file references.
See the Indirect-Command-Files Help entry for more information.
By default, Oracle RMU does not verify logical areas.
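For example, to verify the storage area pages allocated to one
table as a logical area (the logical area name is illustrative):
$ RMU/VERIFY/LAREAS=EMPLOYEES MF_PERSONNEL.RDB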
35.4.11 – Log
Log
Nolog
Specifies whether the processing of the command is reported to
SYS$OUTPUT. By default, SYS$OUTPUT is your terminal. Specify the
Log qualifier to request that each verify operation be displayed
to SYS$OUTPUT and the Nolog qualifier to prevent this display.
If you specify neither, the default is the current setting of the
DCL verify switch. (The DCL SET VERIFY command controls the DCL
verify switch.)
When you specify the Log qualifier, Oracle RMU displays the time
taken to verify each database area specified and the total time
taken for the complete verification operation. The display from
the Log qualifier is also useful for showing you how much of the
verification operation is completed.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
35.4.12 – Output
Output=file-spec
Specifies the name of the file where output will be sent. The
default is SYS$OUTPUT. When you specify a file name, the default
output file type is .lis.
If you specify both the Log qualifier and the Output qualifier,
the messages produced by the Log qualifier and any error messages
are directed into the output file specification. If you specify
only the Output qualifier, only error messages are captured
in the output file. See the Usage Notes entry in this command
for the rules that determine which qualifiers can be used in
combination on the same RMU Verify command line.
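For example, the following command captures both the log messages
and any error messages in a listing file (the file name shown is
illustrative):
$ RMU/VERIFY/ALL/LOG/OUTPUT=VERIFY_REPORT.LIS MF_PERSONNEL.RDB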
35.4.13 – Root
Root
Noroot
Specifies that, in a multifile database, only fields in the
database root (.rdb) file and all the pointers to the database
(.rda, .snp, .aij) files are verified. The snapshot (.snp) files
are validated; that is, only the first page is checked to make
sure that it is indeed an .snp file and belongs to the database
being verified. If after-image journaling is enabled, the .aij
files are validated. The AIP and ABM pages are verified when you
specify the Root qualifier.
If you specify the Noroot qualifier, and no other qualifiers,
only the AIP pages are verified. If you specify the Noroot
qualifier, and the Areas or the Lareas qualifier, ABM and SPAM
pages are verified as the other pages in the storage area or
logical area are verified.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
You can specify the Root qualifier for a single-file database.
The default is the Root qualifier.
35.4.14 – Routines
Routines
Noroutines
The Routines qualifier verifies the integrity of all routine
(function and procedure) definitions stored in the database.
Oracle RMU performs the verification by activating and
deactivating each external routine, one at a time. Any exceptions
produced cause the verify operation to report a failure.
The Routines qualifier verifies that the shareable image is
located where expected, is accessible, and that the correct entry
point is at this location. The expected location is that which
was specified in the SQL CREATE FUNCTION or CREATE PROCEDURE
statement. If the shareable image is not in the expected
location, is not accessible, or the entry point is not at the
expected location, you receive an error message.
If Oracle RMU is installed with SYSPRV, any external routine
image for a routine that is registered with client-site binding
must meet the following criteria or the RMU Verify command cannot
check for the existence of the entry point for the routine in the
image:
o It must be installed.
o It must have been specified with an image file specification
that uses only logicals defined with the DCL /SYSTEM and
/EXECUTIVE qualifiers.
In addition, the user issuing the RMU Verify command must have
OpenVMS SYSPRV in order for the routine to be activated.
The Noroutines qualifier specifies that routine interfaces are not
verified.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
By default, Oracle RMU does not verify any routines.
35.4.15 – Segmented Strings
Segmented_Strings
Nosegmented_Strings
Verifies all list (segmented string) data for each column, in
each table, in either of the two types of storage areas: read/write
and read-only (on read/write disk devices). When you specify
the RMU Verify command with the All qualifier, all list data
(segmented strings) in all tables is verified in the database.
The Segmented_Strings qualifier can only be used with the Lareas
qualifier and has the following meanings when used with this
qualifier:
o RMU Verify command with the Lareas=* and the Segmented_Strings
qualifiers.
Segmented strings in all tables are verified.
o RMU Verify command with the Lareas=(LAREA_1, . . . ,LAREA_N)
and the Segmented_Strings qualifiers.
Segmented strings in tables LAREA_1, . . . ,LAREA_N are
verified.
If the Segmented_Strings qualifier is omitted, there is no
list data verification.
The Segmented_Strings qualifier verifies all list data in
each column of each row in the database. The verify operation
tries to fetch all pointer segments and all data segments from
the pointer segments, and verifies all header information,
including the total length of the segment, the number of
pointer segments, the number of data segments, and the length
of the longest segment for the list data.
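For example, the following command (shown as an illustration)
verifies the segmented strings in all tables of the mf_personnel
database:

$ RMU/VERIFY/LAREAS=*/SEGMENTED_STRINGS MF_PERSONNEL.RDB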
35.4.16 – Snapshots
Snapshots
Verifies the snapshot area of the specified storage areas up
to the page header level. The Snapshots qualifier only performs
checksum verification of snapshot pages.
The Snapshots qualifier is valid only when you also specify the
Areas qualifier.
See the Usage Notes entry in this command for the rules that
determine which other qualifiers can be used in combination on
the same RMU Verify command line.
The Snapshots qualifier can be used with indirect file
references. See the Indirect-Command-Files Help entry for more
information.
By default, Oracle RMU does not verify snapshots.
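For example, the following command (shown as an illustration)
verifies the snapshot areas of all storage areas in the
mf_personnel database:

$ RMU/VERIFY/AREAS=*/SNAPSHOTS MF_PERSONNEL.RDB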
35.4.17 – Start
Start=page-number
Specifies the first page to be verified. This qualifier is used
in conjunction with the Areas and Lareas qualifiers. If you do
not use the Start qualifier, the verification begins with the
first page of the storage area.
The Start qualifier is valid only when you specify the Areas or
Lareas qualifier also.
See the Usage Notes entry in this command for the rules that
determine which other qualifiers can be used in combination on
the same RMU Verify command line.
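For example, the following command (shown as an illustration)
verifies pages 1 through 500 of the EMPIDS_LOW storage area; the
End qualifier specifies the last page to be verified:

$ RMU/VERIFY/AREAS=EMPIDS_LOW/START=1/END=500 MF_PERSONNEL.RDB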
35.4.18 – Transaction Type
Transaction_Type=option
Sets the retrieval lock for the storage areas being verified.
Use one of the following keywords to control the transaction
mode:
o Automatic
When Transaction_Type=Automatic is specified, the transaction
type depends on the current database settings for snapshots
(enabled, deferred, or disabled), transaction modes available
to this user, and the standby status of the database.
o Read_Only
Starts a Read_Only transaction.
o Exclusive
Starts a Read_Write transaction and reserves the table for
Exclusive_Read.
o Protected
Starts a Read_Write transaction and reserves the table for
Protected_Read. Protected mode is the default.
o Shared
Starts a Read_Write transaction and reserves the table for
Shared_Read.
Use one of the following options with the Isolation_Level=option
keyword to specify the transaction isolation level:
o Read_Committed
o Repeatable_Read
o Serializable. Serializable is the default setting.
Refer to the SET TRANSACTION statement in the Oracle Rdb SQL
Reference Manual for a complete description of the transaction
isolation levels.
Specify the wait setting by using one of the following keywords:
o Wait
Waits indefinitely for a locked resource to become available.
Wait is the default behavior.
o Wait=n
The value you supply for n is the transaction lock timeout
interval. When you supply this value, Oracle Rdb waits n
seconds before aborting the wait and the RMU Verify session.
Specifying a wait timeout interval of zero is equivalent to
specifying Nowait.
o Nowait
Does not wait for a locked resource to become available.
See the Usage Notes entry in this command for the rules that
determine which qualifiers can be used in combination on the same
RMU Verify command line.
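For example, the following command (shown as an illustration; the
exact keyword combination is an assumption based on the options
described above) verifies all storage areas in a read-only
transaction and does not wait for locked resources:

$ RMU/VERIFY/AREAS=*/TRANSACTION_TYPE=(READ_ONLY,NOWAIT) MF_PERSONNEL.RDB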
35.5 – Usage Notes
o To use the RMU Verify command for a database, you must have
the RMU$VERIFY privilege in the root file access control
list (ACL) for the database or the OpenVMS SYSPRV or BYPASS
privilege. You must also have the SQL DBADM privilege.
o The rules that determine which qualifiers can be used in
combination on the same RMU Verify command line are as
follows:
- The Incremental, Log, Output, and Transaction_Type
qualifiers can be used in combination with any other
qualifiers on the same RMU Verify command line.
- If the All qualifier is specified, the only other
qualifiers you can specify on the same command line are:
* Noroutines (or Nofunctions)
* Nosegmented_Strings
- If the All qualifier is not specified, then any combination
of the following qualifiers can be specified on the same
command line:
* Areas
* Constraints
* [No]Functions
* Indexes
* Lareas
* [No]Root
* [No]Routines
- You must specify the Areas qualifier to specify the
Checksum_Only or Snapshots qualifier.
- You must specify the Lareas qualifier to specify the
Segmented_Strings qualifier.
- You must specify either the Areas or Lareas qualifier to
specify the Start and End qualifiers.
- You cannot specify the Indexes qualifier on the same RMU
Verify command line with the Start and End qualifiers.
- You must specify the Indexes qualifier to specify the
[No]Data qualifier.
o You can significantly improve the performance of RMU Verify
for your database by employing the verification strategies
described in the Oracle Rdb Guide to Database Maintenance. In
addition, detected asynchronous prefetch should be enabled to
achieve the best performance of this command. Beginning with
Oracle Rdb V7.0, by default, detected asynchronous prefetch
is enabled. You can determine the setting for your database by
issuing the RMU Dump command with the Header qualifier.
If detected asynchronous prefetch is disabled, and you do not
want to enable it for the database, you can enable it for your
Oracle RMU operations by defining the following logicals at
the process level:
$ DEFINE RDM$BIND_DAPF_ENABLED 1
$ DEFINE RDM$BIND_DAPF_DEPTH_BUF_CNT P1
P1 is a value between 10 and 20 percent of the user buffer
count.
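For example, the following command (shown as an illustration)
displays the database header, where the detected asynchronous
prefetch setting can be found:

$ RMU/DUMP/HEADER MF_PERSONNEL.RDB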
o If you use the RMU Convert command with the Nocommit qualifier
to convert a database created prior to Oracle Rdb Version
6.0, and then use the RMU Convert command with the Rollback
qualifier to revert to the previous database structure level,
subsequent RMU Verify commands might produce messages such as
the following:
%RMU-W-PAGTADINV, area RDB$SYSTEM, page 1
contains incorrect time stamp
expected between 14-APR-1992 15:55:25.74
and 24-SEP-1993 13:26:06.41, found:
Beginning in Oracle Rdb Version 6.0, the fast incremental
backup feature alters the page header of updated SPAM pages to
record which page ranges have been updated since the previous
full backup operation. The RMU Verify command in versions
of Oracle Rdb prior to Version 6.0 does not contain code to
understand the updated page header and issues the PAGTADINV
warning when encountering an updated SPAM page header. The
update page headers are only detected by the RMU Verify
command and do not affect the run-time operation of Oracle
Rdb. To correct the updated SPAM pages, you can use the RMU
Repair command with the Spams qualifier as follows:
$ RMU/VERIFY/ALL/NOLOG MF_PERSONNEL
%RMU-W-PAGTADINV, area RDB$SYSTEM, page 1
contains incorrect time stamp
expected between 14-APR-1992 15:55:25.74
and 24-SEP-1993 13:26:06.41, found:
$
$ RMU/REPAIR/SPAMS MF_PERSONNEL
%RMU-I-FULBACREQ, A full backup of this database should be performed
after RMU/REPAIR
$
$ RMU/VERIFY/ALL/NOLOG MF_PERSONNEL
$
o The RMU Verify command ignores any constraint that has
been disabled (with the SQL ALTER TABLE enable-disable
clause) unless you specify the constraint name in the
Constraints=(Constraints=list) qualifier of the RMU Verify
command. If the Constraints qualifier is specified without a
list, disabled constraints are ignored.
By specifying the name of a disabled constraint in the
Constraints=(Constraints=list) qualifier, you can check it
periodically without having to reenable it. You might use
this to provide a business rule in the database that needs
checking only occasionally. This is a useful practice if the
overhead of checking the constraint during operating hours
is too expensive, or if it is already being enforced by the
application.
o The number of work files used by the RMU Verify command is
controlled by the RDMS$BIND_SORT_WORKFILES logical name. The
allowable values are 1 through 10 inclusive, with a default
value of 2. The location of these work files can be specified
with device specifications, using the SORTWORKn logical
name (where n is a number from 0 to 9). See the OpenVMS
documentation set for more information on using SORT/MERGE.
See the Oracle Rdb7 Guide to Database Performance and Tuning
for more information on using these Oracle Rdb logical names.
Because two separate sort streams are used internally by the
RMU Verify command when the Indexes qualifier is specified,
the number of work files specified is used for each stream.
For example, if RDMS$BIND_SORT_WORKFILES is defined to be 10,
twenty work files are created.
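For example, the following commands (the device and directory
names are illustrative) request four sort work files per stream
and place the first two on separate devices:

$ DEFINE RDMS$BIND_SORT_WORKFILES 4
$ DEFINE SORTWORK0 DISK1:[SCRATCH]
$ DEFINE SORTWORK1 DISK2:[SCRATCH]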
35.6 – Examples
Example 1
The following command verifies the entire mf_personnel database
because the All qualifier is specified:
$ RMU/VERIFY/ALL/LOG MF_PERSONNEL.RDB
Example 2
The following command verifies the storage areas EMPIDS_LOW,
EMPIDS_MID, and EMPIDS_OVER in the mf_personnel database:
$ RMU/VERIFY/AREAS=(EMPIDS_LOW,EMPIDS_MID,EMPIDS_OVER)/LOG -
_$ MF_PERSONNEL.RDB
Example 3
The following command performs only a checksum verification on
all the storage areas in the database called large_database. The
Checksum_Only qualifier quickly detects obvious checksum problems
with the database. If a checksum problem is found on a page, you
can dump the page by using the RMU Dump command, and verify the
appropriate logical areas and indexes.
$ RMU/VERIFY/AREAS=*/CHECKSUM_ONLY/LOG LARGE_DATABASE
Example 4
The following command verifies the Candidates and Colleges
tables:
$ RMU/VERIFY/LAREAS=(CANDIDATES,COLLEGES)/LOG MF_PERSONNEL.RDB
Example 5
The following example displays the behavior of the index
verification method Oracle RMU employs beginning in Oracle Rdb
V7.0. The first RMU Verify command shows the log output when the
command is issued under Oracle Rdb V6.1. The second RMU Verify
command shows the log output when the command is issued under
Oracle Rdb V7.0.
$ @SYS$LIBRARY:RDB$SETVER 6.1
$ SET DEF DB1:[V61]
$ RMU/VERIFY/INDEXES=EMP_EMPLOYEE_ID/DATA MF_PERSONNEL.RDB/LOG
%RMU-I-BGNROOVER, beginning root verification
%RMU-I-ENDROOVER, completed root verification
%RMU-I-DBBOUND, bound to database "DB1:[V61]MF_PERSONNEL.RDB;1"
%RMU-I-OPENAREA, opened storage area RDB$SYSTEM for protected retrieval
%RMU-I-BGNAIPVER, beginning AIP pages verification
%RMU-I-ENDAIPVER, completed AIP pages verification
%RMU-I-BGNABMSPM, beginning ABM pages verification
%RMU-I-OPENAREA, opened storage area MF_PERS_SEGSTR for protected retrieval
%RMU-I-ENDABMSPM, completed ABM pages verification
%RMU-I-BGNNDXVER, beginning verification of index EMP_EMPLOYEE_ID
%RMU-I-OPENAREA, opened storage area EMPIDS_LOW for protected retrieval
%RMU-I-OPENAREA, opened storage area EMPIDS_MID for protected retrieval
%RMU-I-OPENAREA, opened storage area EMPIDS_OVER for protected retrieval
%RMU-I-ENDNDXVER, completed verification of index EMP_EMPLOYEE_ID
%RMU-I-CLOSAREAS, releasing protected retrieval lock on all storage areas
%RMU-S-ENDVERIFY, elapsed time for verification : 0 00:00:09.14
$ @SYS$LIBRARY:RDB$SETVER 7.0
$ SET DEF DB1:[V70]
$ RMU/VERIFY/INDEXES=EMP_EMPLOYEE_ID/DATA MF_PERSONNEL.RDB/LOG
%RMU-I-BGNROOVER, beginning root verification
%RMU-I-ENDROOVER, completed root verification
%RMU-I-DBBOUND, bound to database "DB1:[V70]MF_PERSONNEL.RDB;1"
%RMU-I-OPENAREA, opened storage area RDB$SYSTEM for protected retrieval
%RMU-I-BGNAIPVER, beginning AIP pages verification
%RMU-I-ENDAIPVER, completed AIP pages verification
%RMU-I-BGNABMSPM, beginning ABM pages verification
%RMU-I-ENDABMSPM, completed ABM pages verification
%RMU-I-BGNNDXVER, beginning verification of index EMP_EMPLOYEE_ID
%RMU-I-OPENAREA, opened storage area EMPIDS_LOW for protected retrieval
%RMU-I-OPENAREA, opened storage area EMPIDS_MID for protected retrieval
%RMU-I-OPENAREA, opened storage area EMPIDS_OVER for protected retrieval
%RMU-I-ENDNDXVER, completed verification of index EMP_EMPLOYEE_ID
%RMU-I-BSGPGLARE, beginning verification of EMPLOYEES logical area
as part of EMPIDS_LOW storage area
%RMU-I-ESGPGLARE, completed verification of EMPLOYEES logical area
as part of EMPIDS_LOW storage area
%RMU-I-BSGPGLARE, beginning verification of EMPLOYEES logical area
as part of EMPIDS_MID storage area
%RMU-I-ESGPGLARE, completed verification of EMPLOYEES logical area
as part of EMPIDS_MID storage area
%RMU-I-BSGPGLARE, beginning verification of EMPLOYEES logical area
as part of EMPIDS_OVER storage area
%RMU-I-ESGPGLARE, completed verification of EMPLOYEES logical area
as part of EMPIDS_OVER storage area
%RMU-I-IDXVERSTR, Beginning index data verification of logical area 69
(EMPLOYEES).
%RMU-I-IDXVEREND, Completed data verification of logical area 69.
%RMU-I-IDXVERSTR, Beginning index data verification of logical area 70
(EMPLOYEES).
%RMU-I-IDXVEREND, Completed data verification of logical area 70.
%RMU-I-IDXVERSTR, Beginning index data verification of logical area 71
(EMPLOYEES).
%RMU-I-IDXVEREND, Completed data verification of logical area 71.
%RMU-I-CLOSAREAS, releasing protected retrieval lock on all storage areas
%RMU-S-ENDVERIFY, elapsed time for verification : 0 00:00:11.36
Example 6
The following example loads data into a table, verifies
the table, and then identifies loaded rows that violated a
constraint.
Because the Noconstraints qualifier is specified with the RMU
Load command, data that violates database integrity might be
added to the database. The second RMU Verify command verifies the
table that was just loaded and reveals that data that violates
constraints on the table was indeed loaded.
An SQL command is issued to determine which rows violated the
constraint so that they can either be removed from the database,
or added to the EMPLOYEES table to restore database integrity.
The final RMU Verify command checks the constraint again to
ensure that changes made have restored the integrity of the
database.
$ !
$ ! Load data into the JOB_HISTORY table of the mf_personnel database.
$ ! Specify the Noconstraints qualifier:
$ !
$ RMU/LOAD/RECORD_DEFINITION=(FILE=JOB_HIST.RRD, FORMAT=TEXT) -
_$ MF_PERSONNEL.RDB JOB_HISTORY JOB_HIST.UNL/NOCONSTRAINTS
%RMU-I-DATRECREAD, 18 data records read from input file.
%RMU-I-DATRECSTO, 18 data records stored.
$ !
$ ! Verify the JOB_HISTORY table:
$ !
$ RMU/VERIFY/CONSTRAINTS=(TABLE=JOB_HISTORY) MF_PERSONNEL.RDB
%RMU-W-CONSTFAIL, Verification of constraint "JOB_HISTORY_FOREIGN1"
has failed.
$ !
$ ! Issue SQL statements to determine what the definition of the
$ ! constraint is and which of the loaded rows violated
$ ! the constraint. Then issue an SQL command to insert data that will
$ ! restore the data integrity of the database:
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL.RDB';
SQL> SHOW TABLE JOB_HISTORY
.
.
.
JOB_HISTORY_FOREIGN1
Foreign Key constraint
Column constraint for JOB_HISTORY.EMPLOYEE_ID
Evaluated on COMMIT
Source:
JOB_HISTORY.EMPLOYEE_ID REFERENCES EMPLOYEES (EMPLOYEE_ID)
.
.
.
SQL> SELECT DISTINCT(EMPLOYEE_ID)
cont> FROM JOB_HISTORY
cont> WHERE NOT EXISTS
cont> (SELECT *
cont> FROM EMPLOYEES AS E
cont> WHERE E.EMPLOYEE_ID = JOB_HISTORY.EMPLOYEE_ID);
EMPLOYEE_ID
10164
10165
10166
10167
10168
10169
6 rows selected
SQL> INSERT INTO EMPLOYEES (EMPLOYEE_ID, LAST_NAME)
cont> VALUES ('10164', 'Smith');
SQL> INSERT INTO EMPLOYEES (EMPLOYEE_ID, LAST_NAME)
cont> VALUES ('10165', 'Frederico');
SQL> INSERT INTO EMPLOYEES (EMPLOYEE_ID, LAST_NAME)
cont> VALUES ('10166', 'Watts');
SQL> INSERT INTO EMPLOYEES (EMPLOYEE_ID, LAST_NAME)
cont> VALUES ('10167', 'Risley');
SQL> INSERT INTO EMPLOYEES (EMPLOYEE_ID, LAST_NAME)
cont> VALUES ('10168', 'Pietryka');
SQL> INSERT INTO EMPLOYEES (EMPLOYEE_ID, LAST_NAME)
cont> VALUES ('10169', 'Jussaume');
SQL> COMMIT;
SQL> EXIT
$ !
$ ! Check that data integrity has been restored:
$ !
$ RMU/VERIFY/CONSTRAINTS=(CONSTRAINTS=JOB_HISTORY_FOREIGN1, -
_$ TABLE=JOB_HISTORY) MF_PERSONNEL.RDB
$ !
$ ! No messages are returned. Data integrity has been restored.
Example 7
The following example creates an external function in which
the external name is incorrect. When the function is verified,
Oracle RMU cannot find the entry point and returns an error. The
external function is then dropped and re-created correctly, and
the verification succeeds:
$ ! Attach to database and create a function. The external name is
$ ! mistyped:
$ !
$ SQL
SQL> ATTACH 'filename mf_personnel.rdb';
SQL> create function SQRT (in double precision) returns double precision
cont> external name MTH$SORT location 'SYS$SHARE:MTHRTL'
cont> language GENERAL
cont> GENERAL PARAMETER STYLE;
SQL> COMMIT;
SQL> EXIT;
$ !
$ ! Verify the function:
$ !
$ RMU/VERIFY/ROUTINES MF_PERSONNEL.RDB
%RMU-E-NOENTRPT, No entry point found for external routine SQRT.
Image name is SYS$SHARE:MTHRTL.
Entry point is MTH$SORT.
$ !
$ ! Oracle RMU cannot find the entry point. Drop the
$ ! function and reenter correctly:
$ !
$ SQL
SQL> ATTACH 'FILENAME mf_personnel.rdb';
SQL> DROP FUNCTION SQRT;
SQL> create function SQRT (in double precision) returns double precision
cont> external name MTH$SQRT location 'SYS$SHARE:MTHRTL'
cont> language GENERAL
cont> GENERAL PARAMETER STYLE;
SQL> COMMIT;
SQL> EXIT;
$ !
$ ! Verification is now successful:
$ !
$ RMU/VERIFY/ROUTINES MF_PERSONNEL.RDB
Example 8
The following example demonstrates that the RMU Verify command
verifies disabled constraints only when you explicitly specify
the disabled constraint.
$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL.RDB';
SQL> -- Disable the EMP_SEX_VALUES constraint.
SQL> ALTER TABLE EMPLOYEES DISABLE CONSTRAINT EMP_SEX_VALUES;
SQL> COMMIT;
SQL> -- Insert a value that violates the EMP_SEX_VALUES constraint.
SQL> INSERT INTO EMPLOYEES
cont> (EMPLOYEE_ID, LAST_NAME, SEX)
cont> VALUES ('99999', 'JICKLING', 'G');
1 row inserted
SQL> COMMIT;
SQL> EXIT;
$ !
$ ! The following two verify commands do not return an error
$ ! because the disabled constraint is not explicitly specified.
$ !
$ RMU/VERIFY MF_PERSONNEL.RDB
$ RMU/VERIFY MF_PERSONNEL.RDB/CONSTRAINTS
$ !
$ ! The following verify command returns a warning message to
$ ! inform you that data that violates the disabled constraint
$ ! has been inserted into the database.
$ !
$ RMU/VERIFY MF_PERSONNEL.RDB/CONSTRAINT=(CONSTRAINT=EMP_SEX_VALUES)
%RMU-W-CONSTFAIL, Verification of constraint "EMP_SEX_VALUES" has failed.
36 – New Features
Refer to the release notes for this version of Oracle RMU for all
new and changed features.
37 – rrd_file_syntax
The record definition files (.rrd) used by the RMU Load, RMU
Unload, and RMU Analyze commands are used to describe the field
data types and field ordering for binary and delimited text data
files. The .rrd files contain a simple language similar to that
accepted by the CDO interface of the Oracle CDD Repository. The
RMU Unload command automatically generates a record definition
file from the table definition in the database.
This appendix describes the .rrd language accepted by the RMU
Load command. It covers a useful subset of the language supported
by RMU; clauses from CDO that RMU accepts but ignores are not
described.
37.1 – DEFINE FIELD statement
Each record definition file must include at least one DEFINE
FIELD statement to describe the data type of a field in the
unloaded record. This statement has two formats:
o a format that defines a new name
define field name_string datatype is text size is 20 characters.
o a format that references another, previously defined, field
define field first_name based on name_string.
RMU Unload generates the DEFINE FIELD statement with just
the DATATYPE clause. The full syntax is shown in DEFINE FIELD
Statement.
Figure 2 DEFINE FIELD Statement

define-field =

    DEFINE FIELD <name>
        { BASED ON <name>
        | [ DATATYPE IS datatypes ]
          [ DESCRIPTION IS /* comment */ ]
          [ FILLER ] } .
The following example of the DEFINE FIELD statement is more
complete, showing the use of annotations (DESCRIPTION clause)
and based-on fields.
define field name_string
description is
/* This is a generic string type to be used for based on fields */
datatype is text size is 20 characters.
define field first_name
based on name_string.
define field last_name
based on name_string.
define record PERSON
description is
/* Record which describes the PERSON.DAT RMS file */.
first_name.
last_name.
end.
37.2 – DEFINE RECORD statement
The DEFINE RECORD statement defines the ordering of the fields
within the file. A field may only be used once. The name of
the field is not used for column name matching unless the
Corresponding qualifier is used with the RMU Load command.
Figure 3 DEFINE RECORD Statement

define-record =

    DEFINE RECORD <name> [ DESCRIPTION IS /* comment */ ] .
        <fieldname> [ alignment-clause ] .
        ...
    END [ <name> ] [ RECORD ] .

alignment-clause =

    [ ALIGNED ON { BYTE | WORD | LONGWORD | QUADWORD | OCTAWORD }
        [ BOUNDARY ] ]
The ALIGNED ON clause can be used to adjust for alignment added
explicitly or implicitly by host language applications. For
instance, on OpenVMS Alpha many 3GL compilers naturally align
fields to take advantage of the Alpha processor hardware, which
executes more efficiently when data is well aligned. The default
is BYTE alignment.
In the following example, field C is expected to start on a
quadword boundary, so A is assigned the first longword, the
second longword is ignored, and finally C is assigned the last
longword value.
define field A datatype is signed longword.
define field C datatype is signed longword.
define record RMUTEST.
A .
C aligned on quadword boundary.
end RMUTEST record.
37.2.1 – Usage notes
o When the DCL verify process is enabled using the DCL SET
VERIFY command or the DCL F$VERIFY lexical function, RMU Load
writes the .rrd file being processed to SYS$OUTPUT.
o The VARCHAR or VARYING STRING data type is a two-part type:
  an UNSIGNED WORD (16-bit integer) length prefix followed by
  a fixed TEXT portion. The length prefix defines the actual
  data in the string. There is no equivalent to this data type
  in the record definition language because the type is not
  supported by the OpenVMS Record Management Services (RMS)
  environment.
  If you unload a VARCHAR column, it is converted to a
  fixed-length (space-padded) TEXT field. However, TEXT to
  VARCHAR load and unload is handled appropriately when you use
  the delimited format. In this format, RMU Unload outputs only
  the text specified by the length prefix of the VARCHAR
  column. Likewise, RMU Load uses the length of the delimited
  string to set the length in the database.
o If a field in the data file is not to be loaded into the
  table, it can be ignored by using the FILLER attribute.
  This allows RMU Load to use a data file that has more fields
  than there are columns in the database table.
o The <name> referenced in the END RECORD clause must be the
same as the name defined by the DEFINE RECORD statement.
o The record definition files are not used when the Record_
definition qualifier is omitted. In this case RMU Unload
generates a structured internal file format which contains
both the record definition and the data. This format allows
the unloading of LIST OF BYTE VARYING columns and NULL values.
This format is the same as that generated by SQL EXPORT for
its interchange (.rbr) file. Use the RMU Dump Export command
to format the contents of this file for display.
$ rmu/unload mf_personnel employees employees.unl
$ rmu/dump/export/nodata employees.unl
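The FILLER attribute described above can be sketched as follows.
In this illustrative record definition, the second field in the
data file is ignored during the load:

define field emp_id datatype is text size is 5 characters.
define field unused datatype is text size is 10 characters filler.
define field last_name datatype is text size is 20 characters.
define record EMP_REC.
    emp_id.
    unused.
    last_name.
end EMP_REC record.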
37.3 – Additional Data Types
The data types that are supported by Oracle Rdb are described in
the Oracle Rdb SQL Reference Manual.
Figure 4 Data Types

datatypes =

    { date-time-types
    | TEXT character-size-clause [ character-set-clause ]
    | F_FLOATING
    | G_FLOATING
    | { LEFT SEPARATE NUMERIC | PACKED DECIMAL | UNSIGNED NUMERIC }
          [ scale-clause ] numeric-size-clause
    | SIGNED { BYTE | WORD | LONGWORD | QUADWORD } [ scale-clause ] }

numeric-size-clause =

    [ SIZE IS ] <number> [ DIGITS ]

character-size-clause =

    [ SIZE IS ] <number> [ CHARACTERS ]

scale-clause =

    [ SCALE IS ] <number>

character-set-clause =

    [ CHARACTER SET IS <cset-name> ]
The cset-name is any character set supported by Oracle Rdb. These
character sets are expanded from release to release. Please refer
to the Oracle Rdb SQL Reference Manual for new character sets and
more information on the character sets listed below.
When several character sets are available for the same language
(such as Kanji and Hanyu), each is based on a different local or
international standard that differs from the others in format and
structure. For instance, SHIFT_JIS is widely used with Microsoft
Windows systems in Japan, but differs in format from the DEC_
KANJI character set supported by Hewlett Packard Company's
DECwindows product.
Table 22 Character sets supported by Oracle RMU Load
Character
Set Description
ARABIC Arabic characters as defined by the ASMO 449 and
ISO9036 standards
BIG5 A set of characters used by the Taiwan information
industry
DEC_HANYU Traditional Chinese characters (Hanyu) as used
in Taiwan and defined by standard CNS11643:1986,
supplemental characters as defined by DTSCC and
ASCII
DEC_HANZI Chinese (Bopomofo) characters as defined by
standard GB2312:1980 and ASCII characters
DEC_KANJI Japanese characters as defined by the JIS
X0208:1990 standard, Hankaku Katakana characters
as defined by JIS X0201:1976 prefixed by SS2
(8E hex), user-defined characters, and ASCII
characters
DEC_KOREAN Korean characters as defined by standard KS
C5601:1987 and ASCII characters
DEC_MCS A set of international alphanumeric characters,
including characters with diacritical marks
DEC_SICGCC Traditional Chinese characters (Hanyu) as used in
Taiwan and defined by standard CNS11643:1986 and
ASCII
DEVANAGARI Devanagari characters as defined by the ISCII:1988
standard
DOS_LATIN1 DOS Latin 1 code
DOS_LATINUS DOS Latin US code
HANYU Traditional Chinese characters (Hanyu) as used in
Taiwan and defined by the standard CNS11643:1986
HANZI Chinese (Bopomofo) characters as defined by
standard GB2312:1980
HEX Translation of text data to and from hexadecimal
data
ISOLATINARABIC Arabic characters as defined by the ISO/IEC 8859-
6:1987 standard
ISOLATINCYRILLIC Cyrillic characters as defined by the ISO/IEC
8859-5:1987 standard
ISOLATINGREEK Greek characters as defined by the ISO/IEC 8859-
7:1987 standard
ISOLATINHEBREW Hebrew characters as defined by the ISO/IEC 8859-
8:1987 standard
KANJI Japanese characters as defined by the JIS
X0208:1990 standard and user-defined characters
KATAKANA Japanese phonetic alphabet (Hankaku Katakana), as
defined by standard JIS X0201:1976
KOREAN Korean characters as defined by standard KS
C5601:1987
SHIFT_JIS Japanese characters as defined by the JIS
X0208:1990 standard using Shift_JIS specific
encoding scheme, Hankaku Katakana characters as
defined by JIS X0201:1976, and ASCII characters
TACTIS Thai characters based on TACTIS (Thai API
Consortium/Thai Industrial Standard) which is
a combination of ISO 646-1983 and TIS 620-2533
standards
UNICODE Unicode characters as described by Unicode
Standard and ISO/IEC 10646 transformation format
UTF-16
UTF8 Unicode characters as described by Unicode
Standard and ISO/IEC 10646 UTF-encoding form
WIN_ARABIC MS Windows Code Page 1256
8-Bit Latin/Arabic
WIN_CYRILLIC MS Windows Code Page 1251
8-Bit Latin/Cyrillic
WIN_GREEK MS Windows Code Page 1253
8-Bit Latin/Greek
WIN_HEBREW MS Windows Code Page 1255
8-Bit Latin/Hebrew
WIN_LATIN1 MS Windows Code Page 1252
8-Bit West European
37.4 – Date-Time Syntax
The date-time syntax in .rrd files generated by the RMU
Unload command with the Record_Definition=(File=file) command
is compatible with the date-time syntax support of Oracle
CDD/Repository V6.1 and later versions.
The date-time data type has the following syntax in the .rrd
file.

date-time-types =

    { DATE [ ANSI | VMS ]
    | TIME scale-clause
    | TIMESTAMP scale-clause
    | INTERVAL interval-qualifier }

interval-qualifier =

    { YEAR numeric-size-clause [ TO MONTH ]
    | MONTH numeric-size-clause
    | DAY numeric-size-clause
          [ TO { HOUR | MINUTE | SECOND scale-clause } ]
    | HOUR numeric-size-clause
          [ TO { MINUTE | SECOND scale-clause } ]
    | MINUTE numeric-size-clause [ TO SECOND scale-clause ]
    | SECOND seconds-clause }

scale-clause =

    [ SCALE <numeric-literal> ]

numeric-size-clause =

    [ [ SIZE IS ] <numeric-literal> [ DIGITS ] ]

seconds-clause =

    [ [ SIZE IS ] <numeric-literal-1> [ DIGITS ]
      [ SCALE <numeric-literal-2> ] ]
Note that SCALE values must be between 0 and -2 and that SIZE IS
values must be between 2 and 9.
The following are examples of typical field definitions for date-
time data types in .rrd files:
DEFINE FIELD A DATATYPE IS DATE.
DEFINE FIELD B DATATYPE IS DATE ANSI.
DEFINE FIELD C DATATYPE IS INTERVAL DAY SIZE IS 2 DIGITS.
DEFINE FIELD D DATATYPE IS INTERVAL DAY SIZE IS 2 DIGITS TO HOUR.
DEFINE FIELD E DATATYPE IS INTERVAL DAY SIZE IS 2 DIGITS TO
SECOND SCALE -2.
DEFINE FIELD F DATATYPE IS INTERVAL HOUR SIZE IS 4 DIGITS.
DEFINE FIELD G DATATYPE IS INTERVAL HOUR SIZE IS 2 DIGITS TO MINUTE.
DEFINE FIELD H DATATYPE IS INTERVAL MINUTE SIZE IS 2 DIGITS TO
SECOND SCALE -2.
DEFINE FIELD I DATATYPE IS INTERVAL SECOND SIZE IS 2 DIGITS SCALE -2.
DEFINE FIELD J DATATYPE IS TIME.
DEFINE FIELD K DATATYPE IS TIME SCALE -1.
DEFINE FIELD L DATATYPE IS TIMESTAMP SCALE -2.
DEFINE FIELD M DATATYPE IS INTERVAL YEAR SIZE IS 3 DIGITS TO MONTH.
38 – Using LogMiner for Rdb
Oracle Rdb after-image journal (.aij) files contain a wealth
of useful information about the history of transactions in a
database. After-image journal files contain all of the data
needed to perform database recovery. These files record every
change made to data and metadata in the database. The LogMiner
for Rdb feature provides an interface to the data record contents
of Oracle Rdb after-image journal files. Data records that are
added, updated, or deleted by committed transactions may be
extracted (unloaded) from the .aij files in a format suitable
for subsequent loading into another database or for use by user-
written application programs.
Oracle Rdb after-image journaling protects the integrity of your
data by recording all changes made by committed transactions to a
database in a sequential log or journal file. Oracle Corporation
recommends that you enable after-image journaling to record your
database transaction activity between full backup operations
as part of your database restore and recovery strategy. The
after-image journal file is also used to enable several database
performance enhancements (such as the fast commit, row cache, and
hot standby features).
See the Oracle Rdb7 Guide to Database Maintenance for more
information about setting up after-image journaling.
To use LogMiner for Rdb, follow these steps:
1. Enable the database for LogMiner operation using the RMU Set
Logminer command. See Set Logminer for additional information.
2. Back up the after-image journal file using the Quiet_Point
qualifier to the RMU Backup command.
3. Extract changed records using the RMU Unload After_Journal
command. See the Unload After_Journal help topic for
additional information.
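As a sketch, the three steps above might be entered as the following DCL commands; the database name MF_PERSONNEL, the backup file name AIJ_BACKUP.AIJ, and the table name EMPLOYEES are illustrative placeholders only:

```
$ ! 1. Enable the database for LogMiner operation
$ RMU/SET LOGMINER/ENABLE MF_PERSONNEL
$ ! 2. Back up the after-image journal at a quiet point
$ RMU/BACKUP/AFTER_JOURNAL/QUIET_POINT MF_PERSONNEL AIJ_BACKUP.AIJ
$ ! 3. Extract committed changes for one table from the backup file
$ RMU/UNLOAD/AFTER_JOURNAL MF_PERSONNEL AIJ_BACKUP.AIJ -
      /TABLE=(NAME=EMPLOYEES, OUTPUT=EMPLOYEES.DAT)
```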
38.1 – Restrictions and Limitations
The following restrictions exist for the LogMiner for Rdb
feature:
o Temporary tables cannot be extracted. Modifications to
temporary tables are not written to the after-image journal
file and, therefore, are not available to LogMiner for Rdb.
o Optimized after-image journal files cannot be used as input
to the LogMiner for Rdb. Information needed by the RMU Unload
After_Journal command is removed by the optimization process.
o Records removed from tables using the SQL TRUNCATE TABLE
statement are not extracted. The SQL TRUNCATE TABLE statement
does not journal each individual data record being removed
from the database.
o Records removed by dropping tables using the SQL DROP TABLE
statement are not extracted. The SQL DROP TABLE statement does
not journal each individual data record being removed from the
database.
o Tables that use the vertical record partitioning (VRP) feature
cannot be extracted using LogMiner for Rdb. LogMiner software
currently does not detect these tables. A future release
of Oracle Rdb will detect and reject access to vertically
partitioned tables.
o Segmented string data (BLOB) cannot be extracted using
LogMiner for Rdb. Because the segmented string data is
related to the base table row by means of a database key,
there is no convenient way to determine what data to extract.
Additionally, the data type of an extracted column is changed
from LIST OF BYTE VARYING to BIGINT. This column contains the
DBKEY of the original BLOB data. Therefore, the contents of
this column should be considered unreliable.
o COMPUTED BY columns in a table are not extracted. These
columns are not stored in the after-image journal file.
o VARCHAR fields are not space padded in the output file. The
VARCHAR data type is extracted as a 2-byte count field and a
fixed-length data field. The 2-byte count field indicates the
number of valid characters in the fixed-length data field. Any
additional contents in the data field are unpredictable.
o You cannot extract changes to a table when the table
definition is changed within an after-image journal file.
Data definition language (DDL) changes to a table are not
allowed within an .aij file being extracted. All records in an
.aij file must be the current record version. If you are going
to perform DDL operations on tables that you wish to extract
using the LogMiner for Rdb, you should:
1. Back up your after-image journal files.
2. Extract the .aij files using the RMU Unload After_Journal
command.
3. Make the DDL changes.
o Do not use the OpenVMS Alpha High Performance Sort/Merge
utility (selected by defining the logical name SORTSHR to
SYS$SHARE:HYPERSORT) when using LogMiner for Rdb. HYPERSORT
supports only a subset of the library sort routines that
LogMiner requires. Make sure that the SORTSHR logical name
is not defined to HYPERSORT.
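For example, before running an extraction you might confirm that SORTSHR does not translate to HYPERSORT, and remove the definition if it does (a system-table definition is assumed here):

```
$ SHOW LOGICAL SORTSHR
$ ! If the logical translates to SYS$SHARE:HYPERSORT, remove it:
$ DEASSIGN/SYSTEM SORTSHR
```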
38.2 – Information Returned
LogMiner for Rdb appends several output fields to the data
fields, creating an output record. The output record contains
fixed-length fields in a binary data format (that is, integer
fields are not converted to text strings). The data fields
correspond to the extracted table columns. This information
may or may not be required by all applications and readers of
the data. There is currently no available method to restrict or
reorder the output fields.
Extracted data field contents are the fields that are actually
stored in the Oracle Rdb database. COMPUTED BY fields are not
extracted because they are not stored in the database or in the
after-image journal file. Segmented string (BLOB) contents are
not extracted.
Output Fields describes the output fields and data types of an
output record.
Table 23 Output Fields
Field Name    Data Type    Byte Length  Description
ACTION CHAR (1) 1 Indicates record state.
"M" indicates an insert or
modify action. "D" indicates a
delete action. "E" indicates
stream end-of-file (EOF)
when a callback routine is
being used. "P" indicates
a value from the command
line Parameter qualifier
when a callback routine is
being used (see Parameter
qualifier). "C" indicates
transaction commit information
when the Include=Action=Commit
qualifier is specified.
RELATION_ CHAR (31) 31 Table name. Space padded to 31
NAME characters.
RECORD_TYPE INTEGER 4 The Oracle Rdb internal
(Longword) relation identifier.
DATA_LEN SMALLINT 2 Length, in bytes, of the data
(Word) record content.
NBV_LEN SMALLINT 2 Length, in bits, of the null
(Word) bit vector content.
DBK BIGINT 8 Record's logical database key.
(Quadword) The database key is a 3-field
structure containing a 16-
bit line number, a 32-bit
page number and a 16-bit area
number.
START_TAD DATE VMS 8 Date-time of the start of the
(Quadword) transaction.
COMMIT_TAD DATE VMS 8 Date-time of the commitment of
(Quadword) the transaction.
TSN BIGINT 8 Transaction sequence number of
(Quadword) the transaction that performed
the record operation.
RECORD_ SMALLINT 2 Record version.
VERSION (Word)
Record Data Varies Actual data record field
contents.
Record NBV BIT VECTOR Null bit vector. There is
(array of one bit for each field in the
bits) data record. If a bit value
is 1, the corresponding field
is NULL; if a bit value is
0, the corresponding field
is not NULL and contains an
actual data value. The null
bit vector begins on a byte
boundary. Any extra bits in
the final byte of the vector
after the final null bit are
unused.
38.3 – Record Definition Prefix
An RMS file containing the record structure definition for the
output file can be used as an input file to the RMU Load command
if extracted data is to be loaded into an Oracle Rdb database.
The record description uses the CDO record and field definition
format (this is the format used by the RMU Load and RMU Unload
commands when the Record_Definition qualifier is used). The
default file extension is .rrd.
The record definition for the fields that LogMiner for Rdb
writes to the output is shown in the following example. These
fields can be manually appended to a record definition file
for the actual user data fields being unloaded. Alternatively,
the Record_Definition qualifier can be used with the Table
qualifier or within an Options file to automatically create the
record definition file. This can be used to load a transaction
table within a database. A transaction table is the output that
LogMiner for Rdb writes to a table consisting of sequential
transactions performed in a database.
DEFINE FIELD RDB$LM_ACTION DATATYPE IS TEXT SIZE IS 1.
DEFINE FIELD RDB$LM_RELATION_NAME DATATYPE IS TEXT SIZE IS 31.
DEFINE FIELD RDB$LM_RECORD_TYPE DATATYPE IS SIGNED LONGWORD.
DEFINE FIELD RDB$LM_DATA_LEN DATATYPE IS SIGNED WORD.
DEFINE FIELD RDB$LM_NBV_LEN DATATYPE IS SIGNED WORD.
DEFINE FIELD RDB$LM_DBK DATATYPE IS SIGNED QUADWORD.
DEFINE FIELD RDB$LM_START_TAD DATATYPE IS DATE.
DEFINE FIELD RDB$LM_COMMIT_TAD DATATYPE IS DATE.
DEFINE FIELD RDB$LM_TSN DATATYPE IS SIGNED QUADWORD.
DEFINE FIELD RDB$LM_RECORD_VERSION DATATYPE IS SIGNED WORD.
38.4 – SQL Table Definition Prefix
The SQL record definition for the fields that LogMiner for Rdb
writes to the output is shown in the following example. These
fields can be manually appended to the table creation command
for the actual user data fields being unloaded. Alternatively, the
Table_Definition qualifier can be used with the Table qualifier
or within an Options file to automatically create the SQL
definition file. This can be used to create a transaction table
of changed data.
SQL> create table MYLOGTABLE (
cont> RDB$LM_ACTION CHAR,
cont> RDB$LM_RELATION_NAME CHAR (31),
cont> RDB$LM_RECORD_TYPE INTEGER,
cont> RDB$LM_DATA_LEN SMALLINT,
cont> RDB$LM_NBV_LEN SMALLINT,
cont> RDB$LM_DBK BIGINT,
cont> RDB$LM_START_TAD DATE VMS,
cont> RDB$LM_COMMIT_TAD DATE VMS,
cont> RDB$LM_TSN BIGINT,
cont> RDB$LM_RECORD_VERSION SMALLINT ...);
38.5 – Segmented String Columns
Segmented string (also called BLOB or LIST OF BYTE VARYING)
column data is not extracted. However, the field definition
itself is extracted as a quadword integer representing the
database key of the original segmented string data. In generated
table definition or record definition files, a comment is added
indicating that the segmented string data type is not supported
by LogMiner for Rdb.
38.6 – Maintenance
Lengthy offline application or database maintenance operations
can pose a significant problem in high-availability production
environments. The LogMiner for Rdb feature can help reduce the
length of downtime to a matter of minutes.
If a backup of the database is used for maintenance operations,
the application can continue to be modified during lengthy
maintenance operations. Once the maintenance is complete,
the application can be shut down, the production system .aij
file or files can be backed up, and LogMiner for Rdb can be
used to extract changes made to production tables since the
database was backed up. These changes can then be applied (using
an application program or the trigger technique previously
described) to the new database. Once the new database has been
updated, the application can be restarted using the new database.
The sequence of events required would be similar to the
following:
1. Perform a full online, quiet-point database backup of the
production database.
2. Restore the backup to create a new database that will
eventually become the production database.
3. Perform maintenance operations on the new database. (Note that
the production system continues to run.)
4. Perform an online, quiet-point after-image journal backup of
the production database.
5. Use the RMU Unload After_Journal command to unload all
database tables into individual output files from the .aij
backup file.
6. Using either the trigger technique or an application program,
update the tables in the new database with the changed data.
7. Shut down the production application and close the database.
8. Perform an offline, quiet-point after-image journal backup of
the production database.
9. Use the RMU Unload After_Journal command to unload all
database tables into individual output files from the .aij
backup file.
10. Using either the trigger technique or an application program,
update the tables in the new database with the changed data.
11. Start an online, quiet-point backup of the new database.
12. Change logical names or the environment to specify the new
database root file as the production database.
13. Restart the application on the new database.
Depending on the amount of application database activity, steps
4, 5, and 6 can be repeated to limit the amount of data that
needs to be applied (and the amount of downtime required) during
the final after-image journal backup and apply stage in steps 8,
9, and 10.
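A skeleton of the early steps of this sequence in DCL might look as follows; the database and file names are illustrative placeholders, and the restore destination would depend on your environment:

```
$ ! Steps 1 and 2: full quiet-point backup, restored as the new database
$ RMU/BACKUP/ONLINE/QUIET_POINT PROD.RDB PROD_FULL.RBF
$ RMU/RESTORE PROD_FULL.RBF /DIRECTORY=[MAINT] /NOLOG
$ ! ... perform maintenance on the new database; production keeps running ...
$ ! Steps 4 and 5: back up the production .aij and unload the changes
$ RMU/BACKUP/AFTER_JOURNAL/QUIET_POINT PROD.RDB PROD_AIJ.AIJ
$ RMU/UNLOAD/AFTER_JOURNAL PROD.RDB PROD_AIJ.AIJ -
      /TABLE=(NAME=MYTBL, OUTPUT=MYTBL.DAT)
```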
38.7 – OpenVMS Pipe
You can use an OpenVMS pipe to pass data from the RMU Unload
After_Journal command to another application (for example,
RMU Load). Do not use any options (such as the Log or Verify
qualifiers) that could cause LogMiner to send extra output to the
SYS$OUTPUT device, as that information would be part of the input
data source stream to the next pipeline segment.
You may find that the OpenVMS default size of the pipe is too
small if the records being extracted (including LogMiner fields)
are larger than 256 bytes. If the pipe is too small, increase
the SYSGEN parameters MAXBUF and DEFMBXMXMSG, and then reboot the
system.
The following example uses LogMiner for Rdb to direct output
to an OpenVMS pipe device and uses RMU Load to read the pipe
device as the input data record stream. Using the pipeline allows
parallel processing and also avoids the need for an intermediate
disk file. Note that you must have created the record definition
(.rrd) file prior to executing the command.
$ PIPE (RMU /UNLOAD /AFTER_JOURNAL OLTP.RDB AIJ1.AIJ -
/TABLE = (NAME = MYTBL, OUTPUT = SYS$OUTPUT:)) -
| (RMU /LOAD REPORTS.RDB MYLOGTBL SYS$PIPE: -
/RECORD_DEFINITION = FILE = MYLOGTBL.RRD)
39 – Hot Standby
Oracle Corporation offers an optional Hot Standby software
solution that you can use to implement a standby database for
mission-critical and disaster recovery functions.
A standby database is a second running database that is created
from and transactionally consistent with the primary or master
database. Data modifications that are made to the master database
are made simultaneously to the secondary database. The secondary
database is sometimes referred to as a hot standby database
because it is available immediately to pick up application
processing if the primary database system fails.
The Hot Standby software prevents your Oracle Rdb database or
Oracle CODASYL DBMS database from becoming a single point of
failure by replicating the master database to a standby database.
The Hot Standby software automatically performs coordinated
database synchronization and verification with high performance
and minimal impact on the master system resources.
39.1 – Replicate After Journal Commands
This topic provides the syntax and semantics for the following
Replicate commands and their parameters and qualifiers:
o Replicate After_Journal Configure
o Replicate After_Journal Reopen_Output
o Replicate After_Journal Start
o Replicate After_Journal Stop
These commands are available using either Oracle RMU (the Oracle
Rdb database management utility) or DBO, the Oracle CODASYL DBMS
Database Operator utility. This Help utility describes the Oracle
RMU command syntax only.
39.1.1 – Configure
Allows you to preconfigure many of the master and standby
database attributes (using qualifiers available with the
Replicate After_Journal Start command) without starting
replication operations.
You enter the Replicate After_Journal Configure command:
o On the master database to prespecify the Replicate After_
Journal Start command qualifiers that are valid for the
master database and store the qualifier settings in the master
database root file
o On the standby database to prespecify the Replicate After_
Journal Start command qualifiers that are valid for the
standby database and store the qualifier settings in the
standby database root file
Because the database attributes are stored in the respective
database root files, the settings do not take effect until you
start replication operations with the Replicate After_Journal
Start command.
39.1.1.1 – Description
The Replicate After_Journal Configure command is an optional
command you can use to preconfigure the master and standby
databases, one database at a time.
NOTE
You cannot preconfigure both the master and standby database
attributes in a single Replicate After_Journal Configure
command. Moreover, you cannot enter the Replicate After_
Journal Configure command on the standby database to
preconfigure master database attributes, or preconfigure
standby database attributes from the master database.
You can specify one or more of the following qualifiers when you
enter the Replicate After_Journal Configure command on the master
database:
Master Database
Qualifiers
Alt_Remote_Node (1)
Checkpoint
Connect_Timeout
[No]Log
[No]Quiet_Point
Reset
Standby_Root (2)
Synchronization
Transport
Footnote (1): You must also specify the Standby_Root qualifier.
Footnote (2): You must specify the Standby_Root qualifier the
first time you configure the master database.
The master database attributes that you specify are stored in the
master database root file. (You cannot specify the Wait, NoWait,
and Output qualifiers on the Replicate After_Journal Configure
command. You can specify these qualifiers when you invoke the
Replicate After_Journal Start command.)
You can specify one or more of the following qualifiers when
you enter the Replicate After_Journal Configure command on the
standby database:
Standby Database
Qualifiers
Buffers
Checkpoint
Gap_Timeout
Governor
[No]Log
Master_Root (1)
[No]Online
Reset
Footnote (1): You must specify the Master_Root qualifier the
first time you configure the standby database.
The standby database attributes that you specify are stored in
the standby database root file. (You cannot specify the Wait,
NoWait, and Output qualifiers on the Replicate After_Journal
Configure command. You can specify these qualifiers when you
invoke the Replicate After_Journal Start command.)
You should use the Replicate After_Journal Configure command if
you want to:
o Preset qualifier values that you typically specify on the
Replicate After_Journal Start command, but without starting
replication operations.
The values you specify become the new default qualifier values
that are stored in the database root file.
o Be able to quickly start replication operations by invoking
a single Replicate After_Journal Start command on the master
database.
If you use the Replicate After_Journal Configure command
to preconfigure the master and standby databases, you can
start replication for both databases by entering one Replicate
After_Journal Start command on the master database.
For example, if you have preconfigured both the master and
standby databases and then invoke the Replicate After_Journal
Start command on the master database node, the Hot Standby
software:
1. Starts replication operations on the master database using
default qualifier values from the master database root file
2. Creates the network connection to the standby database
3. Attaches the master and standby databases to the network
4. Starts replication operations on the standby database using
default qualifier values in the standby database root file
5. Synchronizes committed transactions on the master and standby
databases
NOTE
If you have not preconfigured database attributes using
the Replicate After_Journal Configure command, the Hot
Standby software uses either the system-supplied defaults
or the values that you specified on a previous Replicate
After_Journal Start command.
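Under these rules, a minimal preconfiguration might be entered as follows; the node and file names are placeholders, and each Configure command is issued locally against its own database root:

```
$ ! On the master node:
$ RMU/REPLICATE AFTER_JOURNAL CONFIGURE MASTER_DB.RDB -
      /STANDBY_ROOT=STBYND::DISK1:[DB]STANDBY_DB.RDB /QUIET_POINT
$ ! On the standby node:
$ RMU/REPLICATE AFTER_JOURNAL CONFIGURE STANDBY_DB.RDB -
      /MASTER_ROOT=MSTRND::DISK1:[DB]MASTER_DB.RDB
$ ! Later, start replication for both with one command on the master:
$ RMU/REPLICATE AFTER_JOURNAL START MASTER_DB.RDB
```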
39.1.1.2 – Format
Command Qualifiers                       Defaults                Usage

/Alt_Remote_Node=nodename                None                    M
/Buffers=rollforward-buffer-count        /Buffers=256            S
/Checkpoint=checkpoint-interval          /Checkpoint=100         B
/Connect_Timeout=minutes                 /Connect_Timeout=5      M
/Gap_Timeout=minutes                     /Gap_Timeout=5          S
/Governor=[Enabled|Disabled]             /Governor=Enabled       S
/[No]Log                                 /Nolog                  B
/Master_Root=master-root-file-spec       None                    S
/[No]Online                              /Noonline               S
/[No]Quiet_Point                         /Noquiet_Point          M
/Reset                                   None                    B
/Standby_Root=standby-root-file-spec     None                    M
/Synchronization=[Commit|Hot|Warm|Cold]  /Synchronization=Cold   M
/Transport={DECnet|TCP/IP}               None                    M

B=Both; M=Master; S=Standby
39.1.1.3 – Parameters
39.1.1.3.1 – database-rootfile
Specifies the name of the target database root file. For example,
if you want to preconfigure the master database attributes,
specify the master database root file. Similarly, you can specify
the standby database root file to preconfigure the standby
database.
NOTE
Do not include a node name when you specify the database-
rootfile parameter. This parameter must specify a locally
accessible database root file; the parameter cannot include
a remote file specification.
39.1.1.4 – Command Qualifiers
39.1.1.4.1 – Alt Remote Node
Identifies an available secondary link to the standby database.
Applicable to: Master database
Required or Optional: Optional
Default Value: None
The Alt_Remote_Node qualifier is used to provide the master
database with uninterrupted hot standby replication in case of
network failure where multiple network links are available. It
can only be used in conjunction with the Standby_Root qualifier,
which specifies the standby database node name.
The Alt_Remote_Node qualifier identifies the alternate remote
node name of the standby database. Following network failure,
the master database automatically attempts to reconnect to
the standby database using the alternate remote node name
information. If the Alt_Remote_Node qualifier is not specified,
the master database does not automatically attempt to reconnect
to the standby database using the original remote node name
specified using the Standby_Root qualifier.
The alternate node name can be the same as the node name
specified with the Standby_Root qualifier. The node name
specified by the Alt_Remote_Node qualifier must identify the
same standby database on the same remote node as originally
specified using the Standby_Root qualifier. The maximum length
of the alternate remote node name is 31 characters.
At run-time, the RDM$BIND_HOT_NETWORK_ALT_NODE logical name
can be defined in the LNM$SYSTEM_TABLE table to override any
alternate remote node name information specified at hot standby
startup. The logical must be specified on all nodes where the
master database is open.
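For example, the following definition (which places the logical name in LNM$SYSTEM_TABLE) could be entered on each node where the master database is open; the node name ALTNOD is a placeholder:

```
$ DEFINE/SYSTEM/EXECUTIVE_MODE RDM$BIND_HOT_NETWORK_ALT_NODE "ALTNOD"
```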
39.1.1.4.2 – Buffers
Buffers=rollforward-buffer-count
Specifies the number of database buffers available to roll after-
image journals forward to the standby database.
Applicable to: Standby database
Required or Optional: Optional

                   Local Buffers    Global Buffers
Default Value:     4096             4096 or Global Buffer USER LIMIT,
                                    whichever is smaller
Minimum Value:     2                2
Maximum Value:     1,048,576        1,048,576 or Global Buffer USER LIMIT,
                                    whichever is smaller
During replication operations, the LRS process on the standby
node receives after-image journal records from the master
database and rolls them forward to the standby database.
You can use the optional Buffers qualifier to override the
default number of database buffers.
For optimal performance, you should allocate a sufficient number
of buffers so that the server process can roll the after-image
journal records forward with a minimum number of I/O operations.
To estimate an appropriate number of buffers, use the following
equation as a starting point:
(Number of Modified Buffers per Transaction * Number of Users) + 20%
For example, if the average number of modified buffers per
transaction is 10 and there are 100 users on the database, then
the server process needs 1000 buffers at one time. To ensure that
you have an adequate number of buffers, add another 20 percent
(200 buffers) for a total of 1200 buffers.
The number of buffers can impact the time it takes for the LRS
process to checkpoint. When a checkpoint occurs, the LRS must
write all modified buffers to disk. For example, if the LRS
is using 2000 buffers, and it takes one second for 2000 disk
writes to complete, the LRS will be stalled for one second while
those writes are being done. This could cause the Hot Standby
governor to increase the synchronization mode if there is a lot
of update activity occurring while the LRS is checkpointing. For
some applications this could impose a practical limitation in the
number of buffers allocated to the LRS.
NOTE
The LRS process on the standby database does not use buffer
values defined by the following:
o DBM$BIND_BUFFERS logical name
o RDB_BIND_BUFFERS configuration parameter
o RDM$BIND_BUFFERS logical name
When replication operations are active, you can use the RMU or
DBO Show Users command to see the current number of database
buffers allocated. If replication operations are not active or
if you want to see the buffer value that was set on a previous
Replicate After_Journal Start command (stored in the database
root file), you can also use the Header and Dump_Select_Type=Hot_
Standby qualifiers on the RMU or DBO Dump command.
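Using the qualifiers named above, the two checks might look like this for an Oracle Rdb standby database (the database name is a placeholder):

```
$ ! While replication is active, show allocated buffers:
$ RMU/SHOW USERS STANDBY_DB.RDB
$ ! Otherwise, inspect the value stored in the database root file:
$ RMU/DUMP/HEADER/DUMP_SELECT_TYPE=HOT_STANDBY STANDBY_DB.RDB
```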
39.1.1.4.3 – Checkpoint
Checkpoint=checkpoint-interval
Specifies, in terms of processed messages, how frequently the
Hot Standby servers update information in the database root file.
This qualifier can be set to different values on the master and
standby databases.
Applicable to: Master and standby database
Required or Optional: Optional
Default Value: 100 messages
Minimum Value: 1 message
Maximum Value: 1024 messages
By default, the Hot Standby servers automatically perform
checkpoint operations on both the master and standby databases
after every 100 messages are processed. Checkpoints are essential
to database availability because they:
o Enable the Hot Standby software to restart database
replication operations more quickly in the event of a failure
because frequent checkpoints limit the number of transactions
that must be redone if a process or system fails.
o Cause all modified database cache buffers on the standby
database to be flushed to the disk, making the buffers
available for access by other users (when online database
access is enabled).
o Improve the redo performance of the database recovery (DBR)
process.
o Allow after-image backup operations to back up older after-
image journals on the master database.
NOTE
In addition to performing checkpoint operations specified
by the Checkpoint qualifier, the replication servers on
the master database also checkpoint automatically after
the following events:
o After two minutes of inactivity
o After a switchover to a new after-image journal (when
you are using circular after-image journals)
o After an AIJ backup operation (when you are using
extensible after-image journals)
On the standby database, the LRS process checkpoints
after two minutes of inactivity if data has been
processed since the last checkpoint.
These automatic checkpoints advance the oldest active
checkpoint indicator to make older after-image journals
available for backup operations. You cannot change or
override these checkpoint intervals.
The default checkpoint interval usually is sufficient to
effectively maintain synchronization between the master and
standby database root files. However, you can override the
default checkpoint interval by specifying the Checkpoint
qualifier when you start replication on the master database,
the standby database, or both.
For example, if you specify the qualifier Checkpoint=300 on the
standby database, the LRS server process updates information
in the standby database root file after every 300 messages
are exchanged between the master and the standby database. The
following table describes how the frequency of the checkpoint
operation can affect database synchronization.
Table 24 Setting the Frequency of Checkpoint Intervals
If you specify . . . Then . . .
A small checkpoint The Hot Standby software synchronizes the
interval database root files more often, but uses
less time to restart replication because
fewer transactions need to be redone.
A large checkpoint The Hot Standby software synchronizes the
interval database root files less frequently, but
requires more time to restart replication
because more transactions must be redone.
In addition, the value you set for the checkpoint interval:
o Controls replication restart in the event of a failure on the
master database. A side effect of this is that the ABS process
cannot back up after-image journals that are needed to restart
replication operations.
o Affects how the after-image journals on the master database
become available for backup.
Specifying a large value for the checkpoint interval can
cause after-image journal backup operations to stall until
the appropriate after-image journal file becomes available for
a backup operation. This is because the after-image journal
backup operation cannot back up any after-image journal file
that is required for process recovery or replication restart.
o Affects the reinitialization of after-image journals on the
standby database.
o Affects the manner in which the LRS process on the standby
database:
- Releases page locks
- Responds to page lock conflict messages from another
attached database process
Oracle Corporation recommends that you set a reasonably small
checkpoint interval for the standby database. Specifying a
checkpoint interval that is too large can prevent the LRS
process from responding to requests for pages, and it is
possible for other processes to become stalled.
For Oracle Rdb databases, you can monitor the effectiveness of
the current setting of the Checkpoint qualifier by using the RMU
Show Statistics command and examining the Checkpoint Information
display.
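For example, to preconfigure a 300-message checkpoint interval on the standby database and then monitor its effect once replication is running (the database name is a placeholder):

```
$ RMU/REPLICATE AFTER_JOURNAL CONFIGURE STANDBY_DB.RDB /CHECKPOINT=300
$ ! After replication starts, examine the Checkpoint Information display:
$ RMU/SHOW STATISTICS STANDBY_DB.RDB
```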
39.1.1.4.4 – Connect Timeout
Connect_Timeout=minutes
Specifies the maximum number of minutes that the LCS process on
the master database waits for a network connection to the LRS
process on the standby database.
Applicable to: Master database
Required or Optional: Optional
Default Value: 5 minutes
Minimum Value: 1 minute
Maximum Value: 4320 minutes (3 days)
When you start replication on the master database (before
starting it on the standby database):
1. The Hot Standby software invokes the log catch-up server (LCS)
process on the master database.
2. The LCS process invokes its corresponding network AIJSERVER
process on the standby node.
3. The AIJSERVER process attempts to create a network connection
to the LRS process on the standby node.
By default, the LCS process allows 5 minutes for the AIJSERVER
to connect to the LRS process. You can override the default
by specifying the Connect_Timeout qualifier when you start
replication on the master database. (Note that if you specify
the Connect_Timeout qualifier, you must specify a time value (in
minutes).)
The Connect_Timeout qualifier is useful when you start
replication operations on the master database before you
start replication on the standby database. This is because the
Connect_Timeout qualifier allows sufficient time for the network
connection to be made before the LCS process begins sending
after-image journal records across the network.
NOTE
While the LCS process on the master database waits for the
replication activity to begin on the standby database, users
and applications can continue to access and modify data in
the master database.
39.1.1.4.5 – Gap Timeout
Gap_Timeout=minutes
Specifies the maximum number of minutes that the standby database
(LRS process) should wait for a gap in the replication data
sequence to be resolved.
Applicable to: Standby database
Required or Optional: Optional
Default Value: 5 minutes
Minimum Value: 1 minute
Maximum Value: 4320 minutes (3 days)
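For example, the following command (the database, node, and
directory names are illustrative) starts replication on the
standby database and allows the LRS process up to 10 minutes for
a gap in the replication data sequence to be resolved:
$ RMU/REPLICATE AFTER_JOURNAL START standby_db -
_$ /MASTER_ROOT=MASNOD::DISK1:[USER]master_db -
_$ /GAP_TIMEOUT=10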
39.1.1.4.6 – Governor
Governor=Enabled
Governor=Disabled
Enables or disables the replication governor.
Applicable to: Standby database
Required or Optional: Optional
Default Value: Governor=Enabled
The purpose of the replication governor is to coordinate database
replication operations automatically between the master and the
standby databases.
For more information see the Governor qualifier discussion under
the Replicate_After_Journal_Commands Start Help topic.
39.1.1.4.7 – Log
Log
Nolog
Indicates whether or not to log the status of, and information
about, activities when you start replication operations.
Applicable to: Master and standby database
Required or Optional: Optional
Default Value: Nolog
If you specify the Log qualifier, output showing the status
of the replication startup is logged to SYS$OUTPUT on OpenVMS
systems.
Oracle Corporation recommends that you specify the Log qualifier.
39.1.1.4.8 – Master Root
Master_Root=master-rootfile
Identifies the name of the master database root file from
which the replication servers on the standby database receive
replication data.
Applicable to: Standby database
Required or Required the first time you enter the
Optional: Replicate After_Journal Start command and
any time you specify other Replication
Startup qualifiers. Optional on all subsequent
invocations.
Default Value: None.
You must include the Master_Root qualifier the first time you
enter the Replicate After_Journal Start command (unless you have
preconfigured the Master_Root qualifier using the Replication
After_Journal Configure command). This ensures that the standby
database uses the master database you specify as the source of
the replication operations. If you omit the Master_Root qualifier
on subsequent Replicate After_Journal Start commands, the Hot
Standby software retrieves the master database name from the
header information in the database root file.
Whenever you specify the Master_Root qualifier, you must do the
following to ensure the command executes successfully:
o Specify the name of the master database root file.
Do not specify the name of the standby database on the Master_
Root qualifier. Any attempt to use a restored database as a
master database causes replication startup operations to fail.
o Include a node name and directory path for remote network
communications.
You can define a logical name to identify the master node.
o Be able to access the master database.
When the master database node is configured in a VMScluster
system, the node name you specify with the Master_Root
qualifier can be any participating node from which the master
database can be accessed. Cluster aliases are acceptable when
you use the Master_Root qualifier.
The master and standby databases communicate using network
communications (for remote database access) or interprocess
communications (for local database access) according to how you
specify the master database name. The following table describes
how the Hot Standby software chooses the method of communication:
If . . . Then . . .
You include a node The Hot Standby software uses remote
name when you specify network communications to receive the
the master database after-image journal log changes, unless
root file the specified node is the current node
You do not include The Hot Standby software uses local
a node name when you interprocess communications to receive
specify the master the after-image journal log changes
database root file
The Hot Standby software compares and verifies the master
database (that you specify with the Master_Root qualifier)
against the standby database (that you specify with the Standby_
Root qualifier when you start replication operations on the
master database). This verification ensures that both databases
are identical transactionally.
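For example, the following command (the node, directory, and
database names are illustrative) starts replication on the
standby database and identifies the master database as the
source of the replication data:
$ RMU/REPLICATE AFTER_JOURNAL START standby_personnel -
_$ /MASTER_ROOT=MASNOD::DISK1:[USER]mf_personnel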
39.1.1.4.9 – Online
Online
Noonline
Allows or disallows users and applications to be on line
(actively attached) to the standby database.
Applicable to: Standby database
Required or Optional: Optional
Default Value: Noonline
Online database access means that database users and applications
can be actively attached (and perform read-only transactions)
to the standby database before, during, and after replication
operations.
The default setting (Noonline) prevents applications and
users from attaching to the standby database during replication
operations. However, if the standby database is open on another
node (and thus an ALS process is active on that node), the LRS
process cannot start replication on the standby database, and
the error message STBYDBINUSE is returned.
NOTE
If record caching is enabled on the standby database, the
Hot Standby software assumes the Online setting. Specifying
the Noonline qualifier on the Replicate After_Journal Start
command has no effect. Because record caching requires
the record cache server to be an online server, you cannot
override the Online setting.
Because the Replicate After_Journal Start command fails if you
enter it on a standby node where read/write transactions are in
progress (including prestarted read/write transactions), Oracle
Corporation recommends that you choose the Noonline (default)
setting.
The Online and Noonline qualifiers do not affect access to the
master database.
39.1.1.4.10 – Quiet Point
Quiet_Point
Noquiet_Point
Determines whether or not the log catch-up server (LCS) process
acquires a transactional quiet point during the database
synchronization phase of the replication restart procedure.
Applicable to: Master database
Required or Optional: Optional
Default Value: Noquiet_Point
Oracle Corporation recommends using the Quiet_Point qualifier
because it makes it easier to restart replication operations.
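For example, the following command (the database, node, and
directory names are illustrative) starts replication on the
master database and acquires a transactional quiet point during
the database synchronization phase:
$ RMU/REPLICATE AFTER_JOURNAL START mf_personnel -
_$ /STANDBY_ROOT=REMNOD::DISK1:[USER]standby_personnel -
_$ /QUIET_POINT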
39.1.1.4.11 – Reset
Resets previously configured information.
Applicable to: Master and standby database
Required or Optional: Optional
39.1.1.4.12 – Standby Root
Standby_Root=standby-rootfile
Identifies the name of the standby database root file to which
the replication servers on the master database send replication
data.
Applicable to: Master database
Required or Required the first time you enter the
Optional: Replicate After_Journal Start command and
any time you specify other Replication Startup
qualifiers. Optional on all other invocations.
Default Value: None
You must include the Standby_Root qualifier the first time you
enter the Replicate After_Journal Start command (unless you have
preconfigured the Standby_Root qualifier using the Replication
After_Journal Configure command). This ensures that the master
database communicates with the standby database you specify
as the recipient of replication operations. If you omit the
Standby_Root qualifier on subsequent Replicate After_Journal
Start commands, the Hot Standby software retrieves the standby
database name from the header information in the database root
file.
Whenever you specify the Standby_Root qualifier, you must do the
following to ensure the command executes successfully:
o Specify the name of the standby database root file.
o Include a node name and directory path for remote network
communications. (You can define a logical name to identify the
master node.)
NOTE
When the standby database is configured in a VMScluster
system, the node name you specify with the Standby_Root
qualifier cannot be a cluster alias.
o Be able to access the standby database.
o Ensure that the standby database is opened for access prior to
starting replication operations on the master database.
You must open the standby database manually unless you
preconfigured the standby database. If you preconfigured the
database, you can start replication on both the master and
standby databases by entering a single Replicate After_Journal
Start command on the master database. The master database
automatically opens the standby database, if necessary.
The master and standby databases communicate using network
communications (for remote database access) or interprocess
communications (for local database access) according to how you
specify the database name. The following table describes how the
Hot Standby software chooses the method of communication:
If . . . Then . . .
You specify a node The Hot Standby software uses remote
name (for access to a network communications to ship the after-
standby database on a image journal log changes, unless the
remote node) specified node is the current node
You do not specify a The Hot Standby software uses the
node name following communications to ship the
after-image journal log changes:
o Local interprocess communications on
the local node
o Remote network communications on all
other nodes and across the cluster
The Hot Standby software compares and verifies the master
database (that you specify with the Master_Root qualifier)
against the standby database (that you specify with the Standby_
Root qualifier). The purpose of this verification is to ensure
that both databases are identical transactionally.
39.1.1.4.13 – Synchronization
Synchronization=keyword
Specifies the degree to which you want to synchronize committed
transactions on the standby database with committed transactions
on the master database.
Applicable to: Master database
Required or Optional: Optional
Default Value: Synchronization=Cold
When you enable replication operations, server processes on the
master database write transactions to the after-image journal
for the master database and send them across the network to
the after-image journal for the standby database. The standby
database acknowledges the receipt of the transactional message
and handles after-image journaling depending on the mode you
have set with the Synchronization qualifier. The following table
describes the keywords you use to set the synchronization mode.
Table 25 Keywords for the Synchronization Qualifier
Commit
   Equivalence of committed transactions: When the standby
   database receives the AIJ information from the master
   database, the servers on the standby database:
   1. Write it to the after-image journal on the standby
      system
   2. Apply the AIJ to the standby database
   3. Send a message back to the master database
      acknowledging the successful commit of the transaction
   Performance impact on master database: Highest
   Standby database recoverability: The standby database is
   transactionally identical and recoverable with respect to
   the master database.
Hot
   Equivalence of committed transactions: When the standby
   database receives the AIJ information from the master
   database, the servers on the standby database:
   1. Write it to the AIJ on the standby system
   2. Send a message back to the master database before
      applying the transaction to the standby database
   Performance impact on master database: High
   Standby database recoverability: The standby database is
   extremely close to being transactionally identical to the
   master database. After-image journal records in transit
   are received and committed. Some restart processing may be
   required to synchronize the database.
Warm
   Equivalence of committed transactions: When the standby
   database receives the AIJ information from the master
   database, the servers on the standby database:
   o Send a message back to the master database before
     applying the transaction to either the AIJ or the
     standby database
   o Might not commit after-image journal records to the
     database
   Performance impact on master database: Medium
   Standby database recoverability: The standby database is
   transactionally close to the master database, but the
   databases are not identical. There may be transactions
   rolled back on the standby database that have been
   committed on the master database.
Cold (default)
   Equivalence of committed transactions: When the standby
   database receives the AIJ information from the master
   database:
   o The servers never return a message acknowledging the
     receipt of the AIJ information
   o In failover situations, it is possible that transactions
     rolled back on the standby database were committed on
     the master database
   Performance impact on master database: Low
   Standby database recoverability: The standby database is
   not immediately recoverable transactionally with respect
   to the master database. After-image journal records in
   transit could be lost.
For each level of database synchronization, you make a trade-off
between performance and how closely the standby and master
databases match each other with regard to committed transactions.
For example, the Synchronization=Cold level provides the fastest
performance for the master database, but the lowest level of
master and standby database synchronization. However, in some
business environments, this trade-off might be acceptable. In
such an environment, the speed of master database performance
outweighs the risk of losing recent transactions in the event of
failover; system throughput has greater financial importance and
impact than the value of individual AIJ records (transactions).
39.1.1.4.14 – Transport
Transport=DECnet
Transport=TCP/IP
Allows you to specify the network transport. The specified
transport, DECnet or TCP/IP, is saved in the database.
Applicable to: Master database
Required or Optional: Optional
The following example shows the use of this feature:
$ RMU/REPLICATE AFTER CONFIGURE /TRANSPORT=TCPIP -
_$ /STANDBY=REMNOD::DEV:[DIR]STANDBY_DB M_TESTDB
39.1.1.5 – Usage Notes
o The first time you configure the standby database, you must
include the Master_Root qualifier, and you must include the
Standby_Root qualifier the first time you configure the master
database.
You must preconfigure the Master_Root or Standby_Root
qualifiers because these qualifiers identify the "alternate"
database for the database being configured. These qualifiers
also identify whether a master or standby database is being
configured (if the Replicate After_Journal Configure command
includes the Master_Root qualifier, a standby database is
being configured). The Master_Root and Standby_Root qualifiers
are optional on subsequent replication configuration commands
because the value is stored in the database root file.
o You can include a node name with the Master_Root or Standby_
Root qualifiers.
o You cannot invoke the Replicate After_Journal Configure
command when replication operations are active.
o The RMU Backup command with the Continuous qualifier is not
supported when replication operations are active.
o You can override values you define with the Replicate After_
Journal Configure command (and other default values stored
in the database root file) by specifying qualifiers on the
Replicate After_Journal Start command.
o You cannot specify the Output qualifier on the Replicate
After_Journal Configure command. Therefore, if you need to
record Hot Standby server information to an output file when
you start replication operations from the master database,
specify an output file by:
- Including the Output qualifier on the Replicate After_
Journal Start command
- Defining the BIND_ALS_OUTPUT_FILE, BIND_HOT_OUTPUT_FILE,
BIND_LCS_OUTPUT_FILE, or BIND_LRS_OUTPUT_FILE logical name
NOTE
If you plan to start replication operations remotely
(for example, to start replication on the standby
database from the master database node), you must
have GROUP, WORLD, and SYSPRV privileges on OpenVMS
systems.
39.1.1.6 – Examples
Example 1
The following example shows how to use the Replicate After_
Journal Configure command to configure replication attributes
for the master database:
$ RMU/REPLICATE AFTER_JOURNAL CONFIGURE mf_personnel -
/STANDBY_ROOT=REMNOD::DISK1:[USER]standby_personnel -
/SYNCHRONIZATION=COLD -
/QUIET_POINT -
/CHECKPOINT=10 -
/CONNECT_TIMEOUT=1
39.1.2 – Reopen Output
Closes the current informational file and reopens it as a new
file. You can enter this command on either the master database
node (to reopen the output file that records LCS information) or
the standby database node (to reopen the output file that records
LRS information).
39.1.2.1 – Description
The Hot Standby software dynamically and transparently switches
from writing to the original output file to the new file. There
is no need to stop or interrupt database replication operations
during the transition to the new output file.
The Replicate After_Journal Reopen_Output command performs the
following steps to reopen the output file:
1. Closes the current output file in which information about
replication operations is recorded.
2. Reopens the output file by opening a new file using the
original output file name. On OpenVMS systems, the Hot
Standby software opens a new output file using the originally
specified file name and a new version number. Thus, you can
view the original output file by specifying the older version
number. If disk space is a problem, relocate the old output
file to another disk.
You can enter the Replicate After_Journal Reopen_Output command
on either the master or standby node as follows:
Enter the
command . . . To reopen the output file for the . . .
On the master LCS server on the master database
database node
On the standby LRS server on the standby database
database node
You must explicitly enable the ability to write replication
startup information to an output file by including the Output
qualifier when you start replication operations (see the
Replicate_After_Journal_Commands Start command for more
information), or by specifying the BIND_ALS_OUTPUT_FILE, BIND_
HOT_OUTPUT_FILE, BIND_LCS_OUTPUT_FILE, or BIND_LRS_OUTPUT_FILE
logical name.
The Replicate After_Journal Reopen_Output command is useful when:
o The output file becomes too large
For example, as the output file grows over time, you might run
out of disk space or notice that the database performance is
slow. You can use the Replicate After_Journal Reopen_Output
command to free up space on the disk. Once the new output
file is open, you should relocate the old output file to a new
location or delete the file.
If the disk that contains the output file becomes full, the
Hot Standby software stops writing information to the file
(and on OpenVMS systems, a message is sent to the system
operator). Note that replication operations continue, even
when write I/O to the output file stops.
o You want to view the currently open output file
By using the Replicate After_Journal Reopen_Output command,
you can capture a snapshot of the output file and examine
replication operations without interrupting processing. You
can also view the contents of the current output file using
the Type command at the OpenVMS system prompt.
NOTE
You cannot use the Replicate After_Journal Reopen_Output
command to change the size or location of the output
file; the command is intended to create a new version of
an existing output file.
o You want to open an output file for a server process that is
actively performing replication operations
Defining a logical name is useful if you omitted the Output
qualifier when you entered the Replicate After_Journal Start
command to start replication. While replication operations
are active, define the appropriate logical name and then
invoke the Replicate After_Journal Reopen_Output command.
This creates an output file so the server can start writing
to it. The advantage of defining a logical name is that you
do not need to stop and restart the server.
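For example, the following sequence (assuming the BIND_LRS_
OUTPUT_FILE logical name described above; the file specification
and database name are illustrative) defines an output file for
the LRS process and then reopens the output so the server begins
writing to it:
$ DEFINE BIND_LRS_OUTPUT_FILE DISK1:[LOGS]LRS_OUTPUT.LOG
$ RMU/REPLICATE AFTER_JOURNAL REOPEN_OUTPUT standby_db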
Reference: See the Output qualifier discussion under the
Replicate_After_Journal_Commands Start Help topic.
39.1.2.2 – Format
RMU/Replicate After_Journal Reopen_Output database-rootfile
39.1.2.3 – Parameters
39.1.2.3.1 – database-rootfile
Specifies the name of the master or standby database root file.
39.1.2.4 – Usage Notes
o To write replication information to an output file, specify
the Log and Output qualifiers on the Replicate After_Journal
Start command.
If you enter the Replicate After_Journal Reopen_Output command
on a node where logging is not enabled, the Hot Standby
software ignores the command; it does not return an error
message if the Replicate After_Journal Reopen_Output command
does not find an output file.
o The Replicate After_Journal Reopen_Output command is
applicable only to the files that record activities for the
LCS process or the LRS process. To reopen or view the output
file that records information about the ALS process, use the
RMU Server After_Journal Reopen_Output command.
Reference: For more information about displaying ALS
information, refer to the Oracle RMU Reference Manual.
39.1.2.5 – Examples
Example 1
The following command example shows how to reopen an output file:
$ RMU /REPLICATE AFTER_JOURNAL REOPEN_OUTPUT mf_personnel.rdb
39.1.3 – Start
Initiates database replication operations.
39.1.3.1 – Description
To start database replication, you can enter the Replicate After_
Journal Start command on both the standby node and the master
node. Although you can initiate replication operations on either
node, Oracle Corporation recommends that you start replication on
the standby node before you start it on the master node. This is
because replication activity does not begin until:
o The standby database creates the network connection
o The master database attaches to the network connection
o The master and standby databases are synchronized with regard
to committed transactions
NOTE
If you used the Replicate After_Journal Configure command
to preconfigure the master and standby database attributes
(see the Replicate_After_Journal_Commands Configure Help
topic), you can invoke a single Replicate After_Journal
Start command to start replication operations on both the
master and standby databases.
39.1.3.2 – Format
RMU/Replicate After_Journal Start database-rootfile
Command Qualifiers                     x Defaults            x Usage
x x
/Alt_Remote_Node=nodename x None x M
/Checkpoint=checkpoint-interval x /Checkpoint=100 x B
/[No]Log x /Nolog x B
/Output=[log-filename|log-filename_PID]x None x B
/[No]Wait x /Wait x B
/Connect_Timeout=minutes x /Connect_Timeout=5 x M
/[No]Quiet_Point x /Noquiet_Point x M
/Standby_Root=standby-root-file-spec x None x M
/Synchronization=[Commit|Hot|Warm|Cold]x /Synchronization=Cold x M
/Buffers=rollforward-buffer-count x /Buffers=256 x S
/Gap_Timeout=minutes x /Gap_Timeout=5 x S
/Governor=[Enabled|Disabled] x /Governor=Enabled x S
/Master_Root=master-root-file-spec x None x S
/[No]Online x /Noonline x S
/Transport={DECnet|TCP/IP} x None x M
B=Both; M=Master; S=Standby
39.1.3.3 – Starting Replication
You can start database replication while the master database,
the standby database, or both databases are on line (open) and
accessible for active use. There is no need to close either
database to initiate database replication.
Applications and users can continue to access data and make
modifications to the master database whether or not replication
activity has started. Waiting for the replication activity to
begin does not inhibit access to, or interrupt modifications on,
the master database.
Starting replication is an online operation that can occur while
the standby database is open. However, database users must
not actively attach to the standby database prior to starting
database replication if you perform offline backup operations.
Replication operations cannot start when these conditions exist:
o Any read/write transactions, including prestarted read/write
transactions, are active on the standby database
o Any read-only (snapshot) transactions are running on the
standby database. The Log Rollforward Server waits until the
read-only transactions commit.
o The row cache feature is active on the standby database.
The row cache feature must be identically configured on the
master and standby databases in the event failover occurs,
but the row cache feature must not be activated on the standby
database until it becomes the master.
To open the hot standby database prior to starting
replication, use the NoRow_Cache qualifier on the RMU Open
command to disable the row cache feature.
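For example, the following command (the database name is
illustrative) opens the standby database with the row cache
feature disabled prior to starting replication:
$ RMU/OPEN/NOROW_CACHE standby_db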
o Any storage area is inconsistent (for example, if you restore
a storage area from a backup file but you have not rolled
forward after-image journals to be consistent with the rest of
the database)
NOTE
On OpenVMS systems, if you have preconfigured your Hot
Standby environment using the Replicate After_Journal
Configure command and you plan to start replication
operations remotely (for example, if you want to start
replication on the standby database from the master
database node), you must provide the SYSPRV privilege to
the DBMAIJSERVER or RDMAIJSERVER account.
39.1.3.4 – Qualifier Usage
Some of the qualifiers for the Replicate After_Journal Start
command are applicable only when you start replication operations
on the master database node, while others are applicable only to
the standby database node. The following table categorizes the
qualifiers according to usage:
Table 26 Qualifier Usage for the Replicate After_Journal Start
Command
Master Node          Master and           Standby Node
Qualifiers           Standby Nodes        Qualifiers
Alt_Remote_Node      Checkpoint           Buffers
Connect_Timeout      [No]Log              Gap_Timeout
[No]Quiet_Point      [No]Wait             Governor
Standby_Root         Output               Master_Root
Synchronization                           [No]Online
Transport
The Hot Standby software does not allow you to use qualifiers
that are not valid for the database where you enter the command.
Therefore, when you enter the Replicate After_Journal Start
command on the:
o Master node - you can specify any of the qualifiers listed in
the first and second columns of the above table
o Standby node - you can specify any of the qualifiers listed in
the last two columns of the above table
If you use an inapplicable qualifier (for example, if you use
the Connect_Timeout qualifier when you start replication on the
standby node), the Hot Standby software returns an error message.
NOTE
Whenever you specify a qualifier on the Replicate After_
Journal Start command line, you must also include the
Master_Root or Standby_Root qualifier, as appropriate,
on the command line. For example, to change the value of
the Synchronization qualifier on a master database node,
you must specify both the Synchronization and Standby_Root
qualifiers.
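For example, to change the synchronization mode on the master
database node, include both qualifiers on the command line (the
database, node, and directory names are illustrative):
$ RMU/REPLICATE AFTER_JOURNAL START mf_personnel -
_$ /STANDBY_ROOT=REMNOD::DISK1:[USER]standby_personnel -
_$ /SYNCHRONIZATION=HOT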
39.1.3.5 – Parameters
39.1.3.5.1 – database-rootfile
Indicates the root file specification for either the master or
standby database where you want to start database replication.
NOTE
Do not include a node name when you specify the database-
rootfile parameter. This parameter must specify a locally
accessible database root file; the parameter cannot include
a remote file specification.
The following list describes which database root file to specify
depending on where you enter the command:
o When you enter the Replicate After_Journal Start command on
the standby node, specify the database root file for the
standby database.
o When you enter the Replicate After_Journal Start command on
the master node, specify the database root file for the master
database.
To ensure that the standby database accesses the correct master
database as the source of replication operations, include the
Master_Root qualifier on the command line. Similarly, to ensure
that the master database accesses the correct standby database
as the target of replication operations, include the Standby_Root
qualifier on the command line.
Reference: See the Master_Root and Standby_Root qualifiers
discussed in this Help topic.
39.1.3.6 – Command Qualifiers
39.1.3.6.1 – Alt Remote Node
Identifies an available secondary link to the standby database.
Applicable to: Master database
Required or Optional: Optional
Default Value: None
The Alt_Remote_Node qualifier is used to provide the master
database with uninterrupted hot standby replication in case of
network failure where multiple network links are available. It
can only be used in conjunction with the Standby_Root qualifier,
which specifies the standby database node name.
The Alt_Remote_Node qualifier identifies the alternate remote
node name of the standby database. Following network failure,
the master database automatically attempts to reconnect to
the standby database using the alternate remote node name
information. If the Alt_Remote_Node qualifier is not specified,
the master database does not automatically attempt to reconnect
to the standby database using the original remote node name
specified using the Standby_Root qualifier.
The alternate node name can be the same as the node name
specified with the Standby_Root qualifier. The node name
specified by the Alt_Remote_Node qualifier must identify the
same standby database on the same remote node as originally
specified using the Standby_Root qualifier. The maximum length
of the alternate remote node name is 31 characters.
At run-time, the RDM$BIND_HOT_NETWORK_ALT_NODE logical name
can be defined in the LNM$SYSTEM_TABLE table to override any
alternate remote node name information specified at hot standby
startup. The logical name must be defined on all nodes where the
master database is open.
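For example, the following command (the alternate node name
ALTNOD is illustrative) defines the logical name in the system
logical name table on the current node:
$ DEFINE/SYSTEM RDM$BIND_HOT_NETWORK_ALT_NODE ALTNOD
Repeat the definition on each node where the master database is
open.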
The RMU Replicate After_Journal Configure/Reset command clears
previously configured alternate remote node name information.
39.1.3.6.2 – Buffers
Buffers=rollforward-buffer-count
Specifies the number of database buffers available to roll after-
image journals forward to the standby database.
Applicable to: Standby database
Required or Optional: Optional
                   Local Buffers  Global Buffers
Default Value:     4096           4096 or Global Buffer USER LIMIT,
                                  whichever is smaller
Minimum Value:     2              2
Maximum Value:     1,048,576      1,048,576 or Global Buffer USER
                                  LIMIT, whichever is smaller
During replication operations, the LRS process on the standby
node receives after-image journal records from the master
database and rolls them forward to the standby database.
You can use the optional Buffers qualifier to override the
default number of database buffers.
For optimal performance, you should allocate a sufficient number
of buffers so that the server process can roll the after-image
journal records forward with a minimum number of I/O operations.
To estimate an appropriate number of buffers, use the following
equation as a starting point:
(Number of Modified Buffers per Transaction * Number of Users) + 20%
For example, if the average number of modified buffers per
transaction is 10 and there are 100 users on the database, then
the server process needs 1000 buffers at one time. To ensure that
you have an adequate number of buffers, add another 20 percent
(200 buffers) for a total of 1200 buffers.
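Continuing the example above, the following command (the
database, node, and directory names are illustrative) starts
replication on the standby database with the estimated 1200
buffers:
$ RMU/REPLICATE AFTER_JOURNAL START standby_db -
_$ /MASTER_ROOT=MASNOD::DISK1:[USER]master_db -
_$ /BUFFERS=1200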
The number of buffers can impact the time it takes for the LRS
process to checkpoint. When a checkpoint occurs, the LRS must
write all modified buffers to disk. For example, if the LRS
is using 2000 buffers, and it takes one second for 2000 disk
writes to complete, the LRS will be stalled for one second while
those writes are being done. This could cause the Hot Standby
governor to increase the synchronization mode if there is a lot
of update activity occurring while the LRS is checkpointing. For
some applications this could impose a practical limitation in the
number of buffers allocated to the LRS.
NOTE
The LRS process on the standby database does not use buffer
values defined by the following:
o DBM$BIND_BUFFERS logical name
o RDB_BIND_BUFFERS configuration parameter
o RDM$BIND_BUFFERS logical name
When replication operations are active, you can use the RMU or
DBO Show Users command to see the current number of database
buffers allocated. If replication operations are not active or
if you want to see the buffer value that was set on a previous
Replicate After_Journal Start command (stored in the database
root file), you can also use the Header and Dump_Select_Type=Hot_
Standby qualifiers on the RMU or DBO Dump command.
39.1.3.6.3 – Checkpoint
Checkpoint=checkpoint-interval
Specifies, in terms of processed messages, how frequently the
Hot Standby servers update information in the database root file.
This qualifier can be set to different values on the master and
standby databases.
Applicable to: Master and standby database
Required or Optional: Optional
Default Value: 100 messages
Minimum Value: 1 message
Maximum Value: 1024 messages
By default, the Hot Standby servers automatically perform
checkpoint operations on both the master and standby databases
after every 100 messages are processed. Checkpoints are essential
to database availability because they:
o Enable the Hot Standby software to restart database
replication operations more quickly in the event of a failure
because frequent checkpoints limit the number of transactions
that must be redone if a process or system fails.
o Cause all modified database cache buffers on the standby
database to be flushed to the disk, making the buffers
available for access by other users (when online database
access is enabled)
o Improve the redo performance of the database recovery (DBR)
process
o Allow after-image backup operations to back up older after-
image journals on the master database
NOTE
In addition to performing checkpoint operations specified
by the Checkpoint qualifier, the replication servers on
the master database also checkpoint automatically after
the following events:
o After two minutes of inactivity
o After a switchover to a new after-image journal (when
you are using circular after-image journals)
o After an AIJ backup operation (when you are using
extensible after-image journals)
On the standby database, the LRS process checkpoints
after two minutes of inactivity if data has been
processed since the last checkpoint.
These automatic checkpoints advance the oldest active
checkpoint indicator to make older after-image journals
available for backup operations. You cannot change or
override these checkpoint intervals.
The default checkpoint interval usually is sufficient to
effectively maintain synchronization between the master and
standby database root files. However, you can override the
default checkpoint interval by specifying the Checkpoint
qualifier when you start replication on the master database,
the standby database, or both.
For example, if you specify the qualifier Checkpoint=300 on the
standby database, the LRS server process updates information
in the standby database root file after every 300 messages
are exchanged between the master and the standby database. The
following table describes how the frequency of the checkpoint
operation can affect database synchronization.
Table 27 Setting the Frequency of Checkpoint Intervals
If you specify . . . Then . . .
A small checkpoint The Hot Standby software synchronizes the
interval database root files more often, but uses
less time to restart replication because
fewer transactions need to be redone.
A large checkpoint The Hot Standby software synchronizes the
interval database root files less frequently, but
requires more time to restart replication
because more transactions must be redone.
In addition, the value you set for the checkpoint interval:
o Controls replication restart in the event of a failure on the
master database. As a side effect, the ABS process cannot back
up after-image journals that are needed to restart replication
operations
o Affects how the after-image journals on the master database
become available for backup
Specifying a large value for the checkpoint interval can
cause after-image journal backup operations to stall until
the appropriate after-image journal file becomes available for
a backup operation. This is because the after-image journal
backup operation cannot back up any after-image journal file
that is required for process recovery or replication restart.
o Affects the reinitialization of after-image journals on the
standby database
o Affects the manner in which the LRS process on the standby
database:
- Releases page locks
- Responds to page lock conflict messages from another
attached database process
Oracle Corporation recommends that you set a reasonably small
checkpoint interval for the standby database. Specifying a
checkpoint interval that is too large can prevent the LRS
process from responding to requests for pages, and it is
possible for other processes to become stalled.
For Oracle Rdb databases, you can monitor the effectiveness of
the current setting of the Checkpoint qualifier by using the RMU
Show Statistics command and examining the Checkpoint Information
display.
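To apply the 300-message interval described above when you start
replication on the standby database, a command might look like
the following (the database name is a placeholder):
$ RMU/REPLICATE AFTER_JOURNAL START standby_personnel -
/CHECKPOINT=300 -
/LOG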
39.1.3.6.4 – Connect Timeout
Connect_Timeout=minutes
Specifies the maximum number of minutes that the LCS process on
the master database waits for a network connection to the LRS
process on the standby database.
Applicable to: Master database
Required or Optional: Optional
Default Value: 5 minutes
Minimum Value: 1 minute
Maximum Value: 4320 minutes (3 days)
When you start replication on the master database (before
starting it on the standby database):
1. The Hot Standby software invokes the log catch-up server (LCS)
process on the master database.
2. The LCS process invokes its corresponding network AIJSERVER
process on the standby node.
3. The AIJSERVER process attempts to create a network connection
to the LRS process on the standby node.
By default, the LCS process allows 5 minutes for the AIJSERVER
to connect to the LRS process. You can override the default
by specifying the Connect_Timeout qualifier when you start
replication on the master database. If you specify the
Connect_Timeout qualifier, you must supply a time value, in
minutes.
The Connect_Timeout qualifier is useful when you start
replication operations on the master database before you
start replication on the standby database. This is because the
Connect_Timeout qualifier allows sufficient time for the network
connection to be made before the LCS process begins sending
after-image journal records across the network.
NOTE
While the LCS process on the master database waits for the
replication activity to begin on the standby database, users
and applications can continue to access and modify data in
the master database.
Also, because the Connect_Timeout qualifier waits only for the
network connection, you might consider using the Wait qualifier
in addition to the Connect_Timeout qualifier. The Wait qualifier
causes the Replicate After_Journal Start command to wait for the
server processes to be activated. See the Wait qualifier in this
Help topic for additional information.
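As an illustration, with placeholder node and file names, the
following command allows the AIJSERVER 30 minutes to connect to
the LRS process:
$ RMU/REPLICATE AFTER_JOURNAL START mf_personnel -
/STANDBY_ROOT=REMNOD::DISK1:[USER]standby_personnel -
/CONNECT_TIMEOUT=30 -
/WAIT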
39.1.3.6.5 – Gap Timeout
Gap_Timeout=minutes
Specifies the maximum number of minutes that the standby database
(LRS process) should wait for a gap in the replication data
sequence to be resolved.
Applicable to: Standby database
Required or Optional: Optional
Default Value: 5 minutes
Minimum Value: 1 minute
Maximum Value: 4320 minutes (3 days)
If a gap in the replication data sequence is not resolved in the
period of time allowed, the LRS process:
1. Assumes that the node sending the message has failed
2. Terminates replication operations immediately
You must restart replication operations manually to resolve the
situation.
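A sketch, with a placeholder database name, that allows 10
minutes for a gap in the replication data sequence to be
resolved:
$ RMU/REPLICATE AFTER_JOURNAL START standby_personnel -
/GAP_TIMEOUT=10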
39.1.3.6.6 – Governor
Governor=Enabled
Governor=Disabled
Enables or disables the replication governor.
Applicable to: Standby database
Required or Optional: Optional
Default Value: Governor=Enabled
The purpose of the replication governor is to coordinate database
replication operations automatically between the master and the
standby databases. With the replication governor enabled, you can
effectively ensure that:
o The master and standby databases do not get too far out of
synchronization with respect to each other
o The performance of the master database does not deviate
greatly from that of the standby database
o The peak-time database requirements are handled automatically
and dynamically by the Hot Standby software
The replication governor allows the ALS process on the master
database and the LRS process on the standby database to
automatically choose the synchronization mode that provides
the best performance and ensures database replication
synchronization.
To use the replication governor most effectively, ensure
the Governor qualifier is Enabled and include the
Synchronization=Cold qualifier when you start replication
operations on the standby database. (Also, see the
Synchronization qualifier discussed later in this Help topic.)
Oracle Corporation recommends that you set the Synchronization
qualifier to Cold mode. This setting is most effective because
of the way the LRS process monitors its replication workload from
the master database, as described in the following table:
If . . . Then . . .
The replication The LRS process automatically upgrades
workload increases at to a stronger synchronization mode. For
a rate that prevents example, if the Synchronization qualifier
the standby database was originally set to Cold mode, the LRS
from keeping up with would change the synchronization mode to
the master database Warm (or higher, as required).
The replication The LRS process automatically downgrades
workload shrinks or weakens the synchronization mode.
However, the synchronization mode is
never weaker than the mode (Commit, Hot,
Warm, Cold) that you specify with the
Synchronization qualifier.
Because the synchronization mode changes dynamically, the LRS
process transmits the current synchronization mode to the ALS
process (on the master database) at every checkpoint interval
(see the Checkpoint qualifier earlier in this Help topic). For
example, if the replication governor upgrades the synchronization
mode from Cold to Warm, the LRS process transmits the information
to the ALS process. Then, the ALS process uses the stronger mode
on all subsequent messages to the standby database. (Note that
the LRS process maintains a different synchronization mode for
each master database node.)
Use the RMU Show Statistics command on the master database to
monitor the dynamically changing synchronization mode required by
the actual work load, and compare that to the mode you specified
with the Synchronization qualifier.
Recommendation: Oracle Corporation recommends that you do not use
the Governor=Disabled setting until the replication performance
is well understood and constant. Severe performance deviations on
the master database could stall or stop the database replication
operations.
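As a sketch (the database name is a placeholder), the following
command starts replication on the standby database with the
governor explicitly enabled:
$ RMU/REPLICATE AFTER_JOURNAL START standby_personnel -
/GOVERNOR=ENABLED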
39.1.3.6.7 – Log
Log
Nolog
Indicates whether or not to log the status of, and information
about, activities when you start replication operations.
Applicable to: Master and standby database
Required or Optional: Optional
Default Value: Nolog
If you specify the Log qualifier, output showing the status
of the replication startup is logged to SYS$OUTPUT on OpenVMS
systems.
Oracle Corporation recommends that you specify the Log qualifier.
Also, you can record status information to an output file by
including the Output qualifier on the Replicate After_Journal
Start command.
Reference: See the Output qualifier discussed in this Help topic.
39.1.3.6.8 – Master Root
Master_Root=master-rootfile
Identifies the name of the master database root file from
which the replication servers on the standby database receive
replication data.
Applicable to: Standby database
Required or Required the first time you enter the
Optional: Replicate After_Journal Start command and
any time you specify other Replication
Startup qualifiers. Optional on all subsequent
invocations.
Default Value: None.
You must include the Master_Root qualifier the first time you
enter the Replicate After_Journal Start command (unless you have
preconfigured the Master_Root qualifier using the Replication
After_Journal Configure command). This ensures that the standby
database uses the master database you specify as the source of
the replication operations. If you omit the Master_Root qualifier
on subsequent Replicate After_Journal Start commands, the Hot
Standby software retrieves the master database name from the
header information in the database root file.
Whenever you specify the Master_Root qualifier, you must do the
following to ensure the command executes successfully:
o Specify the name of the master database root file.
Do not specify the name of the standby database on the Master_
Root qualifier. Any attempt to use a restored database as a
master database causes replication startup operations to fail.
o Include a node name and directory path for remote network
communications.
You can define a logical name to identify the master node.
o Be able to access the master database.
When the master database node is configured in a VMScluster
system, the node name you specify with the Master_Root
qualifier can be any participating node from which the master
database can be accessed. Cluster aliases are acceptable when
you use the Master_Root qualifier.
The master and standby databases communicate using network
communications (for remote database access) or interprocess
communications (for local database access) according to how you
specify the master database name. The following table describes
how the Hot Standby software chooses the method of communication:
If . . . Then . . .
You include a node The Hot Standby software uses remote
name when you specify network communications to receive the
the master database after-image journal log changes, unless
root file the specified node is the current node
You do not include The Hot Standby software uses local
a node name when you interprocess communications to receive
specify the master the after-image journal log changes
database root file
The Hot Standby software compares and verifies the master
database (that you specify with the Master_Root qualifier)
against the standby database (that you specify with the Standby_
Root qualifier when you start replication operations on the
master database). This verification ensures that both databases
are identical transactionally.
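An illustrative command, with placeholder node and file names,
that identifies the master database when starting replication on
the standby database:
$ RMU/REPLICATE AFTER_JOURNAL START standby_personnel -
/MASTER_ROOT=MASNOD::DISK1:[USER]mf_personnel -
/LOG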
39.1.3.6.9 – Online
Online
Noonline
Allows or disallows users and applications to be on line
(actively attached) to the standby database.
Applicable to: Standby database
Required or Optional: Optional
Default Value: Noonline
Online database access means that database users and applications
can be actively attached (and perform read-only transactions)
to the standby database before, during, and after replication
operations.
The default setting (Noonline) disallows applications and
users from attaching to the standby database during replication
operations. However, if the standby database is open on another
node (thus, an ALS process is active on that node), the LRS
process cannot start replication on the standby database and
the error message STBYDBINUSE is returned.
NOTE
If record caching is enabled on the standby database, the
Hot Standby software assumes the Online setting. Specifying
the Noonline qualifier on the Replicate After_Journal Start
command has no effect. Because record caching requires
the record cache server to be an online server, you cannot
override the Online setting.
Because the Replicate After_Journal Start command fails if you
enter it on a standby node where read/write transactions are in
progress (including prestarted read/write transactions), Oracle
Corporation recommends that you choose the Noonline (default)
setting.
The Online and Noonline qualifiers do not affect access to the
master database.
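For illustration (the database name is a placeholder), the
following command permits read-only users to remain attached to
the standby database during replication:
$ RMU/REPLICATE AFTER_JOURNAL START standby_personnel -
/ONLINE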
39.1.3.6.10 – Output
Output=log-filename.out
Output=log-filename_pid.out
Identifies the name of the file where you want the Hot Standby
software to create an operational output file (log) for the LCS
or LRS process:
o Specify the Output qualifier on the master database to create
an output file and collect information about the LCS process.
o Specify the Output qualifier on the standby database to create
an output file and collect information about the LRS process.
o Optionally, include "_PID" or "_pid" when you specify the
output file name. This causes the software to create a unique
file name because it includes the process identification (PID)
number.
Applicable to: Master and standby databases
Required or Optional: Optional
Default Value: None. If you do not specify the
Output qualifier, the Hot Standby
software does not record LCS or LRS
process activities to an output file.
The Output qualifier overrides definitions you make with the
BIND_LCS_OUTPUT_FILE or BIND_LRS_OUTPUT_FILE logical name. If you
enable replication operations for multiple databases, there will
be multiple operational output files.
The purpose of the operational log is to record the transmittal
and receipt of network messages, and to provide administrative
and diagnostic information.
Note the following when you specify the Output qualifier:
o You must specify an output file name. When you include "_PID"
in the output file specification, the command creates a unique
file name that includes the process identification (PID); for
example, DISK1:[USER]LRS_25C02914.OUT.
o Do not include a node name designation when you specify the
output file name.
o The default location is the database root file directory. You
can optionally include a directory name when you specify a
file name.
o The directory containing the output files must be readable and
writable by all processes.
o The default file type is .out.
o You can display the name of output files that you specify with
the Output qualifier using the RMU Show Users command (shown
in Example 7-1). Output file names are not displayed in Show
Users output for files specified with a logical name.
NOTE
All bugcheck dumps are written to a corresponding
bugcheck dump file. Bugcheck dumps are not written to
the Output operational log.
Although it is optional, Oracle Corporation recommends that
you use the Output qualifier, or a logical name, to collect
information about the LCS and LRS processes during replication.
You can also collect information about the ABS, ALS, DBR, and
AIJSERVER processes by defining a logical name. The following
table lists the logical names you can define to collect server
process information to an output file:
Logical Name Specifies an output file for the . . .
BIND_ABS_LOG_FILE ABS process
BIND_ALS_OUTPUT_FILE ALS process (1)
BIND_DBR_LOG_FILE DBR process
BIND_HOT_OUTPUT_FILE AIJSERVER process
BIND_LCS_OUTPUT_FILE LCS process
BIND_LRS_OUTPUT_FILE LRS process
Footnote (1):
You can also collect information about the ALS process to an
output file by including the Output qualifier on the RMU Server
After_Journal command. For more information about displaying ALS
information, refer to the Oracle RMU Reference Manual.
Defining a logical name is also useful if you omitted the Output
qualifier when you entered the Replicate After_Journal Start
command to start replication. You can define a logical name to
specify an output file while replication operations are active.
This can be done by defining the appropriate logical name, and
then invoking the Replicate After_Journal Reopen_Output command.
This allows you to create an output file so the server can start
writing to the file without you having to stop and start the
server.
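Assuming the logical names listed in the preceding table, and
using placeholder file and database names, you might open an LRS
output file while replication is active:
$ DEFINE BIND_LRS_OUTPUT_FILE DISK1:[USER]LRS_PID.OUT
$ RMU/REPLICATE AFTER_JOURNAL REOPEN_OUTPUT standby_personnel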
39.1.3.6.11 – Quiet Point
Quiet_Point
Noquiet_Point
Determines whether or not the log catch-up server (LCS) process
acquires a transactional quiet point during the database
synchronization phase of the replication restart procedure.
Applicable to: Master database
Required or Optional: Optional
Default Value: Noquiet_Point
Oracle Corporation recommends using the Quiet_Point qualifier
because it makes it easier to restart replication operations.
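An illustrative command (node and file names are placeholders)
that requests a transactional quiet point during the
synchronization phase:
$ RMU/REPLICATE AFTER_JOURNAL START mf_personnel -
/QUIET_POINT -
/STANDBY_ROOT=REMNOD::DISK1:[USER]standby_personnel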
39.1.3.6.12 – Standby Root
Standby_Root=standby-rootfile
Identifies the name of the standby database root file to which
the replication servers on the master database send replication
data.
Applicable to: Master database
Required or Required the first time you enter the
Optional: Replicate After_Journal Start command and
any time you specify other Replication Startup
qualifiers. Optional on all other invocations.
Default Value: None
You must include the Standby_Root qualifier the first time you
enter the Replicate After_Journal Start command (unless you have
preconfigured the Standby_Root qualifier using the Replication
After_Journal Configure command). This ensures that the master
database communicates with the standby database you specify
as the recipient of replication operations. If you omit the
Standby_Root qualifier on subsequent Replicate After_Journal
Start commands, the Hot Standby software retrieves the standby
database name from the header information in the database root
file.
Whenever you specify the Standby_Root qualifier, you must do the
following to ensure the command executes successfully:
o Specify the name of the standby database root file.
o Include a node name and directory path for remote network
communications. (You can define a logical name to identify the
master node.)
NOTE
When the standby database is configured in a VMScluster
system, the node name you specify with the Standby_Root
qualifier cannot be a cluster alias.
o Be able to access the standby database.
o Ensure that the standby database is opened for access prior to
starting replication operations on the master database.
You must open the standby database manually unless you
preconfigured the standby database. If you preconfigured the
database, you can start replication on both the master and
standby databases by entering a single Replicate After_Journal
Start command on the master database. The master database
automatically opens the standby database, if necessary.
The master and standby databases communicate using network
communications (for remote database access) or interprocess
communications (for local database access) according to how you
specify the database name. The following table describes how the
Hot Standby software chooses the method of communication:
If . . . Then . . .
You specify a node The Hot Standby software uses remote
name (for access to a network communications to ship the after-
standby database on a image journal log changes, unless the
remote node) specified node is the current node
You do not specify a The Hot Standby software uses the
node name following communications to ship the
after-image journal log changes:
o Local interprocess communications on
the local node
o Remote network communications on all
other nodes and across the cluster
The Hot Standby software compares and verifies the master
database (that you specify with the Master_Root qualifier)
against the standby database (that you specify with the Standby_
Root qualifier). The purpose of this verification is to ensure
that both databases are identical transactionally.
If replication operations are not started on the standby database
when you invoke the Replicate After_Journal Start command on
the master database, the Hot Standby software attempts to start
replication on the standby database using the default replication
attributes configured in the database root file before starting
replication on the master database.
39.1.3.6.13 – Synchronization
Synchronization=keyword
Specifies the degree to which you want to synchronize committed
transactions on the standby database with committed transactions
on the master database.
Applicable to: Master database
Required or Optional: Optional
Default Value: Synchronization=Cold
When you enable replication operations, server processes on the
master database write transactions to the after-image journal
for the master database and send them across the network to
the after-image journal for the standby database. The standby
database acknowledges the receipt of the transactional message
and handles after-image journaling depending on the mode you have
set with the Synchronization qualifier.
Table 28 Keywords for the Synchronization Qualifier
         Equivalence        Performance
         of Committed       Impact on         Standby Database
Keyword  Transactions       Master Database   Recoverability
Commit When the standby Highest The standby database is
database transactionally identical
receives the and recoverable with respect
AIJ information to the master database.
from the master
database, the
servers on the
standby database:
1. Write it to
the after-
image journal
on the standby
system
2. Apply the AIJ
to the standby
database
3. Send a message
back to
the master
database
acknowledging
the successful
commit of the
transaction
Hot When the standby High The standby database is
database extremely close to being
receives the transactionally identical to
AIJ information the master database.
from the master
database, the After-image journal records
servers on the in transit are received
standby database: and committed. Some restart
processing may be required
1. Write it to to synchronize the database.
the AIJ on the
standby system
2. Send a message
back to
the master
database
before
applying the
transaction
to the standby
database
Warm When the standby Medium The standby database is
database transactionally close to
receives the the master database, but the
AIJ information databases are not identical.
from the master
database, the There may be transactions
servers on the rolled back on the standby
standby database: database that have been
committed on the master
o Send a message database.
back to
the master
database
before
applying the
transaction to
either the AIJ
or the standby
database
o Might not
commit after-
image journal
records to the
database
Cold       When the standby  Low      The standby database is
(default)  database                   not immediately recoverable
           receives the               transactionally with respect
AIJ information to the master database.
from the master
database: After-image journal records
in transit could be lost.
o The servers
never return
a message
acknowledging
the receipt
of the AIJ
information
o In failover
situations,
it is
possible that
transactions
rolled back
on the standby
database were
committed on
the master
database
For each level of database synchronization, you trade off how
closely the standby and master databases match each other with
regard to committed transactions against performance.
For example, the Synchronization=Cold level provides the fastest
performance for the master database, but the lowest level of
master and standby database synchronization. However, in some
business environments, this trade-off might be acceptable. In
such an environment, the speed of master database performance
outweighs the risk of losing recent transactions in the event of
failover; system throughput has greater financial importance and
impact than the value of individual AIJ records (transactions).
Recommendation: For high-performance applications, Oracle
Corporation recommends that you do not specify both the
Synchronization=Cold and the Governor=Disabled qualifiers when
you start replication on the standby system. This is because
the master database can possibly outperform the standby database
during updates. The replication governor should be enabled to
prevent the master and standby databases from getting too far out
of synchronization.
NOTE
You can define logical names to specify the synchronization
mode, or to enable or disable the replication governor.
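As a sketch, with placeholder node and file names, the following
command starts replication in Hot synchronization mode:
$ RMU/REPLICATE AFTER_JOURNAL START mf_personnel -
/STANDBY_ROOT=REMNOD::DISK1:[USER]standby_personnel -
/SYNCHRONIZATION=HOT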
39.1.3.6.14 – Transport
Transport=DECnet
Transport=TCPIP
Allows you to specify the network transport. The specified
transport, DECnet or TCP/IP, is saved in the database.
Applicable to: Master database
Required or Optional: Optional
The following example shows the use of this feature:
$ RMU/REPLICATE AFTER CONFIGURE /TRANSPORT=TCPIP -
_$ /STANDBY=REMNOD::DEV:[DIR]STANDBY_DB M_TESTDB
39.1.3.6.15 – Wait
Wait
Nowait
Indicates whether or not the Replicate command should wait for
activation of the replication server processes before returning
control to the user.
Applicable to: Master and standby databases
Required or Optional: Optional
Default Value: Wait
The Wait qualifier has the following effects:
o On the master database node, replication waits for activation
of the server processes on the master database node
o On the standby database node, replication waits for activation
of the server processes on the standby database node
The following list describes the [No]Wait qualifier:
o Wait (default)
The Replicate command does not return to the user until the
respective server process has successfully initiated the
database replication operation. Replication waits indefinitely
for the activation of the server process, even though
activation might take substantial time. However, the server
process might not actually start the replication operation.
o Nowait
Control should be returned to the user as soon as the LCS or
LRS server process has been invoked by the database monitor.
You can use the Connect_Timeout qualifier with the Wait qualifier
to limit the amount of time replication waits for the server
process to become active.
NOTE
You must wait for commands that include the Nowait qualifier
to complete before you enter another command. This is
because if the first command fails before the subsequent
command executes, the second command might receive the
HOTCMDPEND error. For example:
$ RMU/REPLICATE AFTER_JOURNAL START/NOWAIT mf_personnel
$ RMU/REPLICATE AFTER_JOURNAL STOP/WAIT mf_personnel
If the first command to start replication fails, the startup
error might be returned to the waiting Replicate After_
Journal Stop command.
39.1.3.7 – Examples
Example 1
The following example shows Replicate After_Journal Start
commands that initiate database replication. The default
qualifier values are read from the database root file header
information.
$ RMU/REPLICATE AFTER_JOURNAL START mf_personnel -
/STANDBY_ROOT=REMNOD::DISK1:[USER]standby_personnel -
/SYNCHRONIZATION=COLD -
/QUIET -
/CHECKPOINT=10 -
/CONNECT_TIMEOUT=1 -
/LOG -
/WAIT -
/OUT=REMNOD::DISK1:[USER]lcs_pid.out
39.1.4 – Stop
Terminates database replication operations.
39.1.4.1 – Description
You can enter the command on either the master node or the
standby node.
When you enter the command on the master database, replication
is terminated immediately. Active transactions are handled
differently depending on whether you specify the Abort qualifier
or take the default (Noabort).
When you enter the command on the standby database, replication
is terminated after any pending after-image journal records are
completely rolled forward. Any active transactions on the standby
database are rolled back.
You can stop database replication while the master database,
the standby database, or both databases are on line (open) and
accessible for active use. There is no need to close either
database to stop database replication.
If the database is not manually opened on the node where you
entered the Replicate After_Journal Start command, you must enter
the Replicate After_Journal Stop command on the node where the
corresponding replication server is running, or first open the
database manually.
When replication operations stop, the Hot Standby software
automatically restarts the AIJ log server (ALS) processes on
the standby node.
39.1.4.2 – Format
RMU/Replicate After_Journal Stop database-rootfile

Command Qualifiers                      Defaults

/[No]Abort[={Forcex | Delprc}]          /Noabort
/[No]Log                                /Nolog
/[No]Wait                               /Wait
39.1.4.3 – Parameters
39.1.4.3.1 – database-rootfile
Specifies the database root file for which you want to stop
replication operations.
39.1.4.4 – Command Qualifiers
39.1.4.4.1 – Abort
Abort=Forcex
Abort=Delprc
Noabort
Indicates whether pending after-image journal information
is rolled forward on the standby database before database
replication operations are shut down. The following list
describes the qualifiers:
o Abort=Delprc
The Abort=Delprc qualifier closes the database, and recovery
unit journals (RUJ) are not recovered. The processes and any
subprocesses of all Oracle Rdb database users are deleted.
o Abort=Forcex
The Abort=Forcex option closes the database, and recovery unit
journals (RUJ) are recovered and removed.
o Abort
When the Abort qualifier is specified without a keyword,
database replication shuts down as quickly as possible.
Any after-image journal information waiting to be rolled
forward on the standby database is discarded, and all active
transactions on the standby database are rolled back.
o Noabort (default)
Database replication shuts down after all after-image journal
information waiting to be rolled forward on the standby
database is completed. Note that this type of shutdown could
still result in active transactions being rolled back on the
standby database.
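For example, to shut down replication as quickly as possible and delete the processes of all database users on the node, you could combine the Abort qualifier with the Delprc keyword (a sketch using the mf_personnel database from the other examples in this topic):

```
$ RMU/REPLICATE AFTER_JOURNAL STOP/ABORT=DELPRC mf_personnel
```

With Abort=Delprc, recovery unit journals (RUJ) are not recovered, so use this option only when an immediate shutdown is more important than an orderly one.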
39.1.4.4.2 – Log
Log
Nolog
Enables or disables logging the results of the Replicate After_
Journal Stop operation.
Applicable to: Master and standby databases
Required or Optional: Optional
Default Value: Nolog
If you specify the Log qualifier, the log file output is written
to SYS$OUTPUT on OpenVMS.
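For example, to record the results of the stop operation, add the Log qualifier (again using the mf_personnel database from the other examples in this topic):

```
$ RMU/REPLICATE AFTER_JOURNAL STOP/LOG mf_personnel
```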
39.1.4.4.3 – Wait
Wait
Nowait
Indicates whether or not the Replicate command should wait for
activation of the replication server processes before returning
control to the user.
Applicable to: Master and standby databases
Required or Optional: Optional
Default Value: Wait
The Wait qualifier has the following effects:
o On the master database node, replication waits for
  deactivation of the server processes on the master database
  node.
o On the standby database node, replication waits for
  deactivation of the server processes on the standby database
  node.
The following list describes the Wait and Nowait qualifiers:
o Wait (default)
The Replicate command does not return control to the user
until the respective server process has stopped the database
replication operation. Replication waits indefinitely for the
server process to terminate, even though termination might
take substantial time. Note, however, that the server process
might not actually stop the replication operation.
o Nowait
Control should be returned to the user as soon as the LCS or
LRS server process has been stopped by the database monitor.
NOTE
You must wait for a command that includes the Nowait
qualifier to complete before you enter another command.
Otherwise, if the first command fails before the subsequent
command executes, the second command might receive the
HOTCMDPEND error. For example:
$ RMU/REPLICATE AFTER_JOURNAL START/NOWAIT mf_personnel
$ RMU/REPLICATE AFTER_JOURNAL STOP/WAIT mf_personnel
If the first command to start replication fails, the startup
error might be returned to the waiting Replicate After_
Journal Stop command.
39.2 – Privileges
On OpenVMS systems, you must have the RMU$OPEN privilege in the
root file ACL for the database or the OpenVMS WORLD privilege to
use the Replicate commands.
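For example, a database administrator might grant the RMU$OPEN privilege to a user with the RMU Set Privilege command. This is a sketch; the identifier [DBGRP,JONES] is hypothetical and should be replaced with a real rights identifier or UIC on your system:

```
$ RMU/SET PRIVILEGE/ACL=(IDENTIFIER=[DBGRP,JONES],ACCESS=RMU$OPEN) mf_personnel
```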
39.3 – Server Names and Acronyms
The discussions in the Replicate_Commands Help topic sometimes use
acronyms to refer to the Hot Standby servers. The following table
shows the server names and their acronyms, and the database where
each server runs:
Server Acronym Database
AIJ log server ALS Master
Log catch-up server LCS Master
Log rollforward server LRS Standby
AIJSERVER - Standby
39.4 – Default Command Qualifiers
The Hot Standby software supplies default values for most of the
master and standby database attributes and maintains them in the
database root file. Optionally, you can change one or more of the
database attributes using qualifiers on the Replicate commands.
When you specify a database attribute, the Hot Standby software
updates the database root file so that the database root file
always contains the most up-to-date qualifier values for the
database.
You can specify database attributes using qualifiers on either of
the following Replicate commands:
o Replicate After_Journal Configure-Preconfigures the master
and standby database attributes without starting replication
operations. This optional command allows you to preset
database attributes that do not take effect until the next
time you start replication operations using the Replicate
After_Journal Start command.
o Replicate After_Journal Start-Configures database attributes
at the same time you start replication for a database. If you
preconfigured your database previously using the Replicate
After_Journal Configure command, you can override the default
settings by including one or more qualifiers on the Replicate
After_Journal Start command.
Whenever you enter the Replicate After_Journal Start command,
the Hot Standby software initiates database replication using
the qualifier values specified on the Replicate After_Journal
Start command line. If you do not specify qualifier values on the
command line, the Hot Standby software uses values stored in the
database root file or the default value for the qualifier.
Therefore, you do not need to respecify the qualifier values
except to change a qualifier setting. For example, the following
command shows the Replicate After_Journal Start command the first
time you enter it on the master database node:
$ RMU/REPLICATE AFTER_JOURNAL START mf_personnel -
/STANDBY_ROOT=REMNOD::DISK1:[USER]standby_personnel -
/SYNCHRONIZATION=HOT
The Hot Standby software saves the qualifier settings in the
database root file (in this case, the database attributes are
saved in the master database root file). The next time you start
replication operations, you could enter the command line without
the qualifier.
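For example, because the attributes are now stored in the master database root file, a subsequent restart can omit the qualifiers entirely (a sketch based on the command shown above):

```
$ RMU/REPLICATE AFTER_JOURNAL START mf_personnel
```

The saved standby root file and synchronization mode are reused from the database root file.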
39.5 – Examples
The following are examples of the header information from Oracle
Rdb master and standby database root files:
Example 1 Header Information from the Master Database Root File
Hot Standby...
- Database is currently being replicated as "Master"
Standby database is "_DISK1:[USER]STANDBY_PERSONNEL.RDB;1"
Remote node name is "REMNOD"
Replication commenced on 5-AUG-1996 08:13:30.57
Synchronization obtained via quiet-point
Server checkpoint interval is 100 messages
Server connection-timeout interval is 5 minutes
Replication synchronization is "hot"
Example 2 Header Information from the Standby Database Root File
Hot Standby...
- Database is currently being replicated as "Standby"
Master database is "_DISK1:[USER]MF_PERSONNEL.RDB;1"
Remote node name is "ORANOD"
Replication commenced on 5-AUG-1996 08:13:23.91
Database replication is "online"
Server checkpoint interval is 100 messages
Server gap-timeout interval is 5 minutes
Server buffer count is 256
Server 2PC transaction resolution is "commit"
40 – RMU_ERRORS
40.1 – ABMBITERR
inconsistency between spam page <num> and bit <num> in area bitmap in larea <num> page <num>
Explanation: An ABM (Area Bit Map) page may have a bit set for a SPAM (Space Management) page that does not manage the logical area described by that ABM, or the ABM page should have a bit set but does not.
User Action: Use RMU REPAIR to rebuild the ABM pages.
40.2 – ABMBITSET
Bit <num> set in area bitmap for nonexistent SPAM page. Corrupt bit is for logical area <num> on page <num>.
Explanation: An ABM page can contain more bits than there are spam pages in an area. This message indicates that an ABM bit was set for a non-existent SPAM page. The logical area and page number in the message identify the page that has the bad bit set.
User Action: Use RMU REPAIR to rebuild the ABM page.
40.3 – ABMFILMBZ
area bit map page <num> for logical area <num> contains a filler field that should be zero; expected: 0, found: <num>
Explanation: The area bit map pages contain some filler fields that are reserved for future use. These should contain all zeros.
User Action: Correct the error with the RMU Restore command and verify the database again.
40.4 – ABMFRELEN
area bit map page <num> for logical area <num> has an incorrect free space length; expected: <num>, found: <num>
Explanation: The free space count in the area bit map page contains a bad value.
User Action: Correct the error with the RMU Restore command and verify the database again.
40.5 – ABMPCHAIN
larea <str> ABM page <num> points to page <num>
Explanation: An error occurred during verification of the ABM chain for the logical area. This message gives one link in the ABM page chain before the error occurred. You may use the series of these error messages to reconstruct the entire ABM chain up to the point where the error occurred.
User Action: Verify the page manually, and check if the database needs to be restored.
40.6 – ABMVFYPRU
ABM page chain verification pruned
Explanation: An error occurred during verification of an ABM page, so verification will not proceed any further for this ABM chain.
User Action: Verify the ABM page manually, and check if the database needs a restore.
40.7 – ABORT
operator requested abort on fatal error
Explanation: Operation terminated by user's request.
User Action: None.
40.8 – ABORTCNV
fatal error encountered; aborting conversion
Explanation: Errors were encountered that prevent the RMU conversion process from proceeding.
User Action: Before aborting, RMU Convert should have displayed one or more error messages. Correct the errors as indicated by each error message's user action and rerun the RMU Convert command.
40.9 – ABORTVER
fatal error encountered; aborting verification
Explanation: The database is corrupted so badly that no further verification is possible.
User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.10 – ABSACTIVE
AIJ backup active or backup operations suspended on this node
Explanation: After-image journal backup operations have already been suspended from this node.
User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.11 – ABSNSUSPENDED
AIJ backup operations not suspended on this node
Explanation: After-image journal backup operations have not been suspended from this node.
User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.12 – ABSSUSPENDED
AIJ backup operations already suspended on this node
Explanation: After-image journal backup operations have already been suspended from this node.
User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.13 – ACCVIO
access violation on read/write of user address
Explanation: A readable parameter is not readable by the DBCS or a writeable parameter is not writeable by the DBCS.
User Action: Pass good parameters to the DBCS.
40.14 – ACKTIMEOUT
Network error: Timeout waiting for Executor acknowledgement.
Explanation: The client sent a request to the executor but did not receive an acknowledgement within a specific interval.
User Action: Determine that SQL/Services is running on the server system and then retry the operation. If the problem persists, contact your Oracle support representative for assistance.
40.15 – ACLBUFSML
There is no space for more ACE entries.
Explanation: The space allocated in the root for RMU's access control list is full and cannot be extended, either because the database is a single-file database or because the database is open for other users.
User Action: If the database is a multi-file database, execute the command when the database is not open. If the database is a single-file database, then the only possible changes are to delete existing access control entries and then add new ones.
40.16 – ACLEMPTY
Access control list is empty
Explanation: The specified database root file has no ACL.
User Action: Use the RMU Set Privilege command to create a root file ACL for the database.
40.17 – ACLFETCHFAIL
Internal error - ACL fetch fail
Explanation: The RMU Extract command was unable to fetch the access control lists.
User Action: Contact your Oracle support representative for assistance.
40.18 – ACTMISMATCH
journal is for database activation <time>, not <time>
Explanation: The activation time and date stamp in the root does not match the activation time and date stamp in the journal file. This journal cannot be applied to this database.
User Action: Use the correct journal file or backup file.
40.19 – AIJACTIVE
<num> active transaction(s) not yet committed or aborted
Explanation: Upon completion of the roll-forward operations for the current AIJ file, one or more transactions remain active. That is, the commit or roll-back information resides in the next AIJ file to be processed. It is also possible that one or more of these active transactions are prepared transactions, which may be committed or aborted by the recovery operation using DECdtm information; in this case, a separate message indicating the number of prepared transactions will be displayed.
User Action: No user action is required. This message is informational only.
40.20 – AIJALLDONE
after-image journal roll-forward operations completed
Explanation: The after-image journal roll-forward operation has completed.
User Action: No user action is required. This message is informational only.
40.21 – AIJAUTOREC
starting automatic after-image journal recovery
Explanation: The /AUTOMATIC command qualifier was specified for the after-image roll-forward operation, and the roll-forward operation has detected that automatic journal recovery is possible. This message indicates that automatic recovery has begun.
User Action: No user action is required.
40.22 – AIJBADAREA
inconsistent storage area <str> needs AIJ sequence number <num>
Explanation: The indicated storage area has been marked inconsistent with the rest of the database. The AIJ file with the indicated sequence number is required to commence recovery of the area. If the sequence number of the AIJ file is different than the indicated sequence number, recovery of the area will not be performed.
User Action: No user action is required. This message is informational only.
40.23 – AIJBADPAGE
inconsistent page <num> from storage area <str> needs AIJ sequence number <num>
Explanation: The indicated page has been marked inconsistent with the rest of the database. The AIJ file with the indicated sequence number is required to commence recovery of the page. If the sequence number of the AIJ file is different than the indicated sequence number, recovery of the page will not be performed.
User Action: No user action is required. This message is informational only.
40.24 – AIJBCKACT
AIJ modify operation not allowed; AIJ backup in progress
Explanation: An AIJ backup is currently in progress. While an AIJ backup is in progress, AIJ modify operations (such as disabling AIJ journaling or changing the default AIJ filename) are not permitted. If the AIJ backup was prematurely terminated by the user, another AIJ backup must complete before AIJ modifications are permitted.
User Action: Allow the AIJ backup to finish before attempting the AIJ modify operation. If the AIJ backup was prematurely terminated by the user, start another AIJ backup and allow it to complete. The AIJ modify operation will then be possible.
40.25 – AIJBCKACTV
journal <str> backup (sequence <num>) already active
Explanation: An AIJ backup is already active for the specified journal. In most cases, the previously active backup is being performed by the background AIJ backup server. This problem only occurs when using the "by-sequence" AIJ backup option, and normally when specifying only a single AIJ sequence number value (i.e. "/SEQUENCE=15").
User Action: Let the active backup finish before attempting to start another AIJ backup operation, or specify both a starting and ending AIJ sequence number (i.e. "/SEQUENCE=(15,15)").
40.26 – AIJBCKBADSEQ
invalid AIJ backup sequence numbers (<num> through <num>)
Explanation: The specified AIJ backup sequence numbers are incorrect.
User Action: Specify the sequence numbers in ascending order.
40.27 – AIJBCKBEG
beginning after-image journal backup operation
Explanation: This is an informational message to inform the user that the after-image backup operation has begun.
User Action: No user action is required.
40.28 – AIJBCKCNFT
cannot specify a backup filename and use SAME AS JOURNAL option
Explanation: An attempt was made to specify an after-image backup filename and use the BACKUP SAME AS JOURNAL option.
User Action: Specify one or the other of the after-image backup options, but not both.
40.29 – AIJBCKCUR
cannot backup current AIJ journal if no other unmodified journals exist
Explanation: An attempt was made to back up the "current" after-image journal, but no other unmodified after-image journals are available. This situation occurs when a "by-sequence" backup is performed in the wrong order; that is, the current after-image journal was backed up when a "modified" lower sequence after-image journal exists.
User Action: Back up the lower-sequence after-image journal first.
40.30 – AIJBCKDIR
AIJ-backup filename "<str>" does not include device and directory
Explanation: The AIJ-backup filename specified does not include a device and directory.
User Action: For maximum protection, you should always include a device and directory in the AIJ-backup file specification, preferably one that is different from both the database device and AIJ device.
40.31 – AIJBCKDONE
AIJ backup completed when accessing unmodified journal <str>
Explanation: An attempt was made to back up an after-image journal that has not been modified. This normally occurs when a "by-sequence" backup is done out of order (for instance, sequence 6 is backed up, then sequences 5 through 7 are attempted).
User Action: In the above example, the backup was completed when the previously backed up AIJ sequence 6 was encountered; the journal containing sequence 5 was fully and safely backed up. Restart the backup with the next journal requiring backup (in the above example, sequence 7).
40.32 – AIJBCKDSBL
database contains no after-image journals that qualify for backup
Explanation: An attempt was made to perform an after-image backup for a database that has after-image journaling disabled and does not have any journals that qualify to be backed up. This situation occurs if there are no after-image journals, or all journals are unmodified and do not require backup.
User Action: No user action is required.
40.33 – AIJBCKEND
after-image journal backup operation completed successfully
Explanation: This is an informational message to inform the user that the after-image backup operation has completed successfully.
User Action: No user action is required.
40.34 – AIJBCKFAIL
the AIJ backup that created the AIJ file did not complete
Explanation: It appears that the AIJ backup process that created the AIJ file currently being recovered failed or was prematurely terminated. When this situation occurs, it is possible that one or more transactions active at the time of the backup failure may not have been recovered completely.
User Action: Roll forward the next AIJ file, which should contain the commit information for any transactions that were not completely recovered. If there are no more AIJ files to be rolled forward, then all transactions have been completely recovered.
40.35 – AIJBCKFIL
no after-image journal backup filename specified
Explanation: An attempt was made to back up an after-image journal, but no backup file name was specified, and the journal did not contain a default backup-file name specification.
User Action: Specify an after-image journal backup filename, or modify the journal to contain a default backup-file name specification.
40.36 – AIJBCKFIX
cannot perform by-sequence AIJ backup of extensible journals
Explanation: An attempt has been made to back up an "extensible" after-image journal using the "by-sequence" command qualifier.
User Action: Do NOT use the "by-sequence" command qualifier when backing up an extensible AIJ journal.
40.37 – AIJBCKGAP
AIJ backup completed after skipping previously backed up journal sequence <num>
Explanation: An attempt was made to back up an after-image journal that does not have the next chronological sequence number. This condition normally occurs when a "by-sequence" operation is done out of order. For instance, sequence 6 is backed up, then sequences 5 through 7 are attempted.
User Action: In the above example, the backup was completed when the previously backed up AIJ sequence 6 was encountered. The journal containing sequence 5 was fully and safely backed up. Restart the backup with the next journal requiring back up (in the above example, sequence 7).
40.38 – AIJBCKHARD
after-image journals cannot be backed up due to unrecoverable data loss
Explanation: An attempt was made to back up an after-image journal after loss of AIJ data has occurred. One or more of the following events may have occurred: 1. An inaccessible journal was deleted. 2. A modified journal was deleted while journaling was disabled. 3. A journal was overwritten. 4. Journal switch-over failed.
User Action: A full database backup must be immediately performed to make the database recoverable again.
40.39 – AIJBCKINAC
AIJ backup completed when accessing inaccessible journal <str>
Explanation: An attempt was made to backup an after-image journal that is not currently accessible.
User Action: The specified after-image journal must be deleted or unsuppressed before the backup will be allowed to proceed.
40.40 – AIJBCKINTR
invalid after-image journal backup interval value "<num>" specified
Explanation: An invalid AIJ journal backup interval was specified.
User Action: The AIJ journal backup interval specifies the number of seconds for which the backup utility will wait. The value must be a positive number, which may include the value "0".
40.41 – AIJBCKMOD
cannot modify AIJ information while backup is active or suspended
Explanation: An attempt was made to modify after-image journal information while an AIJ backup was in progress.
User Action: Wait until the AIJ backup completes.
40.42 – AIJBCKOVR
AIJ backup not possible when modified journals have been overwritten
Explanation: An attempt was made to perform an after-image backup when one or more of the active AIJ journals have been overwritten. Backing up an AIJ journal that has been overwritten is not possible, because AIJ data was lost when the journal was overwritten, making the database non-recoverable. The resulting AIJ backup file could not be used for subsequent AIJ roll-forward operations.
User Action: Perform a full database backup. Once the full database backup has been completed, after-image journal backup operations will again be possible.
40.43 – AIJBCKOVRW
AIJ backup completed when accessing overwritten journal <str>
Explanation: An attempt was made to back up an after-image journal that has been overwritten.
User Action: While the after-image journal backup was in progress, the journal being backed up was overwritten. Consequently, data loss has occurred, and the backup operation cannot continue any further. A full database backup is required.
40.44 – AIJBCKRENAME
/RENAME qualifier invalid when backup filespec also specified
Explanation: The /RENAME qualifier cannot be specified when an AIJ backup filename specification is also specified, since these are conflicting options.
User Action: Specify either the /RENAME qualifier (using "" for the AIJ backup filename specification) or the AIJ backup filename specification, but not both.
40.45 – AIJBCKSEQ
backing up after-image journal sequence number <num>
Explanation: The created after-image backup file will be internally identified with the indicated sequence number. When AIJ files are rolled forward, the roll-forward utility will prompt for specific AIJ sequence numbers. The AIJ file sequence number should be included as a component of any external file identification information, such as magtape labels.
User Action: No user action is required. This message is informational only.
40.46 – AIJBCKSTOP
backup of after-image journal <str> did not complete
Explanation: The AIJ backup operation of the identified journal did not complete, typically because of some previous backup failure condition.
User Action: Restart the AIJ backup operation after correcting the identified problems.
40.47 – AIJBCKSWTCH
journal <str> is busy and AIJ switch-over suspended - add new journal
Explanation: The AIJ switch-over operation is suspended, and the requested AIJ backup operation cannot proceed because active processes require the specified AIJ journal for recovery reasons.
User Action: It is necessary to add a new journal before performing the AIJ backup operation.
40.48 – AIJBCKTHRS
invalid after-image journal backup threshold value "<num>" specified
Explanation: An invalid AIJ journal backup threshold was specified.
User Action: The AIJ journal backup threshold specifies the approximate limit on the size of the journal. The value must be a positive number, which may include the value "0".
40.49 – AIJCCHDIR
AIJ-cache file name "<str>" does not include device and directory
Explanation: The AIJ-cache filename specified does not include a device and directory.
User Action: For maximum protection, you should always include a device and directory in the AIJ-cache file specification, preferably one that is different from both the database device and AIJ device.
40.50 – AIJCONFIRM
Do you wish to continue the roll-forward operation of this journal [<char>]:
Explanation: Continue or terminate the AIJ roll-forward operation with the current journal file.
User Action: Enter 'YES' to continue the roll-forward operation of the journal. Enter 'NO' to terminate the roll-forward operation of the journal. Any response other than 'YES' will also result in the termination of the roll-forward operation.
40.51 – AIJCORRUPT
journal entry <num>/<num> contains one of the following: an AIJBUF with an invalid length; an AIJBL with an invalid length; the start of a new AIJBL before the previous AIJBL is complete; or a new AIJBL that does not have the start flag set
Explanation: The journal contains corruption at the location indicated (record number / block number).
User Action: Contact your Oracle support representative for assistance.
40.52 – AIJCORUPT
The entry for AIJ <str> is marked as being corrupt.
Explanation: The specified AIJ file is corrupt.
User Action: Remove the journal from the set of journals.
40.53 – AIJCURSEQ
specified after-image journal contains sequence number <num>
Explanation: The specified after-image journal contains the indicated sequence number. This sequence number must exactly match that expected by the roll-forward utility.
User Action: No user action is required. This message is informational only.
40.54 – AIJDATLOS
The AIJ file has lost some data.
Explanation: An attempt to write data to the AIJ file failed at some time.
User Action: Verify that you have a full and complete backup of the database that postdates this journal file. That backup is required to ensure recoverability of the database.
40.55 – AIJDELCUR
cannot remove the current AIJ journal "<str>"
Explanation: An attempt was made to remove the AIJ journal currently in use.
User Action: Disable AIJ journaling first, or try to remove the AIJ journal when the journal is no longer in use.
40.56 – AIJDELMOD
cannot remove AIJ journal "<str>" until backed up
Explanation: An attempt was made to remove an AIJ journal that has not yet been backed up.
User Action: Disable AIJ journaling first, or back up the AIJ journal.
40.57 – AIJDEVDIR
AIJ filename "<str>" does not include a device/directory
Explanation: The after-image journal file name specified does not include a device and directory.
User Action: For maximum protection, you should always include a device and directory in the file specification, preferably one that is different from the database device.
40.58 – AIJDISABLED
after-image journaling must be enabled for this operation
Explanation: You attempted to perform an after-image journal operation, such as a backup of the journal file, for a database that has after-image journaling disabled.
User Action: Enable after-image journaling for your database, and try the backup again at some later time.
40.59 – AIJDSBLCUR
cannot manually suppress the current AIJ journal "<str>"
Explanation: An attempt was made to manually suppress the AIJ journal currently in use.
User Action: Disable AIJ journaling first, or try to suppress the AIJ journal when the journal is no longer in use.
40.60 – AIJDSBLMOD
cannot manually suppress AIJ journal "<str>" until backed up
Explanation: An attempt was made to manually suppress an AIJ journal that has not yet been backed up.
User Action: Disable AIJ journaling first, or back up the AIJ journal.
40.61 – AIJDUPSVRNAM
duplicate "Hot Standby" server name
Explanation: The specified "Hot Standby" server name is a duplicate of an existing server name on this node.
User Action: Specify another server name.
40.62 – AIJEMPTY
AIJ file <str> is empty.
Explanation: Every active AIJ file must have an OPEN record as the first record in the file. The AIJ file named in the message has no data in it.
User Action: Back up the database, and recreate the AIJ files.
40.63 – AIJENABLED
after-image journaling must be disabled
Explanation: You attempted to perform an operation that requires after-image journaling to be disabled, but the database still has after-image journaling enabled.
User Action: Disable after-image journaling for your database and try the operation again. After the operation has completed, you can enable after-image journaling again.
40.64 – AIJENBOVR
enabling AIJ journaling would overwrite an existing journal
Explanation: Enabling after-image journaling would result in an existing AIJ journal being overwritten, which would result in the loss of AIJ data, making the database non-recoverable.
User Action: Modify the database to allow after-image journals to be overwritten, or add a new AIJ journal.
40.65 – AIJFILEGONE
continuing with AIJ modification operation
Explanation: When an attempt was made to disable AIJ journaling or to change the default AIJ filename, the active AIJ file could not be opened. This condition typically occurs only for catastrophic reasons; therefore, the AIJ file is assumed to have contained some data records, which are presumed to have been lost.
User Action: No user action is required. This message is informational only.
40.66 – AIJFILERS
<num> error(s) validating after-image journal
Explanation: Errors were found during validation of an after-image journal file.
User Action: Back up your database and create a new after-image journal file.
40.67 – AIJFNLSEQ
to start another AIJ file recovery, the sequence number needed will be <num>
Explanation: This message informs the user what the next AIJ file sequence number will be. AIJ file sequence numbers are modified for a variety of reasons (such as performing an AIJ backup, enabling or disabling AIJ logging, etc.).
User Action: No user action is required. This message is informational only.
40.68 – AIJFULL
The AIJ files are full
Explanation: An attempt to write data to the AIJ file failed because the AIJ files are full.
User Action: Add new AIJ files if you have some empty journal file slots available or perform a noquiet-point AIJ backup to free some AIJ space.
40.69 – AIJGOODAREA
storage area <str> is now consistent
Explanation: The indicated storage area has been marked consistent with the rest of the database.
User Action: No user action is required. This message is informational only.
40.70 – AIJGOODPAGE
page <num> from storage area <str> is now consistent
Explanation: The indicated page has been marked consistent with the rest of the database.
User Action: No user action is required. This message is informational only.
40.71 – AIJHRDENB
cannot unsuppress an AIJ journal that has hard data loss
Explanation: An attempt was made to unsuppress an AIJ journal that experienced hard data loss. This is not permitted because it would possibly leave the database in a non-recoverable state.
User Action: The AIJ journal must be removed.
40.72 – AIJISOFF
after-image journaling has been disabled
Explanation: After-image journaling has been disabled. The database is no longer recoverable. It is highly recommended that after-image journaling be re-enabled as soon as possible.
User Action: No user action is required.
40.73 – AIJISON
after-image journaling has been enabled Explanation: After-image journaling has been enabled. All subsequent database operations will be journaled to the "current" journal. User Action: A full database backup should be performed.
40.74 – AIJJRNBSY
journal <str> is busy and cannot be backed up Explanation: An attempt has been made to back up an after-image journal that is currently required for process recovery. The journal is considered to be "busy" until no process requires the journal for recovery. User Action: Use the /WAIT command qualifier to indicate that the after-image backup is to "wait" for the journal to become available; that is, the journal becomes available for backup when no more processes require it for recovery.
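For example, to direct the AIJ backup to wait until the busy journal becomes available (database and file names are illustrative):

$ RMU/BACKUP/AFTER_JOURNAL/WAIT MF_PERSONNEL DISK3:[BCK]AIJ_BACKUP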
40.75 – AIJLSSDONE
"Hot Standby" has been shutdown Explanation: "Hot Standby" has been terminated. User Action: Restart the database replication operation.
40.76 – AIJMINSZ
allocation size is <num> blocks due to suspended AIJ switch-over or active replication Explanation: The specified AIJ journal file allocation size was overwritten with the optimal size indicated. This action was taken to meet the requirements of the suspended AIJ switch-over condition. User Action: None.
40.77 – AIJMODOBS
cannot use deprecated modification syntax with new AIJ features Explanation: An attempt was made to modify an AIJ journal using deprecated syntax in a database environment where advanced AIJ journaling features are in use. User Action: The enhanced AIJ journal modification syntax must be used in an environment where advanced AIJ journaling features, such as multiple AIJ journals, are in use.
40.78 – AIJMODSEQ
next AIJ file sequence number will be <num> Explanation: This message informs the user what the next AIJ file sequence number will be. AIJ file sequence numbers are modified for a variety of reasons (for example, performing an AIJ backup, or enabling or disabling AIJ logging). User Action: No user action is required. This message is informational only.
40.79 – AIJMODSWTCH
AIJ switch-over suspended - add new journal or backup current Explanation: The AIJ switch-over operation is suspended; the requested operation will not succeed and may result in the database being shut down. User Action: Add a new AIJ journal or, if possible, back up the existing journals.
40.80 – AIJMOREWORK
active transactions will be aborted if you terminate recovery Explanation: One or more active transactions will be aborted if AIJ recovery is terminated. User Action: No user action is required. This message is informational only. This message supplements the AIJNXTSEQ message.
40.81 – AIJNAMREQ
AIJ name or filespec necessary for modify or delete operations Explanation: In order to modify or delete an existing AIJ journal, either the AIJ name or the exact filename specification is required. User Action: Specify either the AIJ name or the exact filename specification, including the VMS version number.
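For example, to delete a journal by its AIJ name (database and journal names are illustrative; confirm the exact qualifier spelling in the RMU Set After_Journal reference):

$ RMU/SET AFTER_JOURNAL/DROP=(NAME=AIJ_TWO) MF_PERSONNEL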
40.82 – AIJNOACTIVE
there are no active transactions Explanation: Upon completion of the roll-forward operations for the current AIJ file, no transactions remain active. The AIJ recovery process can be terminated without the loss of transaction data. User Action: No user action is required. This message is informational only.
40.83 – AIJNOBACKUP
AIJ contains no transactions that qualify for backup Explanation: An attempt was made to back up an after-image journal file that does not have any records that qualify to be backed up. This situation occurs if the oldest active checkpoint record is in the first block of the AIJ. This restriction is necessary to guarantee that all transactions for this process will be recoverable in the event of unexpected process failure. This message is applicable only if the "fast commit" feature is enabled. User Action: The offending processes must commit or roll back their current transactions, or unbind from the database.
40.84 – AIJNOENABLED
after-image journaling has not yet been enabled Explanation: The after-image journal roll-forward operation has completed, but AIJ logging has not yet been enabled. This message is a reminder to the user to enable AIJ logging, if desired. User Action: If AIJ logging is desired, enable it. Otherwise, no user action is required. This message is informational only.
40.85 – AIJNOEXT
extraction of this journal must start with sequence <num> Explanation: The AIJ file supplied was created subsequent to the expected AIJ journal. Usually, this condition occurs for the following reasons: 1) an incorrect AIJ file or VMS file "version" was specified, 2) the supplied AIJ file was not created for this database, 3) AIJ logging was disabled and then later enabled, or 4) a transaction is continued in this journal from a previous journal. User Action: This is a fatal condition; extraction of the AIJ journal CANNOT start with this journal. You MUST start recovery with the AIJ journal indicated by the preceding AIJSEQAFT or AIJSEQPRI message.
40.86 – AIJNOOVR
AIJ initialization not possible when journals have not been overwritten Explanation: An attempt was made to perform an after-image initialization when none of the active AIJ journals have been overwritten. Resetting an AIJ journal that has not been overwritten is not possible, because AIJ data will be lost, making the database non-recoverable. User Action: None.
40.87 – AIJNORCVR
recovery must start with journal sequence <num> Explanation: The AIJ file supplied was created subsequent to the expected AIJ journal. Usually, this condition occurs for the following reasons: 1) an incorrect AIJ file or VMS file "version" was specified, 2) the supplied AIJ file was not created for this database, 3) AIJ logging was disabled and then later enabled, or 4) a transaction is continued in this journal from a previous journal. User Action: This is a fatal condition; recovery of the AIJ journal CANNOT start with this journal. You MUST start recovery with the AIJ journal indicated by the preceding AIJSEQAFT or AIJSEQPRI message.
40.88 – AIJNOTENA
After-image journaling is not enabled Explanation: Partial "by area" backups are of very limited use if after-image journaling is disabled. User Action: Use the RMU Set After command or the SQL ALTER DATABASE statement to enable after-image journaling.
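For example, journaling can be enabled from DCL or from SQL (database, journal, and file names are illustrative; confirm the exact syntax in the RMU and SQL references):

$ RMU/SET AFTER_JOURNAL/ENABLE/ADD=(NAME=AIJ_ONE, FILE=DISK2:[JNL]AIJ_ONE) MF_PERSONNEL
SQL> ALTER DATABASE FILENAME MF_PERSONNEL JOURNAL IS ENABLED;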
40.89 – AIJNOTFND
expected after-image file <str> not found Explanation: The after-image journal file could not be opened. The file has been deleted or renamed, or it is corrupted. User Action: Execute an RMU BACKUP operation on your database and reinitialize a journal file.
40.90 – AIJNOTON
AIJ journaling was not active when the database was backed up Explanation: AIJ journaling was not activated when the database backup was created. However, AIJ journaling may have been activated subsequent to the database backup, and AIJ recovery may be necessary to fully complete the database restore operation. User Action: None.
40.91 – AIJNXTFIL
enter the next AIJ file name, or enter return to terminate: Explanation: Enter the name of another AIJ file to be rolled forward. If no AIJ file name is entered, the roll-forward operation is terminated. User Action: Enter the name of the next AIJ file to be rolled forward. If you wish to terminate the roll-forward operation, simply press Return.
40.92 – AIJNXTSEQ
to continue this AIJ file recovery, the sequence number needed will be <num> Explanation: This message informs the user what the next AIJ file sequence number will be. AIJ file sequence numbers are modified for a variety of reasons (for example, performing an AIJ backup, or enabling or disabling AIJ logging). User Action: No user action is required. This message is informational only.
40.93 – AIJONEDONE
AIJ file sequence <num> roll-forward operations completed Explanation: The roll-forward operations for the AIJ file with the indicated sequence number have been successfully completed. Note that in some cases, no transactions may have been applied; this is normal. User Action: No user action is required. This message is informational only.
40.94 – AIJOPTRST
Optimized AIJ journal will not be applied during restart Explanation: An optimized AIJ was encountered during a restarted AIJ roll-forward operation. Since an optimized AIJ journal only contains 1 real transaction, nothing in the AIJ journal can be applied if transaction recovery has not yet commenced. Therefore, the AIJ journal is simply read but not applied to the database. User Action: None.
40.95 – AIJOPTSUC
AIJ optimization completed successfully Explanation: An AIJ optimization has completed successfully. User Action: No user action is required.
40.96 – AIJOVRINIT
overwritten AIJ journal <str> has been re-initialized Explanation: An "overwritten" AIJ journal has been re-initialized. This makes the AIJ journal immediately available for future re-use. User Action: None.
40.97 – AIJPREPARE
<num> of the active transactions prepared but not yet committed or aborted Explanation: Upon completion of the roll-forward operations for the current AIJ file, one or more transactions remain both active and prepared. That is, the commit or rollback information either resides in the next AIJ file to be processed, or the outcome can be determined using DECdtm upon completion of the recovery operation. User Action: No user action is required. This message is informational only.
40.98 – AIJQUIETPT
AIJ quiet-point backup required when commit-to-journal enabled Explanation: You attempted to perform a no-quiet-point backup of an after-image journal file while the commit-to-journal feature was enabled. User Action: Either disable the commit-to-journal feature, or use the quiet-point AIJ backup mechanism.
40.99 – AIJRECARE
Recovery of area <str> starts with AIJ file sequence <num> Explanation: To complete the database restore operation, AIJ recovery of the indicated area should be performed starting with the AIJ file that contains the indicated sequence number. If there are no AIJ files to be recovered, then the database restore operation is complete. User Action: None.
40.100 – AIJRECBEG
recovering after-image journal "state" information Explanation: When performing a full database restore operation, the restore utility attempts to recover the "state" information of any after-image journals that were available at the time of the backup operation. Recovering the after-image journal information permits subsequent "automatic" (i.e., hands-off) AIJ-recovery operations. User Action: No user action is required.
40.101 – AIJRECEND
after-image journal "state" recovery complete Explanation: The after-image journal "state" recovery operation has completed. User Action: No user action is required.
40.102 – AIJRECESQ
AIJ roll-forward operations terminated due to sequence error Explanation: Instead of specifying another AIJ file to be rolled forward, the AIJ roll-forward operations were prematurely terminated because the AIJ files were out of sequence. In this case, it is possible that one or more active transactions were aborted by the system. User Action: Redo the RMU/RECOVER with the correct sequence of AIJ files, continue rolling forward with the next AIJ in the correct sequence, or specify /COMMIT=CONTINUE to continue with the next AIJ file after skipping the missing AIJ file.
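For example, to skip a missing journal and continue with the next available one (database and file names are illustrative; the /ROOT qualifier is assumed here to identify the database root):

$ RMU/RECOVER/ROOT=MF_PERSONNEL/COMMIT=CONTINUE AIJ_BCK_3, AIJ_BCK_5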
40.103 – AIJRECFUL
Recovery of the entire database starts with AIJ file sequence <num> Explanation: If you require restoration of the database to the most recent state, AIJ recovery should be performed starting with the AIJ file that contains the indicated sequence number. If there are no AIJ files to be recovered, then the database restore operation is complete. User Action: None.
40.104 – AIJRECTRM
AIJ roll-forward operations terminated at user request Explanation: Instead of specifying another AIJ file to be rolled forward, the user specified that AIJ roll-forward operations should be prematurely terminated. In this case, it is possible that one or more active transactions were aborted by the system. User Action: No user action is required. This message is informational only.
40.105 – AIJREMCUR
cannot remount the current AIJ journal "<str>" Explanation: An attempt was made to remount the AIJ journal currently in use. User Action: Disable AIJ journaling first, or try to remount the AIJ journal when the journal is no longer in use.
40.106 – AIJREMMOD
cannot remount AIJ journal "<str>" due to hard data loss Explanation: An attempt was made to remount an AIJ journal that has experienced data loss. This is not permitted. User Action: None.
40.107 – AIJREMOK
AIJ journal "<str>" is already fully accessible Explanation: An attempt was made to remount an AIJ journal that is already fully accessible. User Action: None.
40.108 – AIJROOSEQ
starting after-image journal sequence number required is <num> Explanation: The after-image journal sequence number indicated corresponds to the first AIJ file that can be rolled forward. If the sequence number of the AIJ file to be rolled forward does not exactly match the indicated sequence number, no transactions will be applied. User Action: No user action is required. This message is informational only.
40.109 – AIJRSTAVL
<num> after-image journal(s) available for use Explanation: This message indicates the number of after-image journals that were successfully restored. One or more of these journals may actually be modified, but all of them are valid after-image journals for the database. User Action: No user action is required.
40.110 – AIJRSTBAD
journal is currently marked inaccessible Explanation: The journal that is in the process of being restored was marked as being inaccessible. Consequently, this journal cannot be restored. User Action: No user action is required.
40.111 – AIJRSTBEG
restoring after-image journal "state" information Explanation: When performing a full database restore operation, the restore utility attempts to restore the "state" information of any after-image journals that were available at the time of the backup operation. Restoring the after-image journal information permits subsequent "automatic" (i.e., hands-off) AIJ-recovery operations. User Action: No user action is required.
40.112 – AIJRSTDEL
journal "<str>" filename "<str>" has been removed Explanation: The indicated after-image journal could not be successfully restored. Therefore, the information regarding the journal has been removed from the database. Note, however, that the specified filename was NOT deleted. User Action: No user action is required.
40.113 – AIJRSTEND
after-image journal "state" restoration complete Explanation: The after-image journal restore operation has completed. User Action: No user action is required.
40.114 – AIJRSTINC
after-image journal sequence numbers are incompatible Explanation: The sequence number stored in the header of the after-image journal does not correspond to the sequence number stored in the database. Typically, this situation occurs if the after-image journal was modified or backed up AFTER the database backup was made. As a result, the journal information cannot be restored in the database. However, the on-disk after-image journal may be acceptable for subsequent roll-forward operations. User Action: No user action is required.
40.115 – AIJRSTJRN
restoring journal "<str>" information Explanation: The specified after-image journal was available when the database was originally backed up, and restoration of the journal "state" will be attempted. User Action: No user action is required.
40.116 – AIJRSTMOD
<num> after-image journal(s) marked as "modified" Explanation: This message indicates the number of after-image journals that were successfully restored, but contain data that needs to be backed up. User Action: No user action is required.
40.117 – AIJRSTNMD
journal has not yet been modified Explanation: The indicated after-image journal has not yet been modified, and is available for immediate use. Note that at least one unmodified after-image journal is required before journaling can be enabled. User Action: No user action is required.
40.118 – AIJRSTROOT
original database root file "<str>" still exists Explanation: An after-image journal cannot be restored if the database for which it was originally created still exists. User Action: No user action is required.
40.119 – AIJRSTSEQ
journal sequence number is "<num>" Explanation: The indicated after-image journal was successfully restored. This message identifies the sequence number of the journal. User Action: No user action is required.
40.120 – AIJRSTSUC
journal "<str>" successfully restored from file "<str>" Explanation: The indicated after-image journal was successfully restored. User Action: No user action is required.
40.121 – AIJSEQAFT
incorrect AIJ file sequence <num> when <num> was expected Explanation: The AIJ file supplied was created subsequent to the expected AIJ file. Usually, this condition occurs for the following reasons: 1) an incorrect AIJ file or VMS file "version" was specified, 2) the supplied AIJ file was not created for this database, or 3) AIJ logging was disabled and then later enabled. User Action: The utility will prompt for confirmation that the supplied AIJ file is valid. If AIJ logging was disabled and then later enabled without any intervening database transaction activity, then confirming the AIJ file will permit the roll-forward operation to continue applying all transactions contained in the AIJ file. Otherwise, the AIJ file should be rejected and the correct AIJ file specified. Should confirmation be given for an incorrect AIJ file, no transactions will be applied.
40.122 – AIJSEQBCK
cannot find an AIJ journal with sequence number <num> Explanation: A "by-sequence" after-image journal backup operation was attempted with a sequence number that did not currently exist for any known AIJ journal. User Action: Specify a valid AIJ sequence number, or perform a full AIJ backup by not specifying the "by-sequence" command qualifier.
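For example (the spelling of the by-sequence qualifier is assumed here as /SEQUENCE; confirm it in the RMU Backup After_Journal reference, and note that the database and file names are illustrative):

$ RMU/BACKUP/AFTER_JOURNAL/SEQUENCE=42 MF_PERSONNEL DISK3:[BCK]AIJ_BACKUP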
40.123 – AIJSEQPRI
AIJ file sequence number <num> created prior to expected sequence <num> Explanation: The after-image journal supplied was created prior to the expected AIJ file. Usually, this condition occurs for the following reasons: 1) an incorrect AIJ file or VMS file "version" was specified, 2) the supplied AIJ file was not created for this database, or 3) a database backup was performed subsequent to the AIJ backup. User Action: No user action is required. This message is informational only. The AIJ roll-forward operation will continue to completion, although no transactions will be applied from the AIJ file.
40.124 – AIJSIGNATURE
standby database AIJ signature does not match master database Explanation: Either the number of AIJ journal slots ("reserved") or the specific journal allocation sizes are not identical on the master and standby databases. User Action: Make sure both the master and standby database AIJ journal configurations are identical. Ensure that the AIJ journal device "cluster size" is identical on both the master and standby databases.
40.125 – AIJTADINV
after-image file "<str>", contains incorrect time stamp expected between <time> and <time>, found: <time> Explanation: The time stamp on the after-image journal file specifies a time later than the current time or a time earlier than the time that the database could have been created. Such a time is incorrect. Verification of the root continues. User Action: Execute an RMU BACKUP operation on your database and reinitialize a journal file.
40.126 – AIJTERMINATE
inaccessible AIJ file forced image exit to protect database Explanation: To maintain the integrity of the database, the database system has forced your image to exit. An error has been encountered with one or more of the after-image journals that could jeopardize your ability to recover the database should it become necessary to restore and recover it. Until the journaling problem has been remedied, no further updates to the database are allowed. User Action: The RMU or DBO /DUMP/HEADER=JOURNAL command will display the current state of the journals. Various remedies are possible, depending on the error encountered. Contact Oracle Support if you have questions on how to fix the problem. Typically, disabling and re-enabling journaling is the simplest way to restore operation of the database. This can be done using the DBO or RMU SET AFTER command, or from interactive SQL. After the journaling problem has been resolved, a full database backup must be done to ensure that the database can be restored and recovered successfully in the future.
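For example, to display the current journal state for a database named MF_PERSONNEL (database name illustrative):

$ RMU/DUMP/HEADER=JOURNAL MF_PERSONNEL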
40.127 – AIJVNOSYNC
AIJ file <str> synchronized with database Explanation: When recovering a database for which AIJ journaling was enabled, it may be necessary to synchronize information in the AIJ file with information in the database root file. This is necessary to ensure that subsequent AIJ recovery operations are successful. User Action: No user action is required. This message is informational only.
40.128 – AIJWASON
AIJ journaling was active when the database was backed up Explanation: AIJ journaling was activated when the database backup was created. Therefore, AIJ recovery may be necessary to fully complete the database restore operation. User Action: None.
40.129 – AIJ_DISABLED
after-image journaling is being disabled temporarily for the Convert operation Explanation: The user's database has after-image journaling enabled. Journaling must be disabled during an RMU convert operation. User Action: Use Rdb to disable after-image journaling on the database before conversion, or convert the database with the understanding that any existing backups of the database will be obsolete. A full backup must be done immediately upon completion of the RMU Convert command.
40.130 – AIPBADNXT
in area inventory page <num> the pointer to the next area inventory page <num> does not point to another area inventory page, bad page or wrong pointer Explanation: An area inventory page contains a bad pointer to the next area inventory page. The page referred to is not an AIP page. User Action: Correct the error with the RMU Restore command and verify the database again.
40.131 – AIPENTMBZ
entry <num> in area inventory page <num> has never been used, but is not empty. It should contain all zeroes Explanation: The page contains an entry that is not in use, but is not empty. User Action: Correct the error with the RMU Restore command and verify the database again.
40.132 – AIPLAREID
area inventory page <num> entry <num> contains a reference to logical area <num> that is nonexistent Explanation: The AIP page entry contains a reference to a logical area of which it is not a part. User Action: Correct the error with the RMU Restore command and verify the database again.
40.133 – ALRATTDB
command not allowed - already attached to a database Explanation: The RMU ALTER ATTACH command is being issued when another database is currently attached. User Action: Issue an RMU ALTER DETACH command to detach from the current database. Then reissue the RMU ALTER ATTACH command.
40.134 – ALSACTIVE
Database replication is active Explanation: Certain database operations, such as terminating the AIJ Log Server, cannot be performed while database replication is active. User Action: Terminate database replication and re-attempt the operation.
40.135 – ALSNACTIVE
Database replication is not active Explanation: Database replication is not active for the specified database. User Action: Verify that the database is being replicated.
40.136 – ALSNAVAIL
"Hot Standby" not available or improperly installed Explanation: "Hot Standby" cannot be started because it has not been installed. User Action: Make sure the "Hot Standby" component has been properly installed.
40.137 – ALSNBEGUN
database replication has not previously been started Explanation: Database replication has not yet been started for this database. The replication-start command was specified without identifying the master or standby database; that form of the command can be used only when database replication has been previously started. User Action: Identify the master or standby database on the replication-start command, or start database replication first.
40.138 – ALSNOOUT
AIJ Log Server does not have an output file Explanation: The AIJ Log Server process does not have an output file associated with it. User Action: Use the /OUTPUT qualifier to specify an output filename when the AIJ Log Server process is started.
40.139 – ALSNRUNNING
AIJ Log Server process is not active Explanation: The AIJ Log Server process is not running on the current node. User Action: Verify that the AIJ Log Server has been started.
40.140 – ALSRUNNING
AIJ Log Server process is already running Explanation: The AIJ Log Server process has already been started on the current node. User Action: No action is required.
40.141 – ALTWARN
***** WARNING! ***** Marking a storage area or page consistent does not remove the inconsistencies. Remove any inconsistencies or corruptions before you proceed with this action. Explanation: This command merely changes the marked state of the area or page from inconsistent to consistent. The object was originally marked inconsistent because its contents are potentially incompatible with the remainder of the database. Normal usage is to recover from after-image journals to return the object to consistency. Changing the object's state to consistent without actually updating its contents may cause unexpected failures or wrong results to be returned when the object is used. User Action: Either use recovery from after-image journals to make the area or page consistent, or locate and correct any inconsistencies before proceeding. As a temporary measure, you may also proceed in order to evaluate the situation or to provide access for corrective action until corrective action can be taken.
40.142 – AREABUSY
usage of storage area <str> conflicts with a co-user Explanation: You attempted to ready an area that is already being accessed by another user, and that usage mode is incompatible with the usage mode you requested. User Action: Wait until the storage area you requested is available, and try again, or ready the area with the WAIT option.
40.143 – AREAEXCL
Nothing restored for storage area - <str> Explanation: Either the area that was to be restored was excluded from the specified backup file, or this incremental restore contained no data for the area. User Action: If you expected some data to be restored, select a backup that contains the area and restore the area from that backup, or restore the entire database from a full backup that includes all defined areas.
40.144 – AREAINCON
area <str> is marked as inconsistent. Explanation: The area has been restored from a backup but not recovered. User Action: Make the area consistent by using the RMU Recover command.
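For example, to recover a single inconsistent area from a backed-up journal (database, area, and file names are illustrative; the /ROOT qualifier is assumed here to identify the database root):

$ RMU/RECOVER/ROOT=MF_PERSONNEL/AREA=EMPIDS_LOW AIJ_BACKUP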
40.145 – AREAISCRPT
Area <str> is already marked corrupt. Explanation: You are trying to mark as corrupt an area that is already marked corrupt. User Action: Specify a different area, and enter the command again.
40.146 – AREAUNTILIGN
/UNTIL qualifier ignored when /AREA qualifier specified Explanation: The /UNTIL qualifier to the AIJ roll-forward utility is ignored when the /AREA qualifier is also specified. User Action: None necessary.
40.147 – AREA_CORRUPT
storage area <str> is corrupt Explanation: The storage area has been corrupted by an abnormal termination of a BATCH UPDATE run unit. It cannot be readied. User Action: Either try to fix the problem by verifying the area and then altering the corrupt pages, or reload or restore the area.
40.148 – AREA_DELETED
area is not active or was previously deleted Explanation: An attempt was made to ready an area which does not exist.
40.149 – AREA_INCONSIST
storage area <str> is inconsistent Explanation: The storage area has been marked inconsistent with the rest of the database. It cannot be readied. User Action: Recover the area to make it consistent.
40.150 – AREA_RESTRUCT
storage area <str> is under restructure Explanation: An attempt was made to ready an area that either is currently being restructured or has recently been restructured. User Action: See your DBA to have the areas released.
40.151 – ASCLENGTR
ASCII length must be greater than 0 Explanation: The value supplied as the ASCII length must be greater than 0. User Action: Issue the command with a length greater than 0.
40.152 – AUDITRECINV
Invalid record read from audit file, record number <num>. Explanation: An audit file record is invalid and cannot be loaded. User Action: Correct the problem, or repeat the load starting from the record after the bad record by using /SKIP=n, where n is the record number in the error message.
40.153 – AUTH_FAILURE
Network authentication/authorization failure. Invalid username/password or client not authorized to access specified service. Explanation: Network authentication or authorization failed. Either the supplied username or password is invalid, or the client is not authorized to access the specified service. User Action: Verify that the username and password are correct and that the client is authorized to access the specified service, then retry the operation.
40.154 – AUTH_REMEXEC
Explicit authentication required for remote executor creation Explanation: An attempt was made to create an RMU executor on a remote node by a client that was implicitly authenticated. When executing a parallel backup across a cluster the logicals SQL_USERNAME and SQL_PASSWORD must be defined. User Action: Define SQL_USERNAME and SQL_PASSWORD logicals to be your username and password, respectively.
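For example (username and password values are illustrative):

$ DEFINE SQL_USERNAME "SMITH"
$ DEFINE SQL_PASSWORD "EAGLE"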
40.155 – BACFILCOR
Backup file is corrupt Explanation: The backup file is corrupt, most likely as a result of truncation. User Action: This error is fatal. No restoration of the database is possible using this backup file.
40.156 – BACFILCOR_01
Converted root is version <num>.<num>, expected version <num>.<num> Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.157 – BACFILCOR_02
Unable to read buffer from backup file Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.158 – BACFILCOR_03
Unexpected condition after end of volume detected Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.159 – BACFILCOR_04
Buffer from backup file is for area <num>, expected area <num> Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.160 – BACFILCOR_05
Data page was restored before the data header Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.161 – BACFILCOR_06
Data page cannot be restored without data header Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.162 – BACFILCOR_07
Unrecognized backup file record type Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.163 – BACFILCOR_08
Backup file record type must be subschema or security schema Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.164 – BACFILCOR_09
Invalid ID for ACL or name table Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.165 – BACFILCOR_10
Page <num> is not in the backup file Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.166 – BACKUPCHANGE
unexpected end of backup file on <str> Explanation: A block read from the backup file is not from the current backup file. This is likely the result of corrupted media or an incomplete backup. User Action:
40.167 – BADABMFET
error fetching ABM page <num> in area <str> Explanation: An error occurred during an attempt to fetch the given ABM page. This could be because of a corruption in the RDB$SYSTEM area. User Action: If there is a corruption that causes this error, correct the error with the RMU Restore command, and verify the database again.
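Many of the user actions in this section recommend correcting an error with the RMU Restore command and then verifying the database again. As a sketch only, using a hypothetical database MF_PERSONNEL restored from a hypothetical backup file MF_PERSONNEL.RBF, that sequence might look like this:

$ RMU/RESTORE MF_PERSONNEL.RBF
$ RMU/VERIFY MF_PERSONNEL

The exact qualifiers depend on how the backup was made; see the RMU Restore and RMU Verify help entries.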
40.168 – BADABMIND
max set bit index of area bit map page <num> for logical area <num> out of range expected to be in range 0 : <num>, found: <num> Explanation: The maximum bit set index of the current area bit map page is out of range. User Action: Correct the error with the RMU Restore command and verify the database again.
40.169 – BADABMPAG
error verifying ABM pages Explanation: Errors occurred while verifying the ABM pages. User Action: Verify the logical area corresponding to the corrupt ABM pages.
40.170 – BADABMPTR
invalid larea for ABM page <num> in storage area <num>. The SPAM page entry for this page is for a different larea. SPAM larea_dbid : <num> page larea_dbid: <num>. Explanation: The logical area contained in the page tail of an ABM page is different from the logical area indicated for the page in the SPAM page. User Action: Correct the error with the RMU Restore command and verify the database again.
40.171 – BADACCMOD
Access mode of <num> is invalid for area <str> (FILID entry <num>) Explanation: The FILID entry contains an invalid access mode value. User Action: Restore and recover the database from backup.
40.172 – BADAIJACE
after-image journal is electronic cache Explanation: You have attempted to use the AIJ Cache for Electronic disk for an operation which is not supported. For instance, you may have tried to use the electronic cache as the roll-forward journal, which is incorrect. User Action: Do not use the AIJ Cache for Electronic disk for day-to-day operations. Use the disk-based after-image journals for all roll-forward or AIJ operations.
40.173 – BADAIJBCK
previous AIJ backup did not complete Explanation: It appears that the previous AIJ file backup process, which was started on the indicated date/time, either failed or was prematurely terminated by the user. User Action: No user action required. The current AIJ backup will back up the complete AIJ file to ensure there is no loss of transactions. The AIJ backup file created by the failed backup utility MUST be preserved; DO NOT discard it. Even though the AIJ backup failed, that backup file must be used for successful roll-forward operations.
40.174 – BADAIJFILE
illegal after-image journal format or journal incorrectly mounted Explanation: The file you specified does not appear to be an after-image journal file. For example, when performing an AIJ roll-forward operation using an after-image journal on a magnetic tape, this problem will occur if the tape is incorrectly mounted. User Action: Check the file name and try again. Verify that a magnetic tape was correctly mounted.
40.175 – BADAIJID
the after-image journal contains a bad identification expected: "<str>", found: "<str>" Explanation: The OPEN record of the after-image journal contains the wrong id. User Action: Your after-image journal file cannot be used to roll forward your database. You should back up your database and create a new after-image journal file.
40.176 – BADAIJPN
There is no name associated with AIJ entry <num>. Explanation: There is an active entry in the table of AIJ files in the root file, but there is no name associated with the AIJ entry. User Action: Disable journals, then correct or redefine the journals with the RMU Set After_Journal command or SQL ALTER DATABASE statements. Next, reenable journals and perform a full and complete backup to ensure future recoverability.
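The disable, redefine, reenable, and backup sequence recommended here might be sketched as follows. The database name, journal name, and file specification are hypothetical, and the exact qualifier syntax should be checked against the RMU Set After_Journal help entry:

$ RMU/SET AFTER_JOURNAL/DISABLE MF_PERSONNEL
$ RMU/SET AFTER_JOURNAL/ENABLE/ADD=(NAME=AIJ1,FILE=DISK1:[JNL]AIJ1) MF_PERSONNEL
$ RMU/BACKUP MF_PERSONNEL MF_PERSONNEL.RBF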
40.177 – BADAIJSEQ
AIJ file is incorrect for roll-forward operations Explanation: The specified AIJ file is not the correct file to be rolled forward. Usually, this condition occurs for the following reasons: 1) an incorrect AIJ file or VMS file "version" was specified, 2) the supplied AIJ file was not created for this database, or 3) AIJ logging was disabled and then later enabled. User Action: No user action is required. This message is informational only.
40.178 – BADAIJTYP
the first block of the after-image file should be of an OPEN type expected: "O", found "<str>" Explanation: The after-image journal file (AIJ) does not contain an OPEN record in its first block, and will not be usable to roll forward your database. User Action: Execute an RMU BACKUP operation on your database and reinitialize a journal file.
40.179 – BADAIJUNTIL
date specified by /UNTIL (<time>) has not yet been reached Explanation: The date and time specified by the /UNTIL command qualifier had not yet been reached when after-image journal roll-forward operations were completed. User Action: Another after-image journal, if any, might have to be rolled forward to ensure that all transactions have been applied up to the specified date and time. If no more AIJ files are available, the AIJ roll-forward operations are complete.
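Rolling forward an additional after-image journal up to the specified date might look like the following sketch; the root file name, journal file name, and date are hypothetical, and the /ROOT and /UNTIL qualifier usage should be checked against the RMU Recover help entry:

$ RMU/RECOVER/ROOT=MF_PERSONNEL/UNTIL="31-DEC-2002 17:00:00.00" AIJ_2.AIJ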
40.180 – BADAIJVER
after-image journal version is incompatible with the runtime system Explanation: Your after-image journal file was created with an incompatible version of the software. User Action: Your after-image journal file cannot be used with the version of the software you have installed on your machine. Make sure you are using the correct AIJ journal, or if "multi-version" software is installed, make sure you are using the correct software version.
40.181 – BADAIPARE
storage area <str> contains references to a logical area reserved for the AIP pages Logical area <num>, in storage area <num> Explanation: A storage area other than storage area 1 references the reserved logical area 16385. User Action: Correct the error with the RMU Restore command and verify the database again.
40.182 – BADAIPFET
error fetching AIP page <num> in area <str> Explanation: An error occurred during a fetch of an AIP page from disk. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.183 – BADAIPPAG
error verifying the AIP pages Explanation: Because the AIP pages are used to build the information about the database, they must be correct in order to continue the verification. User Action: Restore the database from the last backup.
40.184 – BADASCTOID
"<str>" is not a valid user identifier Explanation: An error occurred when the rights database was accessed to translate an identifier name to a binary identifier. User Action: See the secondary error message, and supply a valid user identifier.
40.185 – BADBKTDBK
Area <str> Bad logical area DBID in hash bucket logical dbkey <num>:<num>:<num>. Expected <num>, found <num> (hex). Explanation: An invalid logical area identifier was found in the hash bucket dbkey. The id should match that of the hashed index currently being verified. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.186 – BADBKTFLG
Area <str> Flags in hash bucket at logical dbkey <num>:<num>:<num> are not valid. Expected <num>, found <num> (hex). Explanation: The hash bucket flags are not currently used. They should be zero. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.187 – BADBKTFRA
Area <str> Tried to read past end of hash bucket fragment. Fragment at logical dbkey <num>:<num>:<num> is corrupt. Explanation: The length of the given hash bucket is greater than expected. The hash bucket is probably corrupt. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.188 – BADBKTIID
Area <str> Bad index DBID in hash bucket at logical dbkey <num>:<num>:<num>. Expected <num>, found <num> (hex). Explanation: An invalid storage type identifier was found in the hash bucket. The id should be that of the hashed index currently being verified. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.189 – BADBKTRDY
error readying larea <num> for bucket of index <str> Explanation: The logical area corresponding to the hash bucket for the hash index could not be readied. User Action: Check if there are conflicting users of the database. If so, verify this portion of the database when there are no conflicting users. Rebuild the index if it is corrupt.
40.190 – BADBLBHDR
Header information could not be retrieved for segmented string. Explanation: An attempt to get the statistics stored at the beginning of a segmented string failed. User Action: Restore and recover the storage area from a backup.
40.191 – BADBLKSIZE
<str> has inconsistent block size Explanation: Block read from backup file had a different block size than originally written. User Action:
40.192 – BADBNDPRM
bad bind parameter <str> value "<str>" Explanation: The logical bind parameter value is invalid. User Action: See the secondary error message for more information.
40.193 – BADBNDPRM0
bad bind parameter Explanation: A logical bind parameter value is invalid. User Action: See the secondary error message for more information. Because of unfortunate logistics, no further information is available at this point; check the monitor log for more information.
40.194 – BADBOUNDS
value not in range <num> to <num> Explanation: The value of the translated logical name is not in the range of acceptable values. User Action: Delete the logical name, or redefine it with a value in the acceptable range.
40.195 – BADBUFSIZ
buffer size (<num>) is too small for largest page size (<num>) Explanation: The specified buffer size is too small to hold even one page from the storage area with the largest page size. User Action: Specify a buffer size at least as large as the largest page size.
40.196 – BADCCHNAM
record cache "<str>" does not exist Explanation: The specified record cache is not defined in the database. User Action: Please specify a valid record cache name.
40.197 – BADCLTSEQALLOC
<num> client sequences allocated in the root is less than <num> client sequences defined in RDB$SEQUENCES. Explanation: There is an inconsistency between the number of client sequences allocated in the database root and the number of client sequences defined in the system table RDB$SEQUENCES. This inconsistency indicates database corruption.
40.198 – BADCLTSEQMAXID
<num> client sequences allocated in the root is less than the maximum client sequence id of <num> in RDB$SEQUENCES. Explanation: There is an inconsistency between the number of client sequences allocated in the database root and the number of client sequences defined in the system table RDB$SEQUENCES. This inconsistency indicates database corruption.
40.199 – BADCLTSEQUSED
<num> client sequences in use in the root does not equal <num> client sequences defined in RDB$SEQUENCES. Explanation: There is an inconsistency between the number of client sequences in use in the database root and the number of client sequences defined in the system table RDB$SEQUENCES. This inconsistency indicates database corruption.
40.200 – BADCPLBIT
storage area <str> belongs to a single-file database Explanation: The COUPLED bit in the FILID of this area is incorrect. It is currently marked as belonging to a single-file database, while there are many areas in the database. User Action: Correct the error with the RMU Restore command, and verify the database again.
40.201 – BADCURAIJ
The entry that is identified as the current AIJ is not active. Explanation: The entry in the list of AIJ files that is marked as the current AIJ in the root file is not an active entry. User Action: Disable journals, then correct or redefine the journals with the RMU Set After_Journal command or SQL ALTER DATABASE statements. Finally, reenable journals and perform a full and complete backup to ensure future recoverability.
40.202 – BADDATA
error in block <num> of <str> detected during backup Explanation: Corruption of a block of the backup file was detected. User Action: None required; however, you should investigate the possible sources of the corruption.
40.203 – BADDATDEF
illegal default format for date string Explanation: The logical name, SYS$DATE_INPUT, represents the default format for a date string. It is a three-character field (MDY, DMY, etc.), in which M = month, D = day, and Y = year. User Action: Redefine the logical name with a legal date-string format.
40.204 – BADDBID
The DBID field for FILID entry number <num> does not match its position in the FILID list in the root. Instead, the DBID field contains <num>. Explanation: The FILID list contains information about all areas in the database. The DBID field in an entry for an area in the FILID list should match its position in the list. The only exception is for inactive entries in the list, where the DBID field may contain 0. This error indicates that the entry does not contain a valid value in its DBID field. User Action: Restore and recover the database from backup.
40.205 – BADDBKFET
Error fetching dbkey <num>:<num>:<num> Explanation: An error occurred during a fetch of the line with the given dbkey. User Action: Verify the page in question, and check if the database needs to be restored.
40.206 – BADDBNAME
can't find database root <str> Explanation: The database root file you specified could not be accessed. User Action: Examine the associated messages to determine the reason for failure.
40.207 – BADDBPRO
<str> file does not belong to database <str> found references to database <str> Explanation: While validating the prologue page of an area it was discovered that it contained references to a database other than the one being verified. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again. If this error is encountered while performing an RMU Convert command, then it indicates that the database file was manually moved from its original location. If this is the case, then its new location must be made known through the use of an RMU Convert options file and the RMU Convert command must be rerun using the options file.
40.208 – BADDBTDBK
Area <str> Bad logical area DBID in duplicate hash node dbkey <num>:<num>:<num>. Expected <num>, found <num> (hex). Explanation: An invalid logical area identifier was found in the duplicate hash node dbkey. The id should be that of the hashed index currently being verified. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow-up with another verification of the database.
40.209 – BADDENSITY
The specified tape density is invalid for this device Explanation: This tape device does not accept the specified density. User Action: Specify a valid tape density for this device or use the default density.
40.210 – BADDHSHDT
error fetching data record from duplicate hash bucket Explanation: An error occurred during a fetch of a data record from a duplicate hash bucket. User Action: Ascertain if the index is corrupt by manually verifying related system records and hash buckets after dumping pages of the database. If the index is corrupt, rebuild it.
40.211 – BADDSGLEN
Bad length of data segment in segmented string. Expected: <num> found: <num>. Explanation: The actual length of a data segment (found part) of a segmented string is not equal to the length of the data segment stored in the corresponding pointer segment (expected part) of the segmented string. See accompanying messages for the segmented string context. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.212 – BADDSGPSG
Bad number of data segments found in segmented string pointer segment. Expected: <num> found: <num>. Explanation: The actual number of data segments (found part) in a pointer segment of a segmented string is greater than the number of data segments stored in the pointer segment (expected part). See accompanying messages for the segmented string context. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.213 – BADENDLVL
Last b-tree level at level <num> had non-null next pointer. Last b-tree node points to logical dbkey <num>:<num>:<num>. Explanation: An index verification error was found when verifying the next b-tree node of a b-tree node. The next b-tree node of a b-tree node B at level L is the next level L node on the right of b-tree node B. The rightmost node at level L should point to the NULL dbkey. In this case it pointed to a different dbkey. User Action: Rebuild the corrupted index and verify the database again.
40.214 – BADENTITY
entity "<str>" does not exist in this database. Explanation: The specified database entity is invalid. User Action: Correct the error and try again.
40.215 – BADEXECCOUNT
Executor count is out of range. Explanation: The plan file specified either too many or too few executors. For the RMU Load command, there must be at least one executor and at most 255 executors. User Action: Specify an appropriate number of executors in the plan file.
40.216 – BADEXTPCT
Extend percent value of <num> is invalid for area <str> (FILID entry <num>) Explanation: The FILID entry contains an invalid extend percent value. User Action: Restore and recover the database from backup.
40.217 – BADFIELDID
Column does not exist in table "<str>". Explanation: The specified column is not in the table. User Action: Correct the error and try again.
40.218 – BADFILID
errors in FILID entries, unable to continue the verification Explanation: The errors for some FILID entries are too serious to continue the verification of the database. User Action: Restore the database from the last backup.
40.219 – BADFILTYP
database file must have extension "<str>" Explanation: All database files must have the specified file type. User Action: You might be attempting to access a non-database file. If not, rename or copy the database file to have the proper type.
40.220 – BADFNMAIJ
after-image journal file contains references to wrong database expected: "<str>", found: "<str>" Explanation: The database root file name stored in your after-image journal file differs from the database being verified. This could be caused by the root file being moved or restored to a different location. It also could mean that the journal file is being used by other databases. This could occur when non-system-wide concealed logical names are used to create databases or AIJ files. User Action: There is a danger that the AIJ is for a database other than the database being verified. You should back up your database and create a new after-image journal file.
40.221 – BADFRACHN
area <str>, page <num>, line <num> unexpected non-secondary fragment Explanation: The current fragment in the fragment chain does not indicate that this is a 'secondary' fragment and the 'primary' fragment in the chain has already been encountered. This message is followed by a line indicating the previous record occurrence of the invalid fragment chain. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.222 – BADFRAEND
area <str>, page <num>, line <num> bad last fragment pointer expected: <num>:<num>, found: <num>:<num> Explanation: The 'final' fragment of the fragment chain was encountered but its fragment pointer did not point back to the 'primary' fragment of the chain. The chain has probably been corrupted; that is, one of the fragments in the chain points to the wrong 'next' fragment. This message is followed by a line indicating the previous record occurrence of the invalid fragment chain. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.223 – BADFRALEN
area <str>, page <num>, line <num> bad expanded fragment length expected: <num>, found: <num> Explanation: The total record length stored in the 'primary' fragment indicated a record length different from the total amount of data in the actual fragment chain. The total record length in the primary fragment may be incorrect or one of the fragments in the chain may have the wrong length or the chain has been corrupted; that is, one of the fragments in the chain points to the wrong 'next' fragment. This message is followed by a line indicating the previous record occurrence of the invalid fragment chain. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.224 – BADFRAPTR
area <str>, page <num>, line <num> bad fragment chain pointer expected <num> through <num>, found page number <num> Explanation: The page number in the fragment pointer of the fragmented storage record points to a page that is out of the page range for the storage area. This message is followed by a line indicating the previous record occurrence of the invalid fragment chain, unless the error is detected in the primary storage segment of the chain. The 'previous segment' line will not appear when this same error occurs, as the bad secondary storage segment is verified out of fragment chain context during segment verification. That is, this message will usually occur twice: once with the 'previous segment' message when the fragment chain is walked to collect the fragmented storage record together for verification, and once without the 'previous segment' message, when the fragment is verified out of fragment chain context. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.225 – BADFRASEG
area <str>, page <num>, line <num> Bad fragment chain. Expected fragment does not exist. Explanation: The line number in the fragment pointer of the previous storage record in the fragment chain points to a nonexistent line. This message is followed by a line indicating the previous record occurrence of the invalid fragment chain. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.226 – BADHDADBK
Bad logical area DBID in hashed index data record dbkey <num>:<num>:<num> Found <num> (hex). Explanation: During the verification operation, an invalid logical area identifier was found in the hashed index data record dbkey. The id should match the relation on which the hashed index is defined. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.227 – BADHSHBKT
area <str>, page <num>, line <num> hash bucket DBID differs from expected DBID expected: <num> (hex), found: <num> (hex) Explanation: The storage record type database id for a hash bucket is a constant value; in this case, it does not equal that constant. This could be a corruption of the system record pointer cluster or the data record itself. User Action: Rebuild the index if the system record pointer cluster is corrupted.
40.228 – BADHSHDAT
error fetching data record from hash bucket <num>:<num>:<num> Explanation: An error occurred during a fetch of a data record from a hash bucket. User Action: Ascertain if the index is corrupt by manually verifying related system records and hash buckets after dumping pages of the database. If the index is corrupt, rebuild it.
40.229 – BADIDXREL
Index <str> either points to a non-existent record or has multiple pointers to a record in table <str>. The logical dbkey in the index is <num>:<num>:<num>. Explanation: An index contains a dbkey for a record but the dbkey is not a valid one for the relation corresponding to the index or the index contains multiple instances of the dbkey. User Action: Recreate the index for the table.
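Recreating an index is done through SQL rather than Oracle RMU. As a sketch only, assuming a hypothetical index EMP_LAST_NAME defined on a hypothetical table EMPLOYEES:

SQL> DROP INDEX EMP_LAST_NAME;
SQL> CREATE INDEX EMP_LAST_NAME ON EMPLOYEES (LAST_NAME);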
40.230 – BADIDXVAL
Statistics API: The info request row index is not valid Explanation: A statistics API request has been made for a row that does not exist. User Action: Correct the row index.
40.231 – BADINCONSISPAG
inconsistent page is corrupt -- not found in Corrupt Page Table Explanation: An attempt was made to fetch an inconsistent page. Furthermore, the page is probably corrupt, because it is not logged in the Corrupt Page Table as an inconsistent page. This page cannot be accessed until it is consistent. User Action: Take the proper action to make the page consistent. For example, perform a RESTORE/RECOVER operation for a data or AIP page, or a REPAIR operation for a SPAM or ABM page.
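The two repair paths mentioned here might be sketched as follows, with hypothetical database, area, backup, and journal names; the RMU Repair qualifiers shown are assumptions that should be checked against the RMU Repair help entry:

$ RMU/RESTORE/AREA MF_PERSONNEL.RBF EMPIDS_LOW   ! data or AIP page
$ RMU/RECOVER MF_PERSONNEL_1.AIJ
$ RMU/REPAIR/SPAMS MF_PERSONNEL                  ! SPAM page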
40.232 – BADLAREA
could not ready logical area <num>, valid logical areas are between <num> and <num> Explanation: The page contains a reference to a logical area that does not exist. User Action: Correct the error with the RMU Restore command and verify the database again.
40.233 – BADLIBLOG
An invalid LIBRARIAN logical name has been specified. Explanation: Incorrect syntax was specified for defining a logical name with the LIBRARIAN qualifier. User Action: Repeat the command using the correct logical name syntax.
40.234 – BADMAXPNO
unable to read last page (<num>) in file <str> Explanation: The attempt to read the last page of the storage area failed. The highest initialized page value in the FILID could be corrupt. User Action: If the FILID is corrupt, restore the database and verify the database root again.
40.235 – BADMESSAGE
Network error: Bad message detected. Explanation: A bad message was detected; possibly a corrupted message. User Action: Contact your Oracle support representative for assistance.
40.236 – BADMETDAT
Conversion not possible because of nonconforming metadata Explanation: The metadata being converted does not conform to the expectations for the version being converted. You can receive this error message as a result of a data corruption or prior unsupported metadata changes. User Action: Use EXPORT - IMPORT to convert this database.
40.237 – BADMODE
<str> is an illegal transaction type Explanation: A legal transaction type is either EXCLUSIVE, PROTECTED, or SHARED. It can be abbreviated down to as little as one character. User Action: Use only EXCLUSIVE, PROTECTED, or SHARED for transaction type mode.
40.238 – BADNODEID
index id invalid for b-tree node <num>:<num>:<num> expected: <num> (hex), found <num> (hex) Explanation: The B-tree node at the given logical dbkey has a different index id than expected. The index id for a nonpartitioned index is the same as the logical area id of the index. The index id for a partitioned index is zero. User Action: Rebuild the index and verify the database again.
40.239 – BADNODLEV
B-tree node at logical dbkey <num>:<num>:<num> not at correct level. Expected level <num>, found <num>. Explanation: The "next b-tree node" of a b-tree node is not at the correct level in the node's b-tree structure. Both nodes should be at the same level. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.240 – BADNUMVAL
<str> is an illegal numeric value for <str> Explanation: A non-numeric character was encountered for a qualifier that takes a numeric value. User Action: Use only numeric characters for this qualifier's value.
40.241 – BADNXTDBK
Bad logical area DBID in next b-tree node logical dbkey <num>:<num>:<num>. Expected <num>, found <num> (hex). Explanation: An invalid logical area identifier was found in a b-tree node's "next b-tree node" dbkey. The logical area identifier of a b-tree node dbkey and its "next b-tree node" dbkey should be equivalent (that is, the two nodes should be from the same logical area). User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.242 – BADNXTNOD
Bad next b-tree node at level <num>. Expected b-tree node at logical dbkey <num>:<num>:<num>. Found next b-tree node at logical dbkey <num>:<num>:<num>. Explanation: An index verification error was found when verifying the "next b-tree node" of a b-tree node. The "next b-tree node" of a b-tree node B at level L is the next level L node on the right of b-tree node B. User Action: Rebuild the corrupted index.
40.243 – BADPAGFET
error fetching page <num> in area <str> Explanation: An error occurred during a fetch of a page from disk. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.244 – BADPAGNUM
page <num> is out of valid range (1:<num>) for physical area <num> Explanation: The page number requested does not fall within the range of pages that exist in the specified physical storage area. Note that a page number of 4294967295 is equal to -1. User Action: Contact your Oracle support representative for assistance.
40.245 – BADPAGRAN
area <str> page number <num> out of range expected between 1 and <num> Explanation: An attempt was made to fetch the given page, but the fetch was not performed, because the page number was out of range for the area. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.246 – BADPAGRED
read requesting physical page <num>:<num> returned page <num>:<num> Explanation: The area or page numbers stored on the database page do not match the area or page numbers of the DBKEY requested to be read from the database. This usually is caused by a hardware problem. User Action:
40.247 – BADPAGSIZ
page size (<num>) conflicts with existing areas (<num>..<num>) Explanation: An attempt was made to define a new storage area with a page size that conflicts with other areas. User Action: Define the area with a page size that is within the range specified.
40.248 – BADPAGSPM
Pages-Per-SPAM value of <num> is invalid for area <str> (FILID entry <num>) Explanation: The FILID entry contains an invalid Pages-Per-SPAM value. User Action: Restore and recover the database from backup.
40.249 – BADPARAM
<str> (<num>) is out of valid range (<num>..<num>) Explanation: An illegal parameter was specified during creation or modification of the database. User Action: Examine your command line for illegal parameter values.
40.250 – BADPCLCNT
Bad pointer cluster count in storage record header. Expected <num>, found <num>. Explanation: The actual pointer cluster count field (found part) in a data record's KODA header portion does not match the pointer cluster count expected for the data record. See accompanying messages for more information. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.251 – BADPROID
<str> file contains a bad identifier Expected "<str>", found "<str>" Explanation: While validating the prologue page of an area, it was discovered that it contained a bad identifier. User Action: This problem may be corrected using RMU RESTORE or SQL IMPORT. If journaling is enabled, you may restore just the affected area and recover it.
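When journaling is enabled, restoring and recovering just the affected area might look like the following sketch; the database, area, backup, and journal names are hypothetical:

$ RMU/RESTORE/AREA MF_PERSONNEL.RBF SALARY_HISTORY
$ RMU/RECOVER/AREA MF_PERSONNEL_1.AIJ

See the RMU Restore and RMU Recover help entries for the qualifiers that apply to area-level operations.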
40.252 – BADPSGCNT
Bad number of pointer segments in segmented string. Expected: <num> found: <num>. Explanation: The actual number of pointer segments (found part) of a segmented string is greater than the number of pointer segments stored in the header (expected part) of the segmented string. The segmented string is probably corrupt. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.253 – BADPSGREC
Tried to fetch data segments past the end of a segmented string pointer segment. Explanation: During fetches of data segments from a pointer segment, the end of the pointer segment storage record was unexpectedly encountered. The segmented string is probably corrupt. See accompanying messages for the segmented string context. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.254 – BADPTLABM
ABM flag set in page tail of data page <num> expected: 0, found: 1 Explanation: The Area Bit Map flag in a data page's page tail should not be set. This flag is reserved for Area Bit Map pages and should never be set for a data page. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.255 – BADPTLARE
invalid larea for uniform data page <num> in storage area <num> SPAM larea_dbid: <num>, page larea_dbid: <num> Explanation: The logical area database id on the page does not match the logical area in the SPAM page for a uniform format data page. This could be because of a corrupt page tail or a corrupt SPAM page. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.256 – BADRANGE
<str> out of range [<num>:<num>] Explanation: The limits are specified as [1:<num>] for area number and page number, and as [0:<num>] for line number and page offset. User Action: Correct the error and try again.
40.257 – BADRDYMODE
<str> <str> is an illegal user mode for <str> Explanation: Indicated user mode is not allowed for the specified operation. User Action: Use a legal user mode for the specified operation.
40.258 – BADREADY
error readying storage area <str> Explanation: The storage area could not be readied in the requested mode, perhaps because of corruption of AIP pages or because of a legitimate lock conflict with another user. User Action: Check for possible conflicting users of the same area. Try verification in a read-only ready mode.
40.259 – BADREFDBK
Invalid reference pointer <num>:<num>:<num> for duplicate B-tree node <num>:<num>:<num> Explanation: The reference pointer dbkey for a duplicate B-tree node is corrupted. The reference pointer dbkey must be greater than the last duplicate record dbkey. User Action: Ascertain if the index is corrupt by manually verifying the owner node after dumping that page. Rebuild the index if it is corrupt.
40.260 – BADREQCLASS
Statistics API: Unknown request class. Explanation: An unrecognized request class was passed to the statistics API. User Action: Correct the request class value.
40.261 – BADROODBK
error getting index "<str>" root dbkey Explanation: It was not possible to get the index root dbkey from the system relation. Indexes cannot be verified if requested. User Action: Rebuild the indexes.
40.262 – BADROOTMATCH
root file "<str>" no longer has its original name "<str>" Explanation: The current root file name does not match the name used when the root file was created. This could happen if you copied or renamed the root file, or if the file was created using a concealed logical device name and that logical name is no longer defined. User Action: Rename or copy the root file back to its original name or location, or redefine the necessary concealed logical device name in the system logical name table.
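A missing concealed logical device name can be redefined with the DCL DEFINE command. A hypothetical example (the logical name, device, and directory are placeholders for your site's values):
$ DEFINE/SYSTEM/EXEC/TRANSLATION_ATTRIBUTES=CONCEALED DISK$RDB DUA0:[RDB.]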
40.263 – BADRQSTBUF
Statistics API: Bad request buffer. Explanation: A statistics API request buffer is not properly formatted. User Action: Correct the API request buffer.
40.264 – BADRTIMG
Failed to activate external routine <str>. Image name is <str>. Entry point is <str>. <str>. Explanation: An error occurred while attempting to activate an image that contains an external routine. The last line displayed is the error returned. User Action: Check that the image is the correct one for the external routine, that the image is in the correct location, that all appropriate logical names have been defined, and that the image can be dynamically activated given the current privileges.
40.265 – BADRUJVER
run-unit journal version is incompatible with the runtime system Explanation: Your run-unit journal file was created with an incompatible version of the software. User Action: Your run-unit journal file cannot be used with the version of the software you have installed on your machine. Make sure you are using the correct RUJ journal, or if "multi-version" software is installed, make sure you are using the correct software version.
40.266 – BADSCOPE
singular cluster item applied to entire cluster Explanation: A DISPLAY command involving CLUSTER * is followed by singular context; for example, NEXT. Most often this happens because of the context assumed from previous commands. The same portion of all clusters in a storage record cannot be displayed with one command. User Action: You must use multiple commands to display the same portion of all clusters in a storage record.
40.267 – BADSEGCNT
Bad number of data segments in segmented string. Expected: <num> found: <num>. Explanation: The actual number of data segments (found part) of a segmented string is greater than the number of data segments stored in the header (expected part) of the segmented string. See accompanying messages for the segmented string context. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.268 – BADSEGLEN
Bad segmented string length. Expected: <num><num> found: <num><num> (hex). Explanation: The actual length (found part) of a segmented string is greater than the length stored in the header (expected part) of the segmented string. See accompanying messages for the segmented string context. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.269 – BADSEGLNG
Bad longest data segment length of segmented string. Expected: <num> found: <num>. Explanation: The actual longest data segment length (found part) of a segmented string is greater than the longest data segment length stored in the header (expected part) of the segmented string. See accompanying messages for the segmented string context. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.270 – BADSEGTPE
area <str>, page <num>, line <num> Storage segment of <num> found in storage segment header Expected segmented string type of <num>, found <num> Explanation: A storage segment was found that specified a segmented string of the type identified by the number in the first message. The number stored inside the segmented string (that is also used to identify the type of segmented string) must agree with the storage type found in the header of the storage segment. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Then verify the database again.
40.271 – BADSEGTYP
Storage segment of <num> found in storage segment header. Expected segmented string type of <num>, found <num>. Explanation: A storage segment was found that specified a segmented string of the type identified by the number in the first message. The number stored inside the segmented string (that is also used to identify the type of segmented string) must agree with the storage type found in the header of the storage segment. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Then verify the database again.
40.272 – BADSPAMINT
spam interval (<num>) is too large for page size (<num> block(s)) Explanation: The SPAM interval is too large for the specified page size. User Action: Reduce the SPAM interval or increase the page size.
40.273 – BADSPMFET
error fetching SPAM page <num> in area <str> Explanation: An error occurred during a fetch of a SPAM page from disk. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.274 – BADSPMPAG
errors in SPAM pages Explanation: Some errors occurred during verification of SPAM pages. User Action: Verify all pages in the SPAM range of the corrupt SPAM page.
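Once the corrupt SPAM page is repaired, the affected pages can be rechecked with the RMU Verify command; for example, a full verification pass (the database name is a placeholder, and narrower qualifiers may be available depending on your Oracle Rdb version):
$ RMU/VERIFY/ALL MF_PERSONNEL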
40.275 – BADSTAREA
invalid storage area DBID <num>, valid storage areas are between <num> and <num> Explanation: Storage area database ids are expected to be less than or equal to the number of areas in the database. This storage area database id is corrupt. User Action: Correct the error with the RMU Restore command and verify the database again.
40.276 – BADSTATVER
statistics input file version is incompatible with the software version Explanation: The binary statistics file specified by the /INPUT qualifier was created with an incompatible version of the software. User Action: The binary statistics file cannot be used with the version of the software you have installed on your machine.
40.277 – BADSTOVER
Dbkey <num>:<num>:<num> contains an invalid storage version. Expected non-zero value no larger than <num> but found <num>. Explanation: An invalid version number was found for a vertical partition of a row. The dbkey reported is the dbkey of the partition with the invalid version. Currently, Oracle Rdb supports only one version of a vertical partition; you cannot change the definition of a partition. User Action: Restore and recover the database page containing the corrupt record.
40.278 – BADSTPLEN
Bad pointer portion length in storage record header. Expected <num>, found <num>. Explanation: The actual pointer portion length field (found part) in a data record's KODA header portion does not match the pointer portion length expected for the data record. See accompanying messages for more information. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.279 – BADSTRTYP
Bad storage type dbid in storage record header. Expected <num>, found <num>. Explanation: The actual storage type database id (found part) in a data record's KODA header portion does not match the database id expected for the data record. See accompanying messages for more information. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.280 – BADSYSRDY
error readying larea <num> for system record of index <str> Explanation: The logical area corresponding to the system record for the hash index could not be readied. User Action: Check if there are conflicting users of the database. If so, verify this portion of the database when there are no conflicting users. Rebuild the index if it is corrupt.
40.281 – BADSYSREC
Area <str>, page <num>, line <num> system record DBID differs from expected DBID. Expected: <num> (hex), found: <num> (hex). Explanation: A system record that was needed to locate a hash bucket was expected at the given dbkey but was not found. The page from which the record was fetched and the corresponding storage area are probably corrupt. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.282 – BADTADAIJ
after-image journal creation version differs from the root expected: <time>, found: <time> Explanation: The database creation time stored in the root differs from the time recorded in your after-image journal file. User Action: Your after-image journal file cannot be used to roll forward your database. Back up your database and create a new after-image journal file.
40.283 – BADTHSPCT
Space management thresholds values of <num>, <num>, <num> are invalid for area <str> (FILID entry <num>) Explanation: The FILID entry contains invalid space management threshold values. User Action: Restore and recover the database from backup.
40.284 – BADVALUE
invalid value "<str>" Explanation: A parameter was given an illegal value. User Action: Correct the error and try again.
40.285 – BADVERAIJ
after-image journal version is incompatible with the DBCS expected: <num><num>, found: <num><num> Explanation: Your after-image journal file was created with an incompatible version of the software. User Action: Your after-image journal file cannot be used with the version of the software you have installed on your machine.
40.286 – BADVRP2RV
Primary vertical partition has mismatched record versions <num> and <num>. Explanation: The primary partition of a vertically partitioned record has two copies of the record version number for the record. This message is displayed when the two numbers do not match. User Action: Restore and recover the page containing the bad record.
40.287 – BADVRPNDX
Vertical partition ID for dbkey <num>:<num>:<num> is stored as <num>. The valid vertical partition ID for this logical area is <num>. Explanation: Each logical area containing a vertical partition of a record can contain information for only one of the vertical partitions. This message indicates that a specified dbkey is associated with the incorrect partition number. User Action: Restore and recover the page from backup.
40.288 – BADVRPPTR
Vertical partition reference <num> points to an invalid dbkey <num>:<num>:<num> Explanation: The pointer for one of the vertical record partition segments within a row did not contain a valid dbkey. User Action: Verify the page in question, and see if the database needs to be restored.
40.289 – BADVRPREC
Dbkey <num>:<num>:<num> is vertically partitioned. Explanation: A record was found that should not have been vertically partitioned but was. User Action: Verify the page in question, and see if the database needs to be restored.
40.290 – BADVRPREF
Vertical partition <num> at dbkey <num>:<num>:<num> does not point back to the correct primary segment invalid reference was <num>:<num>:<num> Explanation: The pointer to the primary segment from one of the secondary segments within a vertically partitioned record is incorrect. User Action: Verify the page in question, and see if the database needs to be restored.
40.291 – BADVRPSEC
The primary partition at dbkey <num>:<num>:<num> segment <num> points to another primary partition at dbkey <num>:<num>:<num>. Explanation: A primary vertical record partition segment should only point to secondary segments, never to another primary segment. The specified row points to another primary segment. User Action: Verify the page in question, and see if the database needs to be restored.
40.292 – BATCONFIRM
confirmation not allowed in batch Explanation: Confirmation is not permitted in batch mode. User Action: Try again without CONFIRM, or from interactive mode.
40.293 – BCKCMDUSED
The backup command was "<str>". Explanation: Make sure that all the backup set files specified by the backup command have been specified by the restore command. User Action: Repeat the restore, specifying all the backup set files in the correct order.
40.294 – BDAREAOPN
unable to open file <str> for storage area <str> Explanation: The storage area file corresponding to the storage area could not be opened successfully. User Action: Check the error message to make sure the filename is correct. If the filename is not correct, the FILID is corrupt. In this case, restore the database, and verify the database root again.
40.295 – BDCLMPPGCNT
The specified BLOCKS_PER_PAGE value would cause an illegal clump page count for storage area <str> Explanation: The SPAM clump page count multiplied by the specified BLOCKS_PER_PAGE value would be greater than the maximum of 64 blocks. For uniform storage areas, if a new BLOCKS_PER_PAGE value is specified, RMU/RESTORE cannot change the clump page count of the backed-up database. User Action: Repeat the restore with a new BLOCKS_PER_PAGE value or without changing the BLOCKS_PER_PAGE value.
40.296 – BDDATRANG
day, month, or year field in date string out of range Explanation: The month field must be between 1 and 12 inclusive. The day field must be between 1 and 31 inclusive. The year field must be between 0 and 3000 inclusive. User Action: Re-enter the DATE data item with the error corrected.
40.297 – BDLAREADY
error readying logical area with dbid <num> Explanation: An error occurred when an attempt was made to ready the given logical area. This could be because of a corruption in the RDB$SYSTEM area. User Action: Check if there are conflicting users of the database. If so, verify this portion of the database when there are no conflicting users. If there is a corruption that causes this error, correct the error with the RMU Restore command, and verify the database again.
40.298 – BDLAREAID
area inventory page <num> contains a reference to a logical area it does not belong expected: <num>, found: <num> Explanation: The page contains an entry that is not in use, but is not empty. User Action: Correct the error with the RMU Restore command and verify the database again.
40.299 – BDLIVDBD
The live area for snapshot area <str> is not valid. Expected a number between 1 and <num>. Found <num>. Explanation: The FILID entry for each snapshot area contains the FILID entry number for its live area. The area number stored for this FILID entry is not in the valid range of FILID numbers for this database. User Action: Restore and recover the database from backup.
40.300 – BDLIVDBID
invalid DBID for single-file database area <str> expected: <num>, found: <num> Explanation: The FILID entry contains a database id other than 1 for a live area of a single-file database. User Action: Correct the error with the RMU Restore command and verify the database again.
40.301 – BDSGLAREA
Bad logical area DBID in segmented string segment logical dbkey <num>:<num>:<num>. Explanation: An invalid logical area identifier was found in a segmented string data segment dbkey. See accompanying messages for the segmented string context. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.302 – BDSNAPOPN
unable to open file <str> for snapshot area <str> Explanation: The storage area file corresponding to the snapshot area could not be opened successfully. User Action: Check the error message to make sure the filename is correct. If the filename is not correct, the FILID is corrupt. In this case, restore the database, and verify the database root again.
40.303 – BDSNPDBD
The snapshot area for live area <str> is not valid. Expected a number between 1 and <num>. Found <num>. Explanation: The FILID entry for each live area contains the FILID entry number for its snapshot area. The area number stored for this FILID entry is not in the valid range of FILID numbers for this database. User Action: Restore and recover the database from backup.
40.304 – BDSNPDBID
the DBID value of this snap area is wrong expected: <num>, found: <num> Explanation: The FILID entry contains a database id other than 2 for a snapshot area of a single-file database. User Action: Correct the error with the RMU Restore command and verify the database again.
40.305 – BDSPAMRANGE
illegal space range for current space management page Explanation: The limits of the space range are not in ascending order, or the range goes past the end of the space information for the current space management page. User Action: Try the operation again, putting the range limits in ascending order or correcting the range so it does not reference a space entry outside of the space management page range.
40.306 – BKUPEMPTYAIJ
after-image journal file is empty Explanation: An attempt was made to back up an empty after-image journal file. User Action: Be sure the correct after-image journal file was specified.
40.307 – BLOCKCRC
software block CRC error Explanation: Media error detected on tape. User Action: None.
40.308 – BLOCKLOST
block of <str> lost due to unrecoverable error Explanation: Tape data recovery failed for this block of data. User Action: The restored database may be corrupt or incomplete. Validate and repair the database before using it.
40.309 – BLRINV
internal error - BLR string <num> for <str>.<str> is invalid Explanation: An attempt to translate an internal metadata item failed. User Action: Contact your Oracle support representative for assistance.
40.310 – BOUND
multiple binds are not allowed Explanation: You are already bound to a database. You can only be bound to one database at a time for a given stream. User Action: You can execute an UNBIND statement and try the BIND again, or use the multiple stream feature to bind to a database on another stream.
40.311 – BREAK
internal system failure -- database session attach information not found Explanation: The database session information cannot be found; this may be indicative of a more serious problem. User Action: Contact your Oracle support representative for assistance.
40.312 – BTRBADDBK
bad b-tree owner dbkey <num>:<num>:<num> for index <str> Explanation: The owner dbkey of a b-tree node is invalid. Further verification of the b-tree is abandoned. User Action: Correct the error by recreating the related index, and verify the database again.
40.313 – BTRBADLFT
Leftmost edge of interior b-tree node at level <num> must have a NULL IKEY. IKEY of "<str>" found as first entry of dbkey <num>:<num>:<num>. Explanation: The left edge of a b-tree must have a null Ikey at each non-leaf level. The specified b-tree node was at the left edge of the tree and had a non-null Ikey as its first entry. This indicates that the index is corrupt. User Action: Drop and rebuild the index.
40.314 – BTRDUPCAR
Inconsistent duplicate cardinality (C1) of !@UQ specified for entry <num> at dbkey <num>:<num>:<num>. Actual count of duplicates is !@UQ. Explanation: The cardinality specified in an entry is inconsistent with the cardinality computed from the duplicate list for the entry. This will cause wrong results for SQL queries like COUNT(*). User Action: Rebuild the index and verify the database again.
40.315 – BTRENTCAR
Inconsistent entry cardinality (C1) of !@UQ specified for entry <num> at dbkey <num>:<num>:<num> using precision of <num>. Dbkey <num>:<num>:<num> at level <num> specified a cardinality of !@UQ. Explanation: The cardinality specified in an entry is inconsistent with the cardinality supplied by the targeted child in the b-tree.
40.316 – BTRENTLEN
B-tree node entry <num> has an invalid data length of <num>. Explanation: An entry in a B-tree node was identified as having an invalid data length. A data length is invalid if it is less than or equal to zero for an entry containing duplicates, or not equal to zero for a non-duplicate entry. The data length is also invalid if it extends beyond the end of the b-tree node.
40.317 – BTRERPATH
parent B-tree node of <num>:<num>:<num> is at <num>:<num>:<num> Explanation: An error occurred at a lower level in the B-tree. The B-tree node at the given logical dbkey is in the path in the B-tree from the root to a node where an error occurred. User Action: Use this information to reconstruct the path from the root to the node where the problem happened and determine the location of the problem. Rebuild the index and verify the database again.
40.318 – BTRHEACAR
Sum of entry cardinalities given as !@UQ; expected !@UQ. Explanation: The cardinality in the ANHD header disagreed with the sum of the cardinalities found in the entries for the node.
40.319 – BTRINVARE
area <str>, page <num>, line <num> storage record <str>, b-tree node dbkey contains an invalid logical area id expected 1 through <num>, found: <num> Explanation: Each entry in a b-tree node contains a compressed key followed by a compressed dbkey. The logical area id of the dbkey of the current entry of the b-tree node is not within the range of logical area id numbers computed. The valid range is expected to be 1 through <num>. Further verification of that b-tree node is abandoned. This message is followed by one or two lines. The first line indicates the parent node of the node in error (unless the parent is the owner record occurrence). A second line indicates the owner record occurrence of the invalid b-tree. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.320 – BTRINVDUP
Non-zero counts in a duplicate node entry. Explanation: One of the PRE_LEN, SEP_LEN, C1, or C2 counts was non-zero in a duplicate node entry for a b-tree.
40.321 – BTRINVFLG
Invalid flags found for b-tree node entry <num>. Flags were %X'<num>'. Explanation: An entry in a B-tree node was identified as having invalid flag bits. This prevents verification of the remainder of the b-tree node.
40.322 – BTRINVPAG
area <str>, page <num>, line <num> b-tree node dbkey contains an invalid page number expected <num> through <num>, found: <num> Explanation: Each entry in a b-tree node contains a compressed key followed by a compressed dbkey. The page number of the dbkey of the current entry of the b-tree node is not within the range of page numbers for the storage area to which the dbkey points. The valid range is expected to be 1 through <num>. Further verification of the b-tree is abandoned. This message is followed by one or two lines. The first line indicates the parent node of the node in error (unless the parent is the owner record occurrence). A second line indicates the owner record occurrence of the invalid b-tree. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.323 – BTRLEACAR
Inconsistent leaf cardinality (C2) of !@UQ specified for entry <num> at dbkey <num>:<num>:<num> using precision of <num>. Dbkey <num>:<num>:<num> at level <num> specified a cardinality of !@UQ Explanation: The cardinality specified in an entry is inconsistent with the cardinality supplied by the targeted child in the b-tree. User Action: Drop the index and rebuild it.
40.324 – BTRLENERR
area <str>, page <num>, line <num> b-tree node length error expected node length <num>, found: <num> Explanation: The length of each b-tree node is stored after the expanded dbkey of the owner record occurrence. The length in the b-tree node and the length computed by examining the entries within the b-tree node do not agree. Further verification of the b-tree is abandoned. This message is followed by one or two lines. The first line indicates the parent node of the node in error (unless the parent is the owner record occurrence). A second line indicates the owner record occurrence of the invalid b-tree. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.325 – BTRLEXERR
B-tree node Ikeys not in lexical order found key "<str>" in b-tree node at <num>:<num>:<num> followed by key "<str>" in b-tree node at <num>:<num>:<num> Explanation: Each entry in a b-tree node contains a compressed key followed by a compressed dbkey. The keys stored in a b-tree node must be in ascending lexical order. The keys stored in the current b-tree node are out of order. Further verification of the b-tree is abandoned. User Action: Drop and recreate the index, or correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.326 – BTRNODDBK
Dbkey of B-tree node is <num>:<num>:<num> Explanation: This message gives the dbkey of the B-tree index node so the integrity of the index can be verified manually, if necessary. User Action: Ascertain if the index is corrupt by manually verifying related index nodes after dumping pages of the database. If the index is corrupt, rebuild the index.
40.327 – BTRNODMBZ
B-tree node at logical dbkey <num>:<num>:<num> contains a filler field that should be zero expected: 0, found: <num><num> (hex) Explanation: The b-tree node contains a filler field that is reserved for future expansion. It should contain all zeros but does not. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.328 – BTRPARROO
root dbkey of b-tree partition in <str> is <num>:<num>:<num> Explanation: An error occurred at a lower level in this partition of the b-tree. This error message gives the root dbkey of this partition of the b-tree index to help determine the point of corruption. User Action: Use this information to reconstruct the path from the root to the node where the problem occurred and determine the location of the problem. Rebuild the index and verify the database again.
40.329 – BTRPFXERR
area <str>, page <num>, line <num> storage record with prefix key "<str>" contains a bad prefix key len, expected: <num>, found: <num> Explanation: The length of the prefix in the separator of a b-tree node is corrupted. User Action: Rebuild the index and verify the database again.
40.330 – BTRPRECIS
Invalid precision of <num><num> (hex) specified for B-tree. Explanation: The precision specified in the root node of the B-tree is invalid. A valid precision has the high bit set, and the other bits must be greater than zero and less than or equal to 100.
40.331 – BTRROODBK
root dbkey of B-tree is <num>:<num>:<num> Explanation: An error occurred at a lower level in the B-tree. This error message gives the root dbkey of the B-tree index to help determine the point of corruption. User Action: Use this information to reconstruct the path from the root to the node where the problem occurred and determine the location of the problem. Rebuild the index and verify the database again.
40.332 – BTRSTSTYP
area <str>, page <num>, line <num> storage record contains a logical area that does not match logical area of owner occurrence in b-tree owner: <str>, storage record: <str> Explanation: Each node of a b-tree, except the terminal member records, contains the storage set id number of the indexed set type. This id number should be the same as the storage set id number of the pointer cluster in the owner record occurrence of the b-tree. The storage set id number stored in the current b-tree node is not correct. Further verification of the b-tree is abandoned. This message is followed by one or two lines. The first line indicates the parent node of the node in error (unless the parent is the owner record occurrence). A second line indicates the owner record occurrence of the invalid b-tree. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement, and verify the database again.
40.333 – BTRVFYPRU
B-tree verification pruned at this dbkey Explanation: An error occurred during a fetch of a B-tree node. Verification will not proceed beyond this node down the tree. However, verification of other nodes will continue. User Action: Verify the page in question, and check if the database needs to be restored.
40.334 – BUFFERSLOST
all buffers are lost Explanation: This is indicative of an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.335 – BUFSMLPAG
The specified BLOCKS_PER_PAGE <num> exceeds the buffer size <num> for storage area <str> Explanation: The storage area page size must not be greater than the database buffer size. User Action: Repeat the restore with a new BLOCKS_PER_PAGE value, without changing the BLOCKS_PER_PAGE value, or with an increased buffer size.
40.336 – BUFTOOBIG
Network error: Network buffer too big. Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.337 – BUFTOOSML
buffer size (<num>) is smaller than largest page (<num>) Explanation: The buffer size must be large enough to accommodate the largest page size within the database. User Action: Specify a buffer size at least as large as the message indicates.
40.338 – BUFTRUNC
response buffer truncated. Explanation: The supplied response buffer is not large enough to accommodate the results of the statistics API query. The contents of the returned buffer are valid but incomplete. User Action: Correct the user program, and try the request again.
40.339 – BUGCHECK
fatal, unexpected error detected Explanation: A fatal, unexpected error was detected by the database management system. User Action: Contact your Oracle support representative for assistance.
40.340 – BUGCHKDMP
generating bugcheck dump file <str> Explanation: The database management system has detected a fatal, unexpected error, and is writing a bugcheck dump file with the specified file name. User Action: Please send this bugcheck dump file to your software specialist, along with any other related programs or data.
40.341 – BYPAGABM
"by page" RESTORE of an ABM (area bitmap page) page was attempted Explanation: ABM pages are not backed up, so they can not be restored. The affected page has been initialized, but the ABM chain must be rebuilt to complete the correction of the corrupt ABM page. User Action: Correct ABM pages by using the ABM qualifier of the RMU Repair command. You can do this now.
40.342 – BYPAGAIP
"by page" RESTORE of an AIP (area inventory page) was attempted Explanation: Corruptions in the AIP (area inventory) may not always be correctable using a "by page" RESTORE operation. User Action: Use the RMU Verify command to verify the database. If the "by page" restore operation was not effective, perform a "by area" restore of the entire RDB$SYSTEM storage area.
40.343 – CABORT
user entered Control-C to abort RMU CONVERT causing database corruption Explanation: You entered a CTRL/C to abort a convert operation. The database is now corrupt. User Action: Restore the database from a backup, then use the RMU Convert command again.
40.344 – CACHEINUSE
record cache <str> is still referenced by storage area <str> Explanation: Unable to delete record cache because it is still being referenced by one or more storage areas. User Action: Remove the record cache from the affected storage areas first.
40.345 – CANNOTCLSRCSGLX
RCS is active on this node and the database is also open on another node Explanation: The Record Cache Server (RCS) process is active on this node and another node has this database open. The database must be closed on all other nodes before it can be closed on this node. User Action: Close the database on all other nodes, then close it on this node.
40.346 – CANTADDLAREA
Cannot create logical area <str> Explanation: Too many logical areas have been created; a new logical area cannot be created. User Action: Redesign the database so that fewer logical areas are required.
40.347 – CANTASSMBX
error assigning a channel to a mailbox Explanation: An error occurred when you attempted to assign a channel to a VMS mailbox. User Action: Examine the secondary message for more information.
40.348 – CANTBINDRT
error mapping database root file Explanation: An error occurred during mapping to the database root file. User Action: Examine the associated messages to determine the reason for failure.
40.349 – CANTCLOSEDB
database could not be closed as requested Explanation: The database monitor detected an error while attempting to close the database you specified. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.350 – CANTCREABS
error creating AIJ backup server process Explanation: An error occurred when you attempted to create a detached AIJ backup server process. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.351 – CANTCREALS
error creating AIJ Log Server process Explanation: An error occurred when you attempted to create a detached AIJ Log Server process. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.352 – CANTCREBOB
error creating Buffer Object Explanation: An error occurred when you attempted to create an OpenVMS buffer object. User Action: Examine the secondary message(s) for more information.
40.353 – CANTCREDBR
error creating database recovery process Explanation: An error occurred when you attempted to create a detached database recovery process. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.354 – CANTCREEXEC
Cannot create executor process. Explanation: The root process could not create one or more executor processes. User Action: Look at the secondary message that describes the reason for the process creation failure for further information.
40.355 – CANTCREGBL
error creating and mapping database global section Explanation: An error occurred when you attempted to create a map to the database global section. User Action: Examine the secondary message(s) for more information.
40.356 – CANTCRELCS
error creating AIJ Log Catch-Up Server process Explanation: An error occurred when you attempted to create a detached AIJ Log Catch-Up Server process. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.357 – CANTCRELRS
error creating AIJ Log Roll-Forward Server process Explanation: An error occurred when you attempted to create a detached AIJ Log Roll-Forward Server process. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.358 – CANTCREMBX
cannot create mailbox Explanation: An error occurred when you attempted to create a mailbox. Mailboxes are used for interprocess communication by the database management system. User Action: Examine the associated messages to determine the reason for failure. Usually, you will have to change one of your quotas (most likely, the buffered I/O-byte count quota or the open-file quota).
40.359 – CANTCREMON
unable to start database monitor process Explanation: An error occurred when you attempted to start the database monitor process. This is a detached process. User Action: Examine the secondary message(s) to determine the reason for the failure.
40.360 – CANTCRERCS
error creating Record Cache Server process Explanation: An error occurred when you attempted to create a detached Record Cache Server process. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.361 – CANTCREVLM
error creating or mapping Very Large Memory region Explanation: An error occurred when you attempted to create or map a database Very Large Memory (VLM) region. User Action: Examine the secondary message(s) for more information.
40.362 – CANTCVRT
cannot convert this database version Explanation: Your database is not recognized as one that can be converted. Either the database being converted has already been converted, is too old to be converted, or is not a database. User Action: Check your backups.
40.363 – CANTCVRTCPT
Cannot convert a database with CPT entries. Explanation: Your database has entries in the corrupt page table and cannot be converted. User Action: Restore and recover the corrupt pages, back up the database, and then convert it.
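A possible sequence, sketched with illustrative file names (the exact qualifiers depend on your backup and journal configuration), is:

$ RMU/RESTORE MF_PERSONNEL.RBF
$ RMU/RECOVER MF_PERSONNEL.AIJ
$ RMU/BACKUP MF_PERSONNEL MF_PERSONNEL.RBF
$ RMU/CONVERT MF_PERSONNEL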
40.364 – CANTDELETE
error deleting "<str>" Explanation: An error occurred when you attempted to delete the indicated file. You must be able to change the protection on a file in order to delete it. User Action: Examine the associated messages to determine the reason for failure.
40.365 – CANTFINDAIJ
cannot locate standby AIJ journal to match master database Explanation: A master database AIJ journal cannot be located on the standby database. User Action: You may select an AIJ journal using either the AIJ name or the default or current AIJ file specification. The list of valid AIJ journals can be obtained by dumping the database header information.
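For example, the database header information, including the list of AIJ journals, can be dumped as follows (the database name is illustrative):

$ RMU/DUMP/HEADER MF_PERSONNEL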
40.366 – CANTFINDLAREA
cannot locate logical area <num> in area inventory page list Explanation: This is an internal error. A request was made to find logical area information for the specified logical area number but no active AIP entries could be found for that logical area number. User Action: Contact your Oracle support representative for assistance.
40.367 – CANTLCKTRM
database monitor error establishing termination lock Explanation: The database monitor was unable to assert a request on the user's image termination lock. The user's image might already have terminated before the monitor received the request. User Action: Examine the secondary message(s) for more information.
40.368 – CANTMAPSHMEM
error mapping to shared memory "<str>" Explanation: An error occurred while mapping to a database shared memory section. User Action: Examine the associated messages to determine the reason for failure.
40.369 – CANTOCDB
Error encountered while opening or closing database file <str> Explanation: An error occurred while trying to open or close the specified database file. User Action: See the previous error message in the output to determine what corrective action to take.
40.370 – CANTOPENDB
database could not be opened as requested Explanation: The database monitor detected an error while attempting to open the database you specified. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.371 – CANTOPENIN
error opening input file <str> Explanation: An error occurred during opening of the input file. User Action: Examine the associated messages to determine the reason for failure.
40.372 – CANTOPNALSOUT
error opening AIJ Log Server output file Explanation: An error occurred during opening of the AIJ Log Server output file. User Action: Examine the secondary message for more information.
40.373 – CANTOPNLCSOUT
error opening AIJ Log Catch-Up Server output file Explanation: An error occurred during opening of the AIJ Log Catch-Up Server output file. User Action: Examine the secondary message for more information.
40.374 – CANTOPNLRSOUT
error opening AIJ Log Roll-Forward Server output file Explanation: An error occurred during opening of the AIJ Log Roll-Forward Server output file. User Action: Examine the secondary message for more information.
40.375 – CANTOPNLSSOUT
error opening AIJ log server output file Explanation: An error occurred during opening of the AIJ log server output file. User Action: Examine the secondary message for more information.
40.376 – CANTOPNROO
cannot open root file "<str>" Explanation: The named root file could not be opened. User Action: Examine the secondary message or messages. Correct the error and try again.
40.377 – CANTQIOMBX
unable to send mail to a mailbox Explanation: An error occurred when you attempted to send mail to a mailbox. User Action: Examine the secondary message(s) to determine the reason for the failure.
40.378 – CANTREADDB
error opening or reading database file Explanation: An error occurred when you attempted to open or read from the database file. User Action: Examine the secondary message(s) for more information.
40.379 – CANTREADDBS
error reading pages <num>:<num>-<num> Explanation: An error occurred when you attempted to read one or more database pages. The message indicates the storage-area ID number and the page numbers of the first and last pages being read. User Action: Examine the associated messages to determine the reason for failure.
40.380 – CANTRESUMEABS
error resuming AIJ backup operations Explanation: An error occurred when you attempted to resume after-image journal backup operations. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.381 – CANTRESUMELRS
error resuming AIJ Log Roll-Forward Server process Explanation: An error occurred when you attempted to resume the detached AIJ Log Roll-Forward Server process. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.382 – CANTSNAP
can't ready storage area <str> for snapshots Explanation: Snapshots were last enabled for this area by a transaction that had not committed before the snapshot started. Information to materialize the snapshot is not present. User Action: Restart the snapshot transaction. If failure of a snapshot transaction is critical, you should ready all areas before doing any retrievals.
40.383 – CANTSPAWN
error spawning sub-process Explanation: An error occurred when you attempted to spawn a sub-process. User Action: Examine the secondary message for more information.
40.384 – CANTSTARTABS
error starting AIJ backup server process Explanation: An error occurred when you attempted to start a detached AIJ backup server process. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.385 – CANTSTARTALS
error starting AIJ Log Server process Explanation: An error occurred when you attempted to start a detached AIJ Log Server process. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.386 – CANTSTARTLCS
error starting AIJ Log Catch-Up Server process Explanation: An error occurred when you attempted to start the detached AIJ Log Catch-Up Server process on the replicated database. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.387 – CANTSTARTLRS
error starting AIJ Log Roll-Forward Server process Explanation: An error occurred when you attempted to start the detached AIJ Log Roll-Forward Server process on the replicated database. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.388 – CANTSTARTLSS
error starting "Hot Standby" Server process Explanation: An error occurred while attempting to start the detached "Hot Standby" Server process on the replicated database. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.389 – CANTSTARTRCS
error starting Record Cache Server process Explanation: An error occurred while attempting to start the detached Record Cache Server process on the indicated database. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.390 – CANTSTARTTX
cannot start transaction Explanation: Cannot start a transaction as requested. User Action: Examine the secondary message for more information.
40.391 – CANTSTOPALS
error stopping AIJ Log Server process Explanation: An error occurred when you attempted to stop a detached AIJ log-server process. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.392 – CANTSTOPLSS
error stopping "Hot Standby" Server process Explanation: An error occurred when you attempted to stop the detached "Hot Standby" Server process(es). User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.393 – CANTSTOPRCS
error stopping Record Cache Server process Explanation: An error occurred when you attempted to stop a detached Record Cache Server process. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.394 – CANTSUSPENDABS
error suspending AIJ backup operations Explanation: An error occurred when you attempted to suspend after-image journal backup operations. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.395 – CANTSUSPENDLRS
error suspending AIJ Log Roll-Forward Server process Explanation: An error occurred when you attempted to suspend the detached AIJ Log Roll-Forward Server process on the replicated database. User Action: Examine the secondary message(s) or look in the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.396 – CANTWRITEDBS
error writing pages <num>:<num>-<num> Explanation: An error occurred when you attempted to write one or more database pages. The message indicates the storage-area ID number and the page numbers of the first and last pages being written. User Action: Examine the associated messages to determine the reason for failure.
40.397 – CAPTIVEACCT
captive account -- no DCL commands can be issued Explanation: An attempt was made to issue a DCL command from a captive account. User Action: Do not issue DCL commands from captive accounts, or modify the account flags so that spawning DCL commands is possible.
40.398 – CARDREQFULL
/FULL can only be used with CARDINALITY statistics Explanation: The /FULL qualifier can only be used if the RMU/SHOW OPTIMIZER command is displaying TABLE, INDEX or INDEX PREFIX CARDINALITY statistics. User Action: Specify the display of CARDINALITY statistics with the /FULL qualifier.
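For example, a command along these lines displays CARDINALITY statistics with the Full qualifier (the database name is illustrative; see the RMU Show Optimizer_Statistics help for the exact qualifier syntax):

$ RMU/SHOW OPTIMIZER_STATISTICS/STATISTICS=CARDINALITY/FULL MF_PERSONNEL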
40.399 – CCHDEVDIR
Cache directory "<str>" does not include a device/directory Explanation: The specified record cache directory does not include a device and directory. User Action: Include a device and directory specification.
40.400 – CDDACCESS
Could not access the CDD/Plus repository. Explanation: An error was detected while attempting to access the CDD/Plus repository. User Action: The secondary error (CDD/Plus error) indicates the appropriate action.
40.401 – CDDDEFERR
The repository record or field definition retrieved is not compatible. Explanation: The record or field definition retrieved from the repository uses attributes or attribute values not supported by the load function. User Action: Create a new compatible definition.
40.402 – CDDNOTFND
The repository pathname did not specify an entity in the repository. Explanation: The entity specified by the repository pathname was not found in the repository. User Action: Correct the specified pathname.
40.403 – CDDNOTUNQ
The repository pathname did not specify a unique entity in the repository. Explanation: More than one entity was found for the specified repository pathname. User Action: Correct the specified pathname, and check it for wildcards.
40.404 – CGREXISTS
Workload column group already exists for table <str>. Explanation: The specified column group already exists for the specified table. User Action: None.
40.405 – CGRNOTFND
Workload column group for table <str> not found. Explanation: The specified column group does not exist for the specified table. User Action: Correct the column group and try again.
40.406 – CGRSNOTFND
Workload column groups for table <str> not found. Explanation: The specified column groups do not exist for the specified table. User Action: Correct the column groups and try again.
40.407 – CHECKSUM
checksum error - computed <num>, page contained <num> Explanation: The computed checksum for the database page disagreed with the checksum actually stored on the page. This usually is caused by a hardware problem. User Action: None.
40.408 – CHKNOTENA
Transaction checkpointing is not enabled for this database Explanation: This request was ignored because checkpointing is not enabled. User Action: No action is required.
40.409 – CHKPOWUTL
Make sure that the Power Utilities option has been properly installed on your system Explanation: A parallel load or backup operation was attempted, but the Power Utilities option has not been installed on the system. User Action: Install the Power Utilities option and retry the operation.
40.410 – CHMODERR
Error on call to chmod. Explanation: An error was encountered on the Digital UNIX chmod system service call. User Action: Contact your Oracle support representative for assistance.
40.411 – CLMCIRCAIJ
Continuous LogMiner requires fixed-size circular after-image journals Explanation: The Continuous LogMiner feature requires that fixed-size circular after-image journals are used. User Action: If Continuous LogMiner features are required, the database must be re-configured to enable fixed-size circular after-image journals.
40.412 – CLMNOENABLED
Continuous LogMiner has not yet been enabled Explanation: The Continuous LogMiner feature has not been enabled on this database. User Action: If Continuous LogMiner features are required, the Continuous LogMiner must be enabled.
40.413 – CLMPPGCNT
the clump page count multiplied by the number of blocks per page is greater than the maximum of 64 blocks Computed: <num>. CLUMP_PAGCNT = <num>; PAG_BLKCNT = <num> Explanation: The clump page count multiplied by the number of blocks per page is greater than the maximum of 64 blocks. Verification of the FILID continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.414 – CLOSERR
Network error: Error closing file. Explanation: An error was encountered on the Digital UNIX close system service call. User Action: Contact your Oracle support representative for assistance.
40.415 – CMDTOOLONG
Command in options file exceeds 1024 characters in length Explanation: A command line from the options file exceeds the maximum length. User Action: Edit the options file to fix the problem.
40.416 – CNSTVERER
verification of constraints is incomplete due to errors. Explanation: An error occurred during the verification of constraints which prevented further verification of constraints. Other requested verifications were performed. Messages describing the error that prevented constraint verification are displayed following this message. User Action: If the messages following this message are LOCK_CONFLICT followed by CANTSNAP, then there was a locking conflict between the transaction verifying the constraint (which is a read-only transaction) and another transaction which has an area locked in exclusive mode. Retry the constraint verification when no exclusive transactions are running.
40.417 – CNVESTCRD
Prefix cardinality has been estimated for database <str>. RMU/COLLECT OPTIMIZER_STATISTICS should be run to get actual values. Explanation: The Rdb query optimizer uses prefix cardinality statistics. Running RMU/COLLECT OPTIMIZER_STATISTICS will replace estimated or out-of-date cardinality values with actual cardinality values. User Action: Run RMU/COLLECT OPTIMIZER_STATISTICS as soon as possible.
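For example (the database name is illustrative):

$ RMU/COLLECT OPTIMIZER_STATISTICS MF_PERSONNEL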
40.418 – CNVNUMDAT
cannot convert number to a date string Explanation: A quadword DATE data type is not in the correct form to be converted to a text string. User Action: Re-enter the DATE data item in the correct format.
40.419 – COLNOTINSRT
Failed to insert workload column group for table <str>. Explanation: The specified column group could not be inserted for the specified table. User Action: Make sure the database is accessible. Contact your Oracle support representative for assistance if this problem persists.
40.420 – COLNOTINTAB
Column <str> of table <str> not found. Explanation: The specified column does not exist in the specified table. User Action: Correct the column name and try again.
40.421 – COLTXT_10
Workload column group for <str> is not found. Explanation: You specified a non-existent column group on a command line. User Action: Check the spelling of the names and the order of the column group.
40.422 – COLTXT_13
Failed to insert workload column group for <str> Explanation: An attempt to insert a column group for a table failed. User Action: None.
40.423 – COLTXT_50
Changing <str> area to READ_WRITE.
40.424 – COLTXT_51
Changing <str> area to READ_ONLY.
40.425 – COLTXT_52
Prefix cardinality
40.426 – COLTXT_53
***Prefix cardinality collection is disabled***
40.427 – COLTXT_54
Segment Column : <str>
40.428 – COLTXT_55
Table cardinality
40.429 – COLTXT_56
Index cardinality
40.430 – COLTXT_57
Actual Stored Diff Percent
40.431 – COLTXT_58
Actual Stored Diff Percent Thresh
40.432 – COLTXT_59
(Cardinality: Diff=Stored-Actual, Percent=Diff/Actual)
40.433 – COLTXT_60
(Cardinality: Diff=Stored-Actual, Percent=Diff/Actual, Thresh=Percent exceeded)
40.434 – COMPLEX
data conversion error on complex data type Explanation: There would have been loss of information on a complex data type conversion. The operation was not performed. User Action: Enter another value.
40.435 – COMROLLAHD
currently modified area header fields must be committed or rolled back Explanation: You attempted to issue an EXIT command without either committing or rolling back currently modified header fields for area files. User Action: Issue a COMMIT command to write current modifications back to the database or a ROLLBACK command to ignore all current modifications. Then issue an EXIT command.
40.436 – COMROLLAREA
currently modified area file fields must be committed or rolled back Explanation: You attempted to issue an EXIT command without either committing or rolling back currently modified fields for area files. User Action: Issue a COMMIT command to write current modifications back to the database or a ROLLBACK command to ignore all current modifications. Then issue an EXIT command.
40.437 – COMROLLPAG
currently modified pages must be committed or rolled back Explanation: You attempted to issue an EXIT command without either committing or rolling back currently modified database pages. User Action: Issue a COMMIT command to write current modifications back to the database or a ROLLBACK command to ignore all current modifications. Then issue an EXIT command.
40.438 – COMROLLROO
currently modified ROOT fields must be committed or rolled back Explanation: You attempted to issue an EXIT command without either committing or rolling back currently modified fields in the root file. User Action: Issue a COMMIT command to write current modifications back to the database or a ROLLBACK command to ignore all current modifications. Then issue an EXIT command.
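In an interactive RMU Alter session, pending modifications are written with COMMIT or discarded with ROLLBACK before the session is ended with EXIT; a sketch (the database name is illustrative, and session prompts are omitted):

$ RMU/ALTER MF_PERSONNEL
COMMIT
EXIT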
40.439 – COMROOTCOM
COMMIT or ROLLBACK DEPOSIT ROOT UNIQUE_IDENTIFIER command to use this command Explanation: A DEPOSIT AREA_HEADER UNIQUE_IDENTIFIER command is not allowed if a DEPOSIT ROOT UNIQUE_IDENTIFIER command is currently pending for this RMU/ALTER session. The DEPOSIT ROOT UNIQUE_IDENTIFIER command will write the root unique identifier to ALL storage area header blocks to ensure database integrity at COMMIT time, so the DEPOSIT AREA_HEADER UNIQUE_IDENTIFIER command is redundant. User Action: COMMIT or ROLLBACK the current session before executing the DEPOSIT AREA_HEADER UNIQUE_IDENTIFIER command.
40.440 – CONFLSWIT
conflicting qualifiers <str> and <str> Explanation: The indicated qualifiers cannot be used together. User Action: Read the HELP file or documentation to see which qualifier you need.
40.441 – CONFOPT
conflicting options - all options have been disabled Explanation: The command option negated all options. User Action: Correct the RMU command.
40.442 – CONFTXNOPTION
Do you really want to <str> this transaction? [<char>]: Explanation: Confirm that the user actually wants to take the action.
40.443 – CONNAMUSE
Conflicting name usage - Record and Field have the same name Explanation: The RECORD name derived from the RELATION name is identical to the name of a FIELD of that record. User Action: If the .rrd file is to be imported into the repository, it must first be edited to resolve the name conflict. If you do not plan to import into the repository, no action is required.
40.444 – CONNECTERR
Error connecting to server. To correct the most common causes of this problem, check that SQL/Services is running on the server node and that your network is configured for the chosen transport. Explanation: An error was encountered on the connect system service call. User Action: To correct the most common causes of this problem, check that SQL/Services is running on the server node and that your network is configured for the chosen transport.
40.445 – CONNTIMEOUT
Network error: Timeout connecting to server. Explanation: The server created a connect context for a client, but it did not complete connect processing within the server specified interval. User Action: Determine that SQL/Services is running on the server system and then retry connecting to the server.
40.446 – CONSISTENT
area <str> is already consistent Explanation: You issued an RMU command with the intent of making this area consistent, but it is already consistent. User Action: None.
40.447 – CONSTFAIL
Verification of constraint "<str>" has failed. Explanation: The specified constraint failed a verification check. One or more rows in the database violate this constraint. A constraint violation may occur when using the RMU Load command with the Noconstraint qualifier or with the Constraint=Deferred qualifier. User Action: Determine the offending rows and correct them to remove the constraint violation.
40.448 – CONSTTRUN
Too many errors: List of constraint violations has been truncated. Explanation: Too many invalid constraints were identified. Over 100 invalid constraints were identified before the operation was terminated. User Action: Fix the displayed constraints and reverify the database to find additional invalid constraints.
40.449 – CONTINUED
<str> contains a continued file - cannot append Explanation: The output tape contains a file which is continued on the next tape volume. The backup file cannot be written on this tape unless the tape is initialized. User Action: Initialize the tape or use another tape.
40.450 – CONVERR
data conversion error Explanation: The database management system was unable to convert the data item from one data type to another. User Action: Enter another value.
40.451 – CORPAGPRES
Corrupt or inconsistent pages are present in area <str> Explanation: The operation cannot proceed because there are known corrupt or inconsistent pages in the named storage area. User Action: Correct corrupt pages (or areas) by executing restore and recover operations. Inconsistent pages (or areas) can be made consistent with the RMU Recover command.
40.452 – CORRUPT
storage area is corrupt, indices cannot be verified Explanation: A corrupt storage area was found; indices cannot be verified. User Action: Verify the database using the Areas qualifier to find out which storage area is corrupt.
40.453 – CORRUPTPG
Page <num> in area <str> is marked as corrupt. Explanation: The specified page is unreadable. User Action: Use the RMU Restore command to restore the page from a backup. Then use the RMU Recover command to apply any changes since the last backup.
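For example, corrupt pages can be restored and any subsequent changes recovered with commands along these lines (file names are illustrative):

$ RMU/RESTORE/JUST_CORRUPT MF_PERSONNEL.RBF
$ RMU/RECOVER MF_PERSONNEL.AIJ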
40.454 – CORRUPTSP
Snapshot page <num> for live area <str> is marked as corrupt. Explanation: The specified snapshot page is unreadable. User Action: Use the RMU Repair or the RMU Set Corrupt_Pages command to reset the snapshot page.
40.455 – CORUPTFLG
area <str> is marked corrupt Explanation: The corrupt flag is set indicating that this area may be corrupt. This could be caused by an aborted utility or a batch update transaction. Verification of the FILID continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.456 – CORUPTSNP
snapshot area <str> is marked corrupt. Explanation: The corrupt flag is set indicating that this area may be corrupt. Verification of the FILID continues. User Action: Correct the error with the RMU Repair command and verify the database again.
40.457 – CPTCHNG
Corrupt Page Table for storage area <str> has changed during an online restore Explanation: During an online RMU Restore operation in which the Just_Corrupt qualifier was specified, the Corrupt Page Table of a storage area became full. When a Corrupt Page Table becomes full, the storage area is marked as corrupt. User Action: Repeat the online RMU Restore with the Just_Corrupt qualifier.
40.458 – CPTFULERR
The corrupt page table has overflowed. Explanation: The maximum number of individual corrupt pages that can be processed in one RMU Restore or RMU Recover command is 127. User Action: Reduce the number of corrupt pages to be processed by restoring one or more entire areas to reduce the number of corrupt pages that need be recorded. Or, restore and recover the corrupt pages in multiple operations rather than a single operation.
40.459 – CPTISFULL
Page <num> in area <num> caused the corrupt page table to overflow. Explanation: You are trying to mark a page corrupt, but an attempt to insert that page caused the corrupt page table to overflow. The storage area for this page had the most entries in the corrupt page table. That area was marked as corrupt and all entries for corrupt pages in that area were removed from the corrupt page table. Since that entire area is now corrupt, this page was not added to the corrupt page table. User Action: Use RMU RESTORE to restore an uncorrupted version of the storage area.
40.460 – CSIBADPARAM
Network error: Invalid parameter. Explanation: This error indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.461 – CSIREADERR
Network error: Error on read. Explanation: An error was encountered while attempting a read operation. This can happen because your server terminated abnormally or there is an internal error. User Action: Check that your server is still running. If this is not the problem, contact your Oracle support representative for assistance.
40.462 – CSITERMINATE
Network error: Process should terminate. Explanation: The monitor has sent a terminate message to this process. The process should do cleanup work and terminate. User Action: Contact your Oracle support representative for assistance.
40.463 – CSIWRITERR
Network error: Error writing to file. Explanation: An error was encountered while attempting a write operation. This can happen because your server terminated abnormally or there is an internal error. User Action: Check that your server is still running. If this is not the problem, contact your Oracle support representative for assistance.
40.464 – CSI_NYI
Network error: feature not yet implemented. Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.465 – CVRTUNS
The minimum database version that can be converted is version <num>. Explanation: Your database version is too old to be converted. User Action: You must convert your database using two versions of Oracle Rdb. First, convert your database using the minimum version of Oracle Rdb specified in the error text. Next, convert again using the version of Oracle Rdb that reported the original error. For example, Oracle Rdb V7.0 can only convert Oracle Rdb databases of V5.1 or greater. If you are trying to convert from V4.2 to V7.0, you must first convert using Oracle Rdb V5.1, V6.0, or V6.1. Then you can convert the database again using V7.0.
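For example, converting a V4.2 database to V7.0 takes two passes (the database name is illustrative; run each command under the corresponding Oracle Rdb version):
$ RMU/CONVERT MF_PERSONNEL    ! first, under Oracle Rdb V6.1
$ RMU/CONVERT MF_PERSONNEL    ! then, under Oracle Rdb V7.0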
40.466 – DATACMIT
unjournaled changes made; database may not be recoverable Explanation: Changes have been made to the database while AIJ journaling was disabled. This may result in the database being unrecoverable in the event of database failure; that is, it may be impossible to roll forward the after-image journals, due to a transaction mismatch or attempts to modify objects that were not journaled. User Action: IMMEDIATELY perform a full database backup. Following successful completion of the full database backup, the after-image journals may be backed up.
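A full database backup can be performed as in the following sketch (the database and backup file names are illustrative):
$ RMU/BACKUP MF_PERSONNEL MF_PERSONNEL_FULL.RBF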
40.467 – DATAEXCEED
Data may not be unloaded as unloaded record length may exceed the RMS limit of 32767 Explanation: The sum of the field lengths exceeds the RMS limit of 32767. Data exceeding the limit cannot be unloaded. User Action: Make sure the sum of the field lengths does not exceed 32767.
40.468 – DATATBLCMIT
logical area <num> marked corrupt; unjournaled changes made to user-defined object Explanation: Changes have been made to the database while AIJ/RUJ journaling was disabled. The specified area cannot be properly recovered. User Action: Drop the area.
40.469 – DATCNVERR
conversion error in date string Explanation: The date string is not in one of the legal formats. As a result, it cannot be converted to the DATE data type. User Action: Re-enter the DATE data item in one of the correct formats.
40.470 – DATNOTIDX
Row in table <str> is not in any indexes. Logical dbkey is <num>:<num>:<num>. Explanation: The row with the specified dbkey should exist in all indexes defined for the table, but it is not in any of them. User Action: Recreate the indexes for the table.
40.471 – DB700NOTSUP
<str> a pre-beta test 5 version of a T7.0 database is not supported Explanation: You have attempted to convert a T7.0 database that was created with a version of Oracle Rdb software prior to beta test 5 or you have attempted to restore a T7.0 database that was backed up with a version of Oracle Rdb software prior to beta test 5. This is not supported. User Action: Do one of the following: o Recreate the database using the current version of Oracle Rdb. o Unload the database using the Oracle Rdb T7.0 beta test software and then load the database using the current version of Oracle Rdb. o Export the database using the Oracle Rdb T7.0 beta test software and then import the database using the current version of Oracle Rdb.
40.472 – DBACTIVE
database is already being used Explanation: You attempted to open a database that is already being used. You can only open a database that is not being accessed. User Action: Wait for all users to finish using the database, or force the users off by closing the database.
40.473 – DBBUSY
database is busy - try again later Explanation: You attempted to access a database that is shut down. User Action: Wait for the database to become available, and try again.
40.474 – DBCRUPT
database is corrupt Explanation: Your database is not a valid Oracle Rdb database. This can happen if the SQL DEFINE DATABASE statement does not terminate normally. User Action: Create your database again.
40.475 – DBKSTRTYP
storage type of line <num>:<num>:<num> is <num> Explanation: There is probably some corruption in the line. User Action: Check if the storage type is correct for the line in question. The storage type for the relations is the relation identifier and for indices, it is a constant.
40.476 – DBMODIFIED
database has been modified; AIJ roll-forward not possible Explanation: The database has been modified. Consequently, performing a "full" roll forward of an after-image journal is not possible, because the transaction integrity of the database would be compromised by such an operation. Note that the AIJ roll-forward utility sometimes converts the /AREA or /PAGE roll-forward operation into a "full" roll-forward operation, if all of the specified objects do not need recovery. In this case, this message can be received even when the /AREA or /PAGE qualifiers are explicitly specified by the user. User Action: An after-image journal MUST be rolled forward BEFORE any database modifications are made. However, "by area" and "by page" after-image journal roll-forward operations are still permitted.
40.477 – DBNOAIJ
database does not have AIJ enabled Explanation: You attempted to start an AIJ log server for a database that does not have AIJ enabled. User Action: Enable AIJ for the database, and try again.
40.478 – DBNOAIJFC
database does not have AIJ "fast commit" enabled Explanation: You attempted to start an AIJ Log Server for database replication purposes on a database that does not have the AIJ "fast commit" feature enabled. User Action: Enable the AIJ "fast commit" feature for the database, and try again.
40.479 – DBNOGB
database does not have global buffers enabled Explanation: The database cannot be opened with the specified global buffer parameters because the database does not have global buffers enabled. User Action: Retry the open operation without specifying global buffer parameters.
40.480 – DBNOTACTIVE
database is not being used, or must be manually opened first Explanation: You attempted to close a database that is not open, or you attempted to access a closed database that requires manual open. User Action: There is no need to close the database - it is already closed. If you are attempting to access a closed database that requires manual open, open the database first.
40.481 – DBNOTOPEN
database is not open for access Explanation: The database must be opened to allow users to access it. User Action: Open the database and try again.
40.482 – DBOPNNOTCOMP
database is open on another node in a mode not compatible with this node Explanation: Another node has already opened the database and the database uses some feature that makes it impossible to concurrently open the database on this node. For example, if Row Cache is enabled, then all nodes must be able to share memory (OpenVMS Galaxy). If global buffers are enabled then every node that is a member of the same Galaxy system must use the same global buffer parameters when opening the database.
40.483 – DBRABORTED
database recovery process terminated abnormally Explanation: A detached database recovery process failed to recover a transaction. User Action: Examine the database monitor log file and any SYS$SYSTEM:*DBRBUG.DMP bugcheck dump files for more information.
40.484 – DBRBOUND
attach not allowed while your process is being recovered Explanation: The database recovery process (DBR) is currently recovering an image for your process. While the recovery operation is running, you cannot start another image that attempts to attach/bind to the database. User Action: You can attach/bind to another database. Otherwise, you must wait for the database recovery process to complete recovery of your previous image.
40.485 – DBSHUTDOWN
database shutdown is in progress Explanation: The request you made could not be completed because the database is being shut down. User Action: Examine the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.486 – DB_CVT_FAIL
Cannot convert from version V<num>.<num> to V<num>.<num> Explanation: You cannot restore the database unless it can be converted to the target version of the restore operation. User Action: Restore this file with the same or a higher version of Oracle Rdb than was used to perform the backup operation.
40.487 – DEADLOCK
deadlock on <str> Explanation: The operation you attempted has been forbidden by the database management system because it would have led to a system deadlock. User Action: Execute a ROLLBACK or a COMMIT to release your locks, and try the transaction again.
40.488 – DEFCMPNOCMP
The specified multiple tape density should be DEFAULT, COMPACTION or NOCOMPACTION Explanation: An invalid multiple tape density value has been specified for this SCSI, TA90, TA90E, or TA91 tape device. A value of COMPACTION, NOCOMPACTION, or DEFAULT should have been specified with the DATA_FORMAT qualifier. The incorrect multiple tape density value will be ignored and a correct value assumed. User Action: Consider specifying the DEFAULT, COMPACTION, or NOCOMPACTION value for this device using the DATA_FORMAT qualifier.
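For example, to request compaction explicitly with the DATA_FORMAT qualifier (the device and file names are illustrative):
$ RMU/BACKUP/DATA_FORMAT=COMPACTION MF_PERSONNEL MUA0:MF_PERSONNEL.RBF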
40.489 – DEFREADER
The number of reader threads has been adjusted to <num>. Explanation: The /THREADS value either exceeds the number of database storage areas or is less than the number of input devices. The value is being changed to the next valid value. User Action: None. To avoid this message, specify the number of reader threads displayed in this message.
40.490 – DEFWRITER
The number of writer threads has been adjusted to <num>. Explanation: The /WRITER_THREADS value exceeds either the number of database storage areas or the number of output devices. This would create writer threads with no work to do. The value is being changed to the largest possible valid value. User Action: None. To avoid this message, specify the number of writer threads displayed in this message.
40.491 – DELETEFAILS
Cache file <str> deletion failed Explanation: An attempt was made to delete a file that does not exist. User Action: Check for the existence of the cache file.
40.492 – DELIMQUAL
Qualifier only valid with delimited text "<str>". Explanation: A qualifier that is appropriate only when delimited text is being used in a record definition has been specified without also specifying the Format=Delimited_Text qualifier. User Action: If the record definition really uses delimited text, then specify the Format=Delimited_Text qualifier.
40.493 – DELPRC
database attach has been terminated Explanation: The user's attach has been terminated. User Action: This error message indicates that a request was made to eliminate this user's database attach. The termination may have been requested due to a database action such as closing the database with the ABORT=DELPRC option, or potentially an unrecoverable error was encountered by the database system that necessitated terminating the user.
40.494 – DELROWCOM
For DELETE_ROWS or FLUSH=ON_COMMIT the COMMIT_EVERY value must equal or be a multiple of the ROW_COUNT value. The COMMIT_EVERY value of <num> is not equal to or a multiple of the ROW_COUNT value of <num>. Explanation: For DELETE_ROWS or FLUSH=ON_COMMIT, the COMMIT_EVERY value must equal or be a multiple of the ROW_COUNT value to prevent possible loss of data written to the unload file if there is an error. The COMMIT_EVERY value is not equal to or a multiple of the ROW_COUNT value. User Action: Specify a COMMIT_EVERY value that is equal to or a multiple of the ROW_COUNT value.
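For example, a COMMIT_EVERY value that is an exact multiple of the ROW_COUNT value satisfies this rule (the database, table, and file names are illustrative):
$ RMU/UNLOAD/ROW_COUNT=500/COMMIT_EVERY=1000 MF_PERSONNEL EMPLOYEES EMPLOYEES.UNL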
40.495 – DENSITY
<str> does not support specified density Explanation: The specified tape device does not support the requested tape density. User Action: Specify a supported tape density or use the default for the device.
40.496 – DEPR_FEATURE
Deprecated Feature: <str> (replaced by <str>) Explanation: The qualifier specified will be removed from RMU in a future release. It has been replaced with another qualifier. The Noareas qualifier for the RMU Move_Areas command has been replaced by the All_Areas qualifier because the Noareas qualifier was confusing and misleading. User Action: Switch to using the All_Areas qualifier in place of the Noareas qualifier.
40.497 – DIRNOTFND
directory not found. Explanation: The specified directory does not exist on the specified device. User Action: Verify that the device and/or directory are specified correctly. Create the directory if necessary, or specify an existing directory.
40.498 – DISABLEDOPTION
The <str> option is temporarily disabled and will be ignored Explanation: The option specified is currently disabled and will be ignored. User Action: Do not specify this option to avoid this warning message.
40.499 – DLMNOTFND
<str> (<str>) not found for column <num> of row <num> in the input. Explanation: Either the prefix, suffix, separator, or terminator was not found. User Action: Correct the input file, and reissue the command.
40.500 – DOENBLAIJ
after-image journaling must be enabled to ensure recovery Explanation: After adding an AIJ journal, it is necessary to enable AIJ journaling (if it is not already enabled). Failure to enable AIJ journaling will result in the AIJ file NOT being recoverable. User Action: IT IS HIGHLY RECOMMENDED that after-image journaling be enabled AS SOON AS POSSIBLE.
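As a sketch (the database name is illustrative), journaling can be enabled with the RMU Set After_Journal command:
$ RMU/SET AFTER_JOURNAL/ENABLE MF_PERSONNEL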
40.501 – DOFULLBCK
full database backup should be done to ensure future recovery Explanation: After enabling AIJ journaling, it is often necessary to perform a full (i.e., not incremental) database backup. Failure to back up the database may result in the AIJ file NOT being recoverable. User Action: IT IS HIGHLY RECOMMENDED that a full database backup be performed AS SOON AS POSSIBLE.
40.502 – DO_FULL_BACKUP
A full backup of this database is recommended upon completion of the RMU Convert command Explanation: After-image journaling was disabled during the RMU CONVERT. Any existing backups of this database are now obsolete. User Action: Do a full RMU BACKUP of this database.
40.503 – DSEGDBKEY
Data segment is at logical dbkey <num>:<num>:<num>. Explanation: This message is issued when segmented string context is dumped after a possible corruption is found. It reports the logical dbkey of the data segment currently being verified. User Action: This message is informational. No action is required.
40.504 – DTYPECVTERR
data type conversion error Explanation: A conversion error occurred during formatting of database metadata. User Action: See the secondary message for more information.
40.505 – DUPAIJFIL
duplicate AIJ filename "<str>" specified Explanation: A duplicate AIJ file name was specified during AIJ journal addition. Each AIJ file name is used to identify a specific journal and must be unique within a database. User Action: Please specify a unique AIJ filename.
40.506 – DUPAIJNAM
duplicate AIJ name "<str>" specified Explanation: A duplicate AIJ name was specified during AIJ journal addition. Each AIJ name is used to identify a specific journal and must be unique within a database. User Action: Please specify a unique AIJ name.
40.507 – DUPBTRDBK
Dbkey of duplicate node for data node is <num>:<num>:<num> Explanation: This message gives the dbkey of the duplicate node, so that the integrity of the index can be verified manually, if necessary. User Action: Ascertain if the index is corrupt by manually verifying the owner node after dumping that page. If the index is corrupt, rebuild it.
40.508 – DUPCCHNAM
record cache "<str>" already exists Explanation: A duplicate record cache name was specified. The name used to identify a cache must be unique within a database. User Action: Please specify a unique record cache name.
40.509 – DUPEXECNAME
Executor name "<str>" has already been used. Explanation: In specifying a plan file, the same executor name was used multiple times. User Action: Make each executor name unique.
40.510 – DUPFILNAM
same storage area filename <str> for area ids <num> and <num> Explanation: Two storage areas were found pointing to the same physical filename. This situation can happen during an RMU Convert command if an options file was used and more than one storage area name was assigned to the same physical filename. User Action: Correct any errors in the convert options file and rerun the RMU Convert command.
40.511 – DUPHSHDBK
Dbkey of duplicate hash bucket for data node is <num>:<num>:<num> Explanation: This message gives the dbkey of the duplicate hash bucket so the integrity of the index can be verified manually, if necessary. User Action: Ascertain if the index is corrupt by manually verifying related system records and hash buckets after dumping pages of the database. If the index is corrupt, rebuild it.
40.512 – DUPLAREAID
Logical area id <num> has been specified more than once Explanation: The logical area id number displayed has been specified more than once. User Action: Correct the error and try again.
40.513 – DUPNOTFND
duplicate B-tree node not found at dbkey <num>:<num>:<num> Explanation: A duplicate B-tree index node was expected at the given dbkey, but was not found. The pointer to the duplicate node in the B-tree is probably corrupt. User Action: Ascertain if the index is corrupt by manually verifying related index nodes after dumping pages of the database. Rebuild the index if it is corrupt.
40.514 – DUPOWNDBK
Dbkey of owner of this duplicate node is <num>:<num>:<num> Explanation: There is an error with a duplicate node. This message gives the dbkey of the owner of the duplicate node, so that the integrity of the index can be verified manually, if necessary. User Action: Ascertain if the index is corrupt by manually verifying the owner node after dumping that page. Rebuild the index if it is corrupt.
40.515 – DUPRELNAM
same relation name <str> for relation ids <num> and <num> Explanation: Two relations were found with the same name. User Action: Two relations with the same name indicates that your database is corrupt. Restore the database and roll forward.
40.516 – DUPSTAREA
same area name <str> for area ids <num> and <num> Explanation: Two storage areas were found with the same name. User Action: Restore the database, roll the database forward, and verify it again.
40.517 – DYNPCL
deposit not allowed, dynamic data item Explanation: You attempted to deposit a dbkey value into a dynamic data item cluster. User Action: This is not allowed. Dbkeys are only allowed in set clusters.
40.518 – EDTSTRUNC
filename edits "<str>" truncated Explanation: The internal representation of filename edits is limited to a maximum of 31 characters. The specified filename edits were truncated to the maximum of 31 characters. User Action: No action required. If desired, the filename edit specification can be shortened.
40.519 – EMPTYAIJ
after-image journal file is empty Explanation: A recovery operation was attempted on an empty after-image journal file, or the UNTIL time predates any journaled transactions. The former can happen if no transactions were initiated while after-image journaling was in progress. User Action: Correct the error and try again.
40.520 – EMPTYFILE
<str> file is empty Explanation: The file is empty. User Action: None.
40.521 – ENCRYPTALGMIS
Encryption algorithm name is missing. Explanation: The /ENCRYPTION=(ALGORITHM=algorithm_name) qualifier is missing. User Action: Specify a valid encryption algorithm with your key.
40.522 – ENCRYPTKEYVAL
Specify encryption key name or a key value but not both. Explanation: Either /ENCRYPTION=(NAME=key-name) or /ENCRYPTION=(VALUE=key-value) is missing, or both have been specified. User Action: Specify either a key name or a key value, but not both.
40.523 – ENCRYPTNOMAT
decryption parameters do not match encryption parameters Explanation: The specified parameters for decryption do not match the parameters used for encryption. User Action: Specify the key value and key algorithm used to create the save set.
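As a sketch (the key value, algorithm name, and file names are all illustrative), the restore must repeat the encryption parameters that were given at backup time:
$ RMU/BACKUP/ENCRYPT=(VALUE="MySecret", ALGORITHM=DESCBC) MF_PERSONNEL MF_PERSONNEL.RBF
$ RMU/RESTORE/ENCRYPT=(VALUE="MySecret", ALGORITHM=DESCBC) MF_PERSONNEL.RBF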
40.524 – ENCRYPTSAVSET
save set is encrypted, /ENCRYPT must be specified Explanation: The save set is encrypted, but no encryption parameters were specified. User Action: Specify the key value and key algorithm used to create the save set.
40.525 – ENCRYPTUSED
Encryption key required when future restore performed. Explanation: This backup has been encrypted. Therefore any future restore of this backup file will require the same encryption key that was used to create this backup file. User Action: Make sure the encryption key used for this backup is saved since it will be required to restore this backup.
40.526 – ENDEXTRACT
elapsed time for metadata extract : <time> Explanation: This message provides the elapsed time for the requested metadata to be extracted. This information is provided to help you schedule extracts from this database in the future.
40.527 – ENDVERIFY
elapsed time for verification : <time> Explanation: The elapsed time for the complete verification is given in this message to help schedule future verifications of the database.
40.528 – ENQFAILURE
Unable to enqueue lock. Explanation: A lock required by RMU was not granted by the lock manager. User Action: Refer to the secondary message to identify why the lock was not granted. If the secondary message specifies a user correctable action (such as EXQUOTA), correct the problem. If the secondary message specifies an internal error (such as ACCVIO), contact your Oracle support representative for assistance.
40.529 – ENVALRDYALLOC
Network error: Environment already allocated. Explanation: An attempt was made to initialize the RMU client environment more than once. User Action: Contact your Oracle support representative for assistance.
40.530 – ERRDATFET
error fetching data record from B-tree index node <num>:<num>:<num> Explanation: An error occurred during a fetch of a data record from a B-tree node. User Action: Ascertain if the index is corrupt by manually verifying related index nodes after dumping pages of the database. If the index is corrupt, rebuild it.
40.531 – ERRDET
an error was detected Explanation: An "error level" (-E-) error condition was detected by RMU and displayed during the execution of an RMU statement. User Action: If possible, run the statement(s) again using RMU and read any additional RMU error messages to determine what caused the error condition. Then fix the error.
40.532 – ERRDEVCHO
Error reading device characteristics for process output. Explanation: An attempt to determine the device characteristics of the output file for the process failed with an error. See the secondary error for more information.
40.533 – ERRDPHBKT
error fetching duplicate hash bucket Explanation: An error occurred during a fetch of a duplicate hash bucket. User Action: Ascertain if the index is corrupt by manually verifying related system records and hash buckets after dumping pages of the database. If the index is corrupt, rebuild it.
40.534 – ERRDSGFET
Error fetching data segment from segmented string. Explanation: An error occurred during an attempt to fetch a data segment from a segmented string. See accompanying messages for the segmented string context at the time of the error. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.535 – ERRDSGHDR
Errors found verifying a data segment header. Explanation: This message indicates that errors were found during verification of the KODA storage record header of a segmented string's data segment. See accompanying messages for the specific errors and for the segmented string context. User Action: This message is informational. No action is required.
40.536 – ERRDUPFET
error fetching duplicate B-tree index node Explanation: An error occurred during a fetch of a duplicate B-tree node. User Action: Ascertain if the index is corrupt by manually verifying related index nodes after dumping pages of the database. Rebuild the index if it is corrupt.
40.537 – ERREXCCMD
Error executing command "<str>". Explanation: The executor encountered an error executing the specified command. User Action: Examine the secondary message or messages. Correct the error and try again.
40.538 – ERREXCPLN
Error executing plan file <str>. Explanation: The executor encountered an error executing the specified plan file. User Action: Examine the secondary message or messages. Correct the error and try again.
40.539 – ERRFOREIGN
error opening foreign command file as input Explanation: An error occurred during the reading of a foreign command file. User Action: Examine the secondary message for more information.
40.540 – ERRGATFRG
error gathering fragmented record at <num>:<num>:<num> Explanation: An error occurred while the segments of a fragmented record were being gathered. Other related error messages may give more information. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.541 – ERRHSHBKT
error fetching hash bucket Explanation: An error occurred during a hash bucket fetch. User Action: Ascertain if the index is corrupt by manually verifying related system records and hash buckets after dumping pages of the database. If the index is corrupt, rebuild it.
40.542 – ERRMETA
Error getting metadata <str>. Explanation: An error occurred while getting database metadata. User Action: Examine the secondary message or messages. Correct the error and try again.
40.543 – ERROPENIN
error opening <str> as input Explanation: An error occurred during opening of an input file. User Action: Examine the secondary message for more information.
40.544 – ERROPENOUT
error opening <str> as output Explanation: An error occurred during opening of an output file. User Action: Examine the secondary message for more information.
40.545 – ERRPSGFET
Error fetching pointer segment of segmented string. Explanation: An error occurred during an attempt to fetch a pointer segment from a segmented string. See accompanying messages for the segmented string context at the time of the error. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.546 – ERRPSGHDR
Errors found verifying a pointer segment header. Explanation: This message indicates that errors were found during verification of the KODA storage record header of an indexed segmented string's pointer segment. See accompanying messages for the specific errors and for the segmented string context. User Action: This message is informational. No action is required.
40.547 – ERRRDBIND
error accessing RDB$INDICES relation Explanation: Unable to get the list of indexes from the RDB$INDICES system relation. Indexes cannot be verified if requested. Very minimal verification can be performed. User Action: Rebuild the indexes.
40.548 – ERRRDBREL
error accessing RDB$RELATIONS relation Explanation: It is not possible to get information from the RDB$RELATIONS system relation. Only minimum verification can be done. User Action: The database may need to be restored.
40.549 – ERRRDBSEG
Error getting segmented string data from system tables. Segmented strings will not be verified. Explanation: It is not possible to get information about segmented strings from the system tables. Segmented strings will not be verified. User Action: The database may need to be restored.
40.550 – ERRSEGFET
Error fetching segmented string's primary segment. Explanation: An error occurred during an attempt to fetch the primary (first) segment from a segmented string. See accompanying messages for the segmented string context at the time of the error. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.551 – ERRSYSFET
Error fetching system record. Explanation: An error occurred during an attempt to fetch the system record. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.552 – ERRSYSREL
error accessing some system relations Explanation: It is not possible to get information from some system relations. Only minimum verification can be done. User Action: The database may need to be restored.
40.553 – ERRWRITE
error writing file Explanation: An error occurred during a file write. User Action: Examine the secondary message for more information.
40.554 – EXCCGRMAX
Exceeded maximum limit of columns for column group in table <str>. Explanation: The maximum number of columns (15) in a column group has been exceeded. User Action: Correct the column group and try again.
40.555 – EXECDISCONN
Network error: Executor needs to be unbound. Explanation: Executor needs to be unbound. User Action: Unbind the executor.
40.556 – EXECFAIL
Network error: Executor failure. Explanation: The executor was unable to complete the client request. User Action: Contact your Oracle support representative for assistance.
40.557 – EXECPROTERR
Network error: Executor protocol violated by API driver. Explanation: The executor protocol has been violated by the API driver. This error occurs if the API driver issues a request such as receive, send, and so on, before binding to an executor. This error also occurs if the API driver issues unbind twice in a row. User Action: Issue requests in the correct order.
40.558 – EXECUNBOUND
Network error: Executor is in an unbound state. Explanation: Executor is in an unbound state. User Action: Bind to the executor.
40.559 – EXNODECNT
database cannot be opened on this node -- maximum node count (<num>) exceeded Explanation: The database cannot be opened on this node, because it has already been opened on the maximum allowable number of nodes. User Action: Consider increasing the maximum number of nodes configured for the database.
40.560 – EXPORTCOR
Export file <str> is corrupt Explanation: The format of the file does not match the Oracle Rdb interchange format specification. Either the EXPORT operation used features not available in this version of Oracle Rdb, or the file itself has become corrupted. User Action: None.
40.561 – EXQUOTA
exceeded quota Explanation: The image could not proceed because a resource quota or limit had been exceeded. User Action: The secondary error message describes the resource that was exceeded. If this occurs consistently, increase your quota.
40.562 – EXTRADATA
Extra data in row <num> has been ignored. Explanation: A row of a table has been successfully loaded, but extraneous data at the end of the delimited text record has been ignored. User Action: If it is expected that there are more columns of data in the delimited text input file than there are columns in the table being loaded, no action is necessary. Otherwise, check the results of the load and reissue the command.
40.563 – EXTRAREADERS
"<num>", the number of reader threads, exceeds "<num>", the number of output files or master tape devices. Explanation: If restoring from tape devices, the number of tape volumes specified by "/VOLUMES" may exceed the number of MASTER tape devices. For "/DISK_FILE" restores, the number of "/READER_THREADS" specified may be larger than the number of backup files in the backup set. For "/LIBRARIAN" restores, the number of "/READER_THREADS" specified may be greater than the number of backup files or tape volumes in the media manager backup set. User Action: Correct any incorrect values specified by the /READER_THREADS or /VOLUMES qualifiers if one of them was specified on the command line and repeat the restore. Otherwise contact your Oracle support representative for assistance.
40.564 – EXTSRTSTAT
Records:<num> Merges:<num> Nodes:<num> WorkAlq:<num> Explanation: During extraction operations, statistics are often collected to aid the user in tuning. This message displays statistics.
40.565 – FAILCSETTBL
A character set information table was not created Explanation: The RMU Extract command could not create a table containing character set information. User Action: Call the Customer Support Center.
40.566 – FAILKEYTBL
An SQL keyword table was not created Explanation: The RMU Extract command could not create a table containing SQL keywords. User Action: Call the Customer Support Center.
40.567 – FAILOOKTBL
The lookup of an SQL keyword failed Explanation: The RMU Extract command was unable to look up an SQL keyword. User Action: Contact your Oracle support representative for assistance.
40.568 – FATALERR
fatal error on <str> Explanation: The operation was aborted because very severe errors were detected. User Action: Examine the other errors reported and take the appropriate corrective action.
40.569 – FATALOSI
Fatal error from the Operating System Interface. Explanation: An unexpected and unhandled error occurred in an operating system or library function. User Action: Refer to the operating system or library documentation for further explanation and possible corrective action.
40.570 – FATALRDB
Fatal error while accessing Oracle Rdb. Explanation: An unexpected and unhandled error occurred while accessing Oracle Rdb. The preceding messages describe the problem. User Action: Refer to the Oracle Rdb documentation for further explanation and possible corrective action.
40.571 – FIDCKSBAD
this filid contains an invalid checksum expected: <num>, found: <num> Explanation: The checksum on the FILID entry is incorrect. Verification of the FILID continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.572 – FIDCSMFLG
the CHECKSUM flag must be set but it is not Explanation: The CHECKSUM flag must be set. Verification of the FILID continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.573 – FIDPNOBAD
the highest page of a CALC SET is greater than the maximum page number of the area. Maximum page number is <num>, highest page number is <num> Explanation: The highest page number of a CALC SET is greater than the maximum page number of the area. Verification of the FILID continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.574 – FILACCERR
error <str> file <str> Explanation: A file-access error occurred. User Action: Examine the secondary message for more information.
40.575 – FILCREERR
Network error: Error creating file. Explanation: An error was encountered creating a new file. User Action: Examine the secondary message or messages. Correct the error and try again.
40.576 – FILELOCK
file currently locked by another user. Explanation: An attempt to open or create a file failed. Another user has the file open in a mode incompatible with the attempted access. User Action: Wait until the other user has unlocked the file. If the file cannot be shared, modify the program to detect and respond to this condition by waiting.
40.577 – FILEQUAL
Qualifier only valid on files "<str>". Explanation: A qualifier that is valid only for backup to files is being used with something that is not a file. User Action: Do not use this qualifier, or change the type of device.
40.578 – FILEXISTS
the specified file already exists Explanation: An attempt was made to create a file, but the specified file already exists. User Action: If this error is returned while attempting to perform an RMU Backup operation using the Librarian qualifier, then it indicates that the backup filename given in the command line already exists in the media manager's storage system. Either provide a new unique backup filename or use the Replace keyword with the Librarian qualifier to instruct the media manager to replace the existing backup file with a new one.
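For example, assuming a database named MF_PERSONNEL and an existing media manager backup named PERS_BACKUP, the Replace keyword might be supplied as follows (the exact Librarian keyword list varies by version; check the RMU Backup help):

$ RMU/BACKUP/LIBRARIAN=(REPLACE) MF_PERSONNEL PERS_BACKUP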
40.579 – FILNOTFND
file not found Explanation: An error occurred when an attempt was made to open a nonexistent storage file. User Action: If a storage file does not exist in the expected directory, it must be restored if it was moved accidentally, or the database needs to be restored from the last backup.
40.580 – FILOPNERR
Network error: Error opening file. Explanation: An error was encountered opening an existing file. User Action: Examine the secondary message or messages. Correct the error and try again.
40.581 – FILREADERR
Network error: Error reading file. Explanation: An error was encountered reading from a file. User Action: Examine the secondary message or messages. Correct the error and try again.
40.582 – FILSIGNATURE
standby database storage area signature does not match master
database
Explanation: The number of storage area slots ("reserved"), or
the specific storage area page size, are not identical on both
the master and standby databases.
User Action: Make sure both the master and standby database
storage area configurations are identical. Do not change any
storage area page size when restoring the databases.
40.583 – FLDMUSMAT
Specified fields must match in number and datatype with the unloaded data Explanation: The unloaded data can not be loaded because the specifications of the unloaded data and the target relation are incompatible. User Action: Check the use of the Fields qualifier.
40.584 – FLDNOTFND
Referenced global field (<str>) was not defined Explanation: This field was referenced by the relation definition but not found in the metadata, or it was specified in the command but not referenced by the relation definition. User Action: Validate and correct the database metadata or the Fields qualifier.
40.585 – FLDNOTUNL
Referenced global field (<str>) not unloaded Explanation: This field was referenced by the table definition but not found in the metadata, or it was specified in the command but not referenced by the table definition. User Action: Validate and correct the database metadata or the Fields qualifier.
40.586 – FRACHNPOS
pointed to by fragment on page <num>, line <num> Explanation: This message is printed after a fragment chain verification error has occurred, indicating the previous storage record occurrence of the invalid fragment chain.
40.587 – FRAOFFPAG
area <str>, page <num>, line <num> Offset for the end of the storage record is <num>, which is greater than the page size of <num>. Explanation: The location information stored in the line index for a storage segment indicates that part or all of the storage segment is beyond the end of the page. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification.
40.588 – FREEBADBUFF
attempt to free busy (or free) buffer Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.589 – FTL_INS_STAT
Fatal error for INSERT OPTIMIZER_STATISTICS operation at <time>
40.590 – FULLAIJBKUP
partially-journaled changes made; database may not be recoverable Explanation: Partially-journaled changes have been made to the database. This may result in the database being unrecoverable in the event of database failure; that is, it may be impossible to roll forward the after-image journals, due to a transaction mismatch or attempts to modify objects that were not journaled. This condition typically occurs as a result of replicating database changes using the Hot Standby feature. User Action: IMMEDIATELY perform a full (not by-sequence) quiet-point AIJ backup to clear the AIJ journals, followed immediately by a full (no-quiet-point allowed) database backup.
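The recommended sequence can be sketched as follows, assuming a database named MF_PERSONNEL; the file names are illustrative, and the qualifier spellings should be checked against the RMU Backup help:

$ RMU/BACKUP/AFTER_JOURNAL/QUIET_POINT MF_PERSONNEL AIJ_BACKUP
$ RMU/BACKUP/NOQUIET_POINT MF_PERSONNEL PERS_FULL.RBF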
40.591 – FUTURESTATE
Area <str> represents a future state of the database Explanation: The state of this area is inconsistent with the database and represents a future state of the database. The entire database must be brought forward in time to this future state. User Action: None.
40.592 – GAPONPAGE
unaccounted gap on page <num> free space end offset : <num> (hex) minimum offset of any line : <num> (hex) Explanation: A gap was found between the end of free space and the beginning of the line closest to the beginning of the page. This could be caused by the corruption of locked free space length, free space length, or the line index. User Action: Dump the page in question to determine the corruption. Restore the database and verify again.
40.593 – GETTXNOPTION
Do you wish to COMMIT/ABORT/IGNORE this transaction: Explanation: This prompt asks whether to COMMIT, ABORT, or IGNORE the blocked transaction. User Action: None.
40.594 – GETTXNOPT_NO_IGN
Do you wish to COMMIT/ABORT this transaction: Explanation: This prompt asks whether to COMMIT or ABORT the blocked transaction. User Action: None.
40.595 – GOODAIJSEQ
AIJ file is correct for roll-forward operations Explanation: The specified AIJ file is the correct file to be rolled forward. User Action: No user action is required. This message is informational only.
40.596 – GTRMAXFLT
The float value is greater than maximum allowed. Explanation: The float value specified for the named option is too large. User Action: Use a float value that is less than the maximum value and try again.
40.597 – HASHINDID
error getting the hash indices database id Explanation: It was not possible to get the hash indexes' database ids (DBIDs). User Action: Rebuild the indexes.
40.598 – HAZUSAGE
<str> specified for <str>, Corruption may occur if the value was not correct Explanation: The area qualifiers specified in this command do not agree with the values stored in the root file that was lost. RMU displays this message only to warn you that setting the option to an incorrect value may render the database inaccessible or otherwise corrupt it. User Action: Specify the option on the command line only if the value for the option has changed since the backup file was created. If you are certain the value is correct, then do nothing. Otherwise, determine the correct value for the option and repeat the operation using the correct value.
40.599 – HDRCRC
software header CRC error Explanation: A media error was detected in the tape block header data. This can also result from an attempt to process a file that was not created by RMU; for example, an OpenVMS BACKUP saveset. User Action: None.
40.600 – HELMTCNT
index <str> hash element <num>:<num>:<num> contains a bad duplicate node count, expected: <num>, computed: <num> Explanation: The count field in the hash index element contains a value different from what was computed. User Action: Rebuild the index.
40.601 – HELMTNEG
hash element containing 1 entry has a bad dbkey pointer expected a positive logical area number, found <num> Explanation: If the data record count is 1, then the logical area database id in the dbkey must be positive. This indicates that the dbkey is pointing to a data record. User Action: Rebuild the index.
40.602 – HELMTPOS
hash element containing more than 1 entry has a bad dbkey pointer expected a negative logical area number, found <num> Explanation: If the data record count is greater than 1, then the logical area database id in the dbkey must be negative. This condition indicates that the dbkey is pointing to a duplicate hash bucket. User Action: Rebuild the index.
40.603 – HIBER
thread requests hibernate Explanation: The currently executing internal thread has requested a short term hibernation. User Action: This state should never be seen by a user application. It is only used by the internal threading mechanisms.
40.604 – HIGHCSNINV
Highest CSN (<num>:<num>) is higher than the CSN that will be assigned next (<num>:<num>). Explanation: The database has a CSN that is higher than the KODA sequence number that will be assigned as the next CSN. This indicates that the root file is not consistent.
40.605 – HIGHTSNINV
Highest active TSN (<num>:<num>) is higher than the TSN that will be assigned next (<num>:<num>). Explanation: The database has a TSN that is higher than the KODA sequence number that will be assigned as the next TSN. This indicates that the root file is not consistent.
40.606 – HOTACTVTX
Active transactions prevent replication startup Explanation: The Log Catch Server (LCS) is unable to complete the catch-up phase of replication startup. Active transactions prevented the LCS from acquiring a quiet-point within the specified interval. User Action: Ensure that no extremely long-running transactions are active. Increase the LCS quiet-point timeout interval.
40.607 – HOTADDSWTCH
Hot Standby is active and AIJ switch-over suspended - backup existing journals first Explanation: The AIJ switch-over operation is suspended; performing the requested AIJ journal creation while Hot Standby is active could result in replication being terminated. User Action: Back up one or more existing AIJ journals before creating the new AIJ journal(s).
40.608 – HOTBCKCONT
continuous AIJ backup not permitted when replication active Explanation: The "continuous" AIJ backup operation is not permitted when database replication is active. User Action: Use the AIJ Backup Server (ABS) process when using multiple AIJ journals, or issue non-continuous AIJ backup operations when using a single AIJ journal.
40.609 – HOTCMDPEND
request cannot be completed due to pending hot standby command Explanation: A Hot Standby command was previously requested from this node using the /NOWAIT qualifier; the command has not yet completed. The command just requested cannot be completed until the Hot Standby command already active but pending on this node has first completed. User Action: The pending Hot Standby command must complete before this command can be issued. Use the SHOW USERS command to display the status of the Hot Standby command.
40.610 – HOTEXCHMSG
error exchanging replication message Explanation: User Action:
40.611 – HOTFAILURE
hot standby failure: <str> Explanation: A hot standby failure occurred. User Action: Examine the secondary message for more information.
40.612 – HOTLCLCNCT
error allocating local network connection Explanation: User Action:
40.613 – HOTMISMATCH
standby database version does not match master database Explanation: The version time and date stamp in the standby database root does not match the version time and date stamp in the master database root. Also, it may be possible that the standby database was backed up and restored, thereby invalidating the ability to be replicated. User Action: Ensure that the specified standby database is correct and restored from a master database backup file.
40.614 – HOTNOCIRCEXT
cannot switch from circular to extensible AIJ journaling if replication active Explanation: User Action: Terminate database replication first.
40.615 – HOTNOEXTCIRC
cannot switch from extensible to circular AIJ journaling if replication active Explanation: Adding a new AIJ journal is not allowed while database replication is active, if adding the journal would activate circular journaling. User Action: Terminate database replication first.
40.616 – HOTNOONLINE
attempt to access standby database opened for exclusive access Explanation: An attempt has been made to attach to a standby database for which replication has been started with exclusive access. User Action: Stop replication and re-start with "online" access to the standby database.
40.617 – HOTNORC
record cache not allowed on hot standby database during replication Explanation: The record cache feature must be disabled on the hot standby database during Hot Standby replication. User Action: Open (or re-open) the standby database with the RECORD_CACHE=DISABLED qualifier.
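For example, the standby database might be re-opened with the record cache feature disabled as follows (MF_PERSONNEL is an illustrative database name):

$ RMU/CLOSE MF_PERSONNEL
$ RMU/OPEN/RECORD_CACHE=DISABLED MF_PERSONNEL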
40.618 – HOTOFFLINE
standby database opened for exclusive access Explanation: Hot Standby replication has been started on the master database using exclusive access. This occurs when the /NOONLINE qualifier is used, or the /ONLINE qualifier is not specified during replication startup. When the standby database is in "exclusive" mode, user processes cannot attach to the database. User Action: If exclusive access is not desired, Hot Standby replication must be terminated and restarted using the /ONLINE qualifier.
40.619 – HOTRECVMSG
error receiving replication message Explanation: User Action:
40.620 – HOTREMCNCT
error allocating remote network connection Explanation: User Action:
40.621 – HOTREMDELT
error deleting replication connection Explanation: User Action:
40.622 – HOTREMDSCT
error disconnecting from replication server Explanation: User Action:
40.623 – HOTRWTXACTV
database in use with active or pre-started read/write transactions Explanation: Database replication cannot be started on the standby database if there are processes with active or pre-started read/write transactions. User Action: All read/write transaction activity must be stopped prior to starting database replication on the standby database.
40.624 – HOTSEQBCK
cannot find AIJ journal required to start replication
Explanation: An attempt was made to start database replication
using the Hot Standby feature, but the AIJ journal required by
the standby database could not be found on the master database.
This typically occurs when the AIJ Backup Server ("ABS")
inadvertently backs up the AIJ journal on the master database
following an AIJ switch-over operation.
User Action: The journal specified in the Log Catchup Server
("LCS") output file must be manually rolled forward on the
standby database. Alternatively, the master database must be
backed up and restored as the standby database.
40.625 – HOTSTOPWAIT
stopping database replication, please wait Explanation: This message informs the user that database replication is being stopped and to wait for shutdown to complete. Replication shutdown times vary based on system and network activity. User Action: Wait for database replication to stop.
40.626 – HOTSVRCNCT
error connecting to replication server Explanation: User Action:
40.627 – HOTSVRFIND
error identifying remote replication server Explanation: User Action:
40.628 – HOTWRONGDB
attempt to start replication for wrong master database Explanation: An attempt has been made to start replication on a master database whose standby database is already replicating a different master database. The master root file name does not match the name used when replication was first started on the standby database. This could happen if you copied or renamed the master database root file, or if the file was created using a concealed logical device name and that logical name is no longer defined. User Action: Ensure that the specified standby database is correct. If so, ensure that replication on the standby database has been fully terminated; replication termination occasionally requires long-duration shutdown processing. If the master database root file had been moved, rename or copy the root file back to its original name or location, or redefine the necessary concealed logical device name in the system logical name table.
40.629 – HSHINDCNT
error getting the count of hash indices Explanation: It was not possible to get the count of hash indexes, probably because the system relation is corrupted. User Action: Rebuild the indexes.
40.630 – HSHVFYPRU
hash index verification pruned at this dbkey Explanation: An error occurred during verification of a hash index. Verification will not proceed any further for this hash index. User Action: Verify the page in question, and check if the database needs to be restored.
40.631 – HYPRSRTNOTSUP
Use of HYPERSORT is not supported - SORTSHR logical defined "<str>"
40.632 – IDXAREVFY
Logical area <num> (<str>) needs to be verified as part of index data verification. Explanation: The Data qualifier was specified and verification of the index that was created for the named logical area was requested, but verification of the named logical area was not requested. User Action: Reissue the RMU Verify command with either the Nodata qualifier or with the Larea qualifier listing the named logical area.
40.633 – IDXDATMIS
Index <str> does not point to a row in table <str>. Logical dbkey of the missing row is <num>:<num>:<num>. Explanation: The row with the specified dbkey should exist in the named index but it does not. User Action: Recreate the index for the table.
40.634 – IDXVEREND
Completed data verification of logical area <num>.
40.635 – IDXVERSTR
Beginning index data verification of logical area <num> (<str>).
40.636 – IGNDSBNDX
Ignoring disabled index <str> Explanation: The RMU Analyze Index and RMU Analyze Placement commands do not, by default, process indices which have had maintenance disabled. This message indicates that the specified index will not be analyzed. If you wish to analyze disabled indices, explicitly list the index name on the RMU Analyze command line. Disabled indices should be dropped from the database when convenient.
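For example, a disabled index can still be analyzed by naming it explicitly on the command line (EMP_LAST_NAME and MF_PERSONNEL are illustrative names):

$ RMU/ANALYZE/INDEXES=EMP_LAST_NAME MF_PERSONNEL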
40.637 – IGNJNL
<str> journal ignored Explanation: The specific journal is being removed from the recovery list because it does not appear to be required. User Action: None.
40.638 – IGNORACL
Ignoring foreign RMU access control list Explanation: A restore operation was performed on a Windows/NT system using a backup file created on either an OpenVMS or Digital Unix system. Alternatively, a restore operation was performed on an OpenVMS or Digital Unix system using a backup file created on a Windows/NT system. In either case, RMU is not able to restore the access control list because the format of the ACL differs among operating systems. User Action: Use the RMU Set Privilege command to create a root file ACL for the database that meets the security needs of the new platform on which the database was restored.
40.639 – IGNORSCAN
[No]Scan qualifier ignored for online full backup operations Explanation: The [No]Scan_Optimization qualifier was used for an online full backup operation. The purpose of the qualifier is to enable or disable the recording of the identities of regions of the database that have changed since the last full backup. However, this recording state cannot be changed during an online full backup. In order to change the recording state, an offline full backup must be performed using the [No]Scan_Optimization qualifier.
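A sketch of changing the recording state with an offline full backup follows; the names are illustrative, and this assumes no users remain attached after the close (see the RMU Backup help for the Scan_Optimization qualifier):

$ RMU/CLOSE MF_PERSONNEL
$ RMU/BACKUP/SCAN_OPTIMIZATION MF_PERSONNEL PERS_FULL.RBF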
40.640 – IKEYOVFLW
compressed IKEY for index "<str>" exceeds 255 bytes Explanation: The current index key (IKEY) being stored in a sorted or hashed index with compression enabled has exceeded the 255-byte IKEY-length limit during compression. With IKEY compression enabled, some IKEYs may actually increase in size during compression and potentially exceed the 255-byte limit. User Action: Refer to your reference documentation for details on controlling the maximum expansion overhead during IKEY compression. Alternatively, recreate the specified index with compression disabled.
40.641 – ILLCHAR
illegal character "<str>" encountered Explanation: A non-alphanumeric character has been detected in the command input stream. User Action: Remove the non-alphanumeric character and try again.
40.642 – ILLDBKDATA
illegal dbkey data, <str> out of range [<num>:<num>] Explanation: The area id, page number, or line number of the dbkey specified in the command is out of range. The allowable limits are specified as [<num>:<num>]. User Action: Correct the error and try again.
40.643 – ILLDBKFMT
illegal dbkey format Explanation: The format of the dbkey specified in the command is invalid. The proper format is <area-number>:<page-number>:<line-number>. User Action: Correct the error and try again.
40.644 – ILLJOINCTX
Multiple tables in UPDATE or DELETE action of trigger <str> Explanation: The delete or update actions are performed in the context of join (CROSS) of multiple database tables. This type of trigger action cannot be extracted in SQL and is no longer recommended in RDO. User Action: Recode the join as a single table reference and subqueries as described in the Oracle Rdb documentation.
40.645 – ILLNCHAR
illegal character found in numeric input Explanation: You specified a number containing a non-numeric character. User Action: Correct the error and try again.
40.646 – ILLNUM
numeric conversion failed on "<str>" Explanation: A number was expected and a non-numeric token was encountered. User Action: Correct the error and try again.
40.647 – ILLSEGTYP
Illegal segmented string type of <num> found. Explanation: A segmented string was found that contained either a secondary chained segment or a secondary pointer segment, when a primary chained segment or primary pointer segment was expected. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Then verify the database again.
40.648 – ILLSPAMCODE
trying to deposit an illegal storage fullness code Explanation: The operation is trying to deposit an illegal space code value into a data page entry on the current space management page. The codes range from 0 to 3 inclusive. User Action: Try the operation again using a value between 0 and 3 inclusive.
40.649 – ILLTIM
ASCII to binary time conversion failed on "<str>" Explanation: An invalid time format was specified in a DEPOSIT TIME_STAMP command. The proper format is "dd-mmm-yyyy hh:mm:ss.cc". User Action: Correct the error and try again.
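For example, a DEPOSIT TIME_STAMP command using the required "dd-mmm-yyyy hh:mm:ss.cc" format might look like this (the timestamp value is illustrative):

DEPOSIT TIME_STAMP "14-JUN-2005 15:30:00.00"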
40.650 – IMGABORTED
image aborted at privileged user request Explanation: The current image was aborted by another privileged user, typically the database administrator, in response to some event that required this action. User Action: Consult the database administrator to identify the reason the image was aborted.
40.651 – INALVAREA
FILID entry <num> is a snapshot area that points to an inactive live area. Explanation: The FILID entry for each snapshot area contains a pointer to the FILID entry for the live area associated with the snapshot area. The live area associated with the named snapshot area is not an active area. User Action: Restore and recover the database from backup.
40.652 – INASPAREA
Live area <str> points to a snapshot area that is inactive. Explanation: The FILID entry for each live area contains a pointer to the FILID entry for the snapshot area associated with the live area. The snapshot area associated with the named live area is not an active area. User Action: Restore and recover the database from backup.
40.653 – INCAPPAREA
incremental restore of <str> to <time> has already been done Explanation: The database area has already been incrementally restored to a time beyond the updates in this backup file. User Action: No action is required. You may want to delete this incremental backup file since it is no longer needed.
40.654 – INCAPPLIED
incremental restore to <time> has already been done Explanation: The database has already been incrementally restored to a time beyond the updates in this backup file. User Action: No action required. You may want to delete this incremental backup file since it is no longer needed.
40.655 – INCFILESPEC
"<str>" is an incomplete file specification Explanation: All parts of a file specification are not present. The device, directory, file name, file type, and version number must all be specified. User Action: Try again using a complete file specification.
40.656 – INCNSTFLG
area <str> is marked inconsistent. Explanation: The inconsistent flag is set for the specified area. This flag is set when a by-area restore operation is executed (the Area qualifier was specified with an RMU Restore command) and the RMU Restore command determines that the area needs to be recovered before it can be made consistent with the rest of the data in the database. User Action: Make the database consistent, using the RMU Recover command.
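For example, assuming the most recent AIJ backup file is named PERS_AIJ.AIJ, a sketch of the recovery command follows (additional qualifiers, such as Root, may be needed in your configuration; see the RMU Recover help):

$ RMU/RECOVER PERS_AIJ.AIJ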
40.657 – INCONEXECS
Executors are inconsistent on the '<str>' attribute. Explanation: Some executors specified the indicated attribute and other executors omitted the attribute. The attribute must be specified by all executors or by none of the executors. User Action: Modify the executor definitions to be consistent.
40.658 – INCVRPCNT
Vertical partition information is incorrect for dbkey <num>:<num>:<num>. Found <num> pointers for vertical partitions when expecting <num>. Explanation: The first partition of a vertically partitioned record contains pointers to all vertical partitions of this record. This error is displayed when the number of pointers to vertical partitions in the specified dbkey does not match the number expected. User Action: Restore and recover the page containing the bad data.
40.659 – INDDISABL
Index <str> is disabled. It will not be verified. Explanation: The index identified in the error message was listed as one of the indices to be verified in the Indexes qualifier to the RMU Verify command, but the index has been marked as disabled. Disabled indices are never verified, because it is inconsequential if they are corrupt. User Action: When listing indices with the Indexes qualifier, do not list disabled ones.
40.660 – INDNOTFND
index <str> does not exist in this database Explanation: There are no indexes with the given name. User Action: Use the SQL SHOW INDEX statement to see which indices exist.
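For example, the existing indices can be listed from interactive SQL (the attach specification is illustrative):

$ SQL
SQL> ATTACH 'FILENAME MF_PERSONNEL';
SQL> SHOW INDEXES;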
40.661 – INDNTREAD
cannot ready the RDB$INDICES logical area. Indices cannot be verified Explanation: A corrupted storage area, logical area, or index was found. User Action: Rebuild the indices and verify again.
40.662 – INDTOOLONG
indirect command is too long (greater than 511 characters) Explanation: The indirect command you entered is too long (greater than 511 characters). User Action: Enter a shorter RMU command, by either using abbreviations or shorter names.
40.663 – INSFRPGS
physical memory has been exhausted Explanation: Physical memory has been exhausted on the machine, typically because of an excessive number of cache global sections, or excessively large cache global section sizes. User Action: If possible, increase the amount of physical memory on the machine. Reduce the number of cache global sections, or reduce the size of each active cache global section. Moving a cache from the SSB to VLM may also solve this problem. It might be necessary to delete some caches to alleviate this problem. Also, re-configuring the operating system parameters may be necessary to reduce physical memory consumption.
40.664 – INTEGDBDIF
Database filespec must equate to filespec <str> recorded in CDD Explanation: The INTEGRATE database file specification and the file specification recorded with the repository definitions found at the specified path name refer to different databases. User Action: Reissue the command with a file specification that corresponds to the database referenced in the repository, or with a different path name.
40.665 – INTEGFRFAIL
attempt to INTEGRATE FROM nonexistent CDD entity <str> Explanation: The repository entity required for INTEGRATE FROM does not exist. User Action: Respecify a path that points to an existing repository entity.
40.666 – INTERR
Network error: Internal error. Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.667 – INVALFILE
inconsistent database file <str> Explanation: This database file is inconsistent with the root file. This might happen if you have used any unsupported methods for backing up or restoring files; for instance, if you used the DCL COPY command or the DCL RENAME command. This can also happen if you tried to use an old root file. User Action: Restore your database from the last backup and roll forward your transactions using the appropriate AIJ file.
40.668 – INVALLOC
Unable to allocate tape device <str>. Explanation: An error was encountered allocating a tape. User Action: Examine the secondary message or messages. Correct the error and try again.
40.669 – INVALREQ
RMU command is invalid - type HELP for information Explanation: A command was specified which is not known to the process. User Action: Type HELP RMU and verify that the command is valid.
40.670 – INVAMBIG
invalid or ambiguous qualifier "<str>" Explanation: A qualifier is incorrect, misspelled, or abbreviated to the point of making it ambiguous. User Action: Use HELP or the documentation to determine the desired qualifier, and try again.
40.671 – INVBACFIL
<str> is not a valid backup file
40.672 – INVBLKHDR
invalid block header in backup file Explanation: The header of a block of the backup file does not have a valid format. User Action: None.
40.673 – INVBLKSIZ
invalid block size in backup file Explanation: The size of a block in the backup file conflicts with the size specified when the file was written. User Action: None.
40.674 – INVCTXHNDL
invalid context handle specified. Explanation: The context handle specified is not valid. User Action: This error should only occur through a programming error. Correct the error, and try the request again.
40.675 – INVDBBFIL
invalid backup file <str> Explanation: The specified file is not a valid database backup file.
40.676 – INVDBHNDL
invalid database handle specified. Explanation: The database handle specified in a statistics API request is not valid. User Action: This error should only occur through a programming error. Correct the error, and try the request again.
40.677 – INVDBK
<num>:<num>:<num> is not a valid dbkey Explanation: An attempt was made to fetch a record by its database key value, but the specified page is a SPAM, ABM, or AIP. Alternatively, the specified dbkey refers to a non-existing storage area or a system record. User Action: Correct the condition, and try again.
40.678 – INVDBSFIL
inconsistent storage area file <str> Explanation: The indicated storage area file is inconsistent with the root file. This might happen if you have improperly used any unsupported methods for backing up or restoring files (for instance, COPY or RENAME). This can also happen if you tried to use an old root file -- one whose storage area file names have been re-used for another database. User Action: Restore the correct storage-area file or delete the obsolete root file.
40.679 – INVDEFRUJ
Default RUJ filename "<str>" does not contain a valid device/directory Explanation: The default recovery-unit journal filename specified in the root file does not contain a valid device and/or directory. This may be detected during an RMU Convert of a database. In order for the conversion to complete it is necessary for RMU to clear the default recovery-unit journal filename field in the root file. User Action: After the conversion has completed, use the SQL ALTER DATABASE statement to set a new recovery-unit journal filename.
40.680 – INVDELTEX
Invalid delimited text specification. Explanation: The delimited text specification must be sufficient to identify the individual fields of each record. User Action: Reissue the command with either the suffix not null, or with both the separator and the terminator not null.
40.681 – INVDEPOS
deposit not allowed to that field Explanation: An attempt was made to deposit to an aggregate field; for example, the entire page "*", or the page header. User Action: This is not allowed. RMU ALTER DEPOSIT statements are only allowed for simple fields; for example, FREE_SPACE or LINE 1 LENGTH.
40.682 – INVDEVTYP
invalid backup device type <str> Explanation: The requested list of backup devices contains a device that either is not supported or is incompatible with the others. User Action: Correct and reissue the command.
40.683 – INVDISPL
unable to display numerics in lengths other than 1, 2, and 4 Explanation: An RMU ALTER DISPLAY DATA statement specified a length other than BYTE, WORD, or LONGWORD. User Action: Correct the error and try again.
40.684 – INVEXECSTART
Invalid invocation of an executor process. Explanation: An executor process was started incorrectly. User Action: Do not attempt to start an RMU parallel executor process by hand. If this message occurred when an executor was started automatically, contact your Oracle support representative for assistance.
40.685 – INVEXECVERSION
Invalid executor process version. Explanation: The version of the executor process is incompatible with the version of the RMU process that initiated the parallel operation. User Action: Reinstall Oracle Rdb. If problems still persist, contact your Oracle support representative for assistance.
40.686 – INVFILATR
possibly use SET FILE/ATTRIBUTE=(RFM:FIX,LRL:32256) on this backup file
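For example, for a backup file named PERS_BACKUP.RBF (an illustrative name), the file attributes could be corrected with:
$ SET FILE/ATTRIBUTE=(RFM:FIX,LRL:32256) PERS_BACKUP.RBF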
40.687 – INVFILEXT
invalid file extension linkage in <str> Explanation: The relative volume number of the sequential disk file does not match the expected backup volume number. User Action: Take care to mount the sequential disk volumes in the correct order.
40.688 – INVFILLRL
possibly use SET FILE/ATTRIBUTE=(LRL:<num>) on this backup file
40.689 – INVFUNCCODE
Invalid function code specified. Explanation: An invalid function code was passed to the RMU shareable image. User Action: Contact your Oracle support representative for assistance.
40.690 – INVHANDLE
Network error: Invalid handle. Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.691 – INVHEADER
invalid file header record Explanation: An invalid file header record was read from the file. User Action: Check the file specification and try again.
40.692 – INVIDLEN
length of <str> "<str>" (<num>) is outside valid range (<num>..<num>) Explanation: The supplied name for the specified object is outside the valid range for the object. User Action: Respecify the RMU command using a name with a length within the required range.
40.693 – INVJOBID
Network error: Invalid job ID. Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.694 – INVLAREANAM
An invalid logical area name parameter has been specified Explanation: An invalid logical area name has been specified. User Action: Correct the error and try again.
40.695 – INVLINNUM
<str>, page <num>, line <num> Line number for pointer segment is bad. Explanation: The dbkey contains a line number that is invalid. Either the line is not in use or the line number is greater than the number of lines on the page. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement, and verify the database again.
40.696 – INVMARKOP
invalid markpoint commit/rollback for TSN <num>:<num>, MARK_SEQNO <num> Explanation: A fatal, unexpected error was detected by the database management system during the commit or rollback of a markpoint. This message indicates the "transaction sequence number" of the transaction and the "markpoint sequence number" of the markpoint that has to be committed or rolled back. User Action: Contact your Oracle support representative for assistance.
40.697 – INVMOUNT
Unable to mount tape device <str>. Explanation: An error was encountered mounting a tape. User Action: Examine the secondary message or messages. Correct the error and try again.
40.698 – INVNTFYCLS
Invalid operator notification class specified Explanation: The operator class specified with the notify qualifier is not valid for this platform. All operator classes are valid on OpenVMS platforms, while only the console operator class is valid on other platforms. User Action: Correct the error and try again.
40.699 – INVOBJNAME
"<str>" contains a character which may be handled incorrectly. Explanation: The object name contains a character that may not be handled correctly. User Action: Specify SQL or ANSI_SQL with the Language qualifier and FULL with the Options qualifier.
40.700 – INVOPTION
<str> is an invalid option Explanation: The option specified for the Option qualifier is incorrect or misspelled. The valid options are Normal, Full, and Debug. User Action: Correct the error and try again.
40.701 – INVPAGAREA
Aborting command - invalid page <num> in area <str> Explanation: An invalid page has been detected in the named storage area causing the execution of the current command to be aborted. User Action: Run RMU/VERIFY to get more information on the corrupt page. The database administrator should create a replacement storage area and use ALTER or DROP commands to move other tables and indices out of the affected storage area. Then use DROP STORAGE AREA to remove the unused area. Alternatively, you can use the SQL EXPORT DATABASE and IMPORT DATABASE commands to rebuild the whole database. Then execute the command that was aborted.
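For example, to get more information on the corrupt page for a database named MF_PERSONNEL (an illustrative name):
$ RMU/VERIFY MF_PERSONNEL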
40.702 – INVPAGPRM
allocation parameter <num> overflows when rounded Explanation: The allocation parameter selected, though it may be an allowed value, becomes illegal when rounded to make an even number of pages in the storage area; the number of pages actually allocated is always a multiple of the number of pages per buffer. User Action: Select a smaller allocation parameter.
40.703 – INVPROCEED
Procedure <str> is invalid due to database changes. Explanation: This stored procedure has been invalidated due to changes in the database objects referenced by the procedure. User Action: Alter the procedure definition so that it can access the database correctly.
40.704 – INVQSTR
Invalid quoted string (<str>). Explanation: A quoted string value is not delimited by quotes. User Action: Provide a quoted string delimited by double quotes.
40.705 – INVRECEXP
Error expanding compressed backup file record. Explanation: An error occurred while expanding a compressed record in the backup file. User Action: Back up the database without compression enabled. Submit a bug report.
40.706 – INVRECSIZ
invalid record size in backup file Explanation: The size of a record in the backup file conflicts with the blocking of records in the file. User Action: None.
40.707 – INVRECTYP
invalid record type in backup file Explanation: The backup file contains unsupported types of data records. User Action: None.
40.708 – INVRELID
invalid relation id at dbkey <num>:<num>:<num> expected relation id <num>, found <num> Explanation: The page contains a record with an invalid relation id. User Action: None.
40.709 – INVRENAMETO
An invalid RENAME_TO logical area name has been specified Explanation: An invalid RENAME_TO name has been specified. User Action: Correct the error and try again.
40.710 – INVREQHDR
invalid statistics request header. Explanation: The statistics header structure passed in a call to rmust_database_info was corrupted, never initialized, or contains an illegal request code. User Action: Correct the user program and try the request again.
40.711 – INVRNG
invalid range, start greater than end, START=<num>, END=<num> Explanation: In a MOVE statement, the starting page offset was greater than the ending page offset. User Action: Correct the error and try again.
40.712 – INVSETBIT
maximum set bit index <num> on ABM page <num> exceeds the total of <num> SPAM page(s) for area <str> Explanation: The maximum set bit field of the ABM page is probably corrupt. Because every bit in the ABM bitvector points to a SPAM page, the maximum set bit cannot exceed the total number of SPAM pages in the area. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.713 – INVSTR
strings are only allowed in DATA and TIME_STAMP statements Explanation: A string was specified on a statement other than a DATA or TIME_STAMP statement. User Action: You can only specify strings on DATA and TIME_STAMP statements.
40.714 – INVSTRUCT
invalid structure level in backup file Explanation: The structure level version specified in a block header in the database backup file indicates that the backup file was created with a format that is incompatible with this release of Oracle Rdb RMU, that the backup file is corrupt, or that there has been an error reading the backup file. User Action: Dump the backup file using RMU/DUMP/BACKUP to see if there are any errors reading the backup file. Repeat the RMU/RESTORE command after moving the backup file to a different device to see if the problem is related to a particular device. If the problem still occurs, repeat the backup to create a new backup file and then retry the restore. If the problem still occurs, contact Oracle Rdb support.
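For example, assuming a backup file named PERS_BACKUP.RBF (an illustrative name), the backup file could be dumped with:
$ RMU/DUMP/BACKUP PERS_BACKUP.RBF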
40.715 – INVTAPENAM
Tape devices <str> must end with a colon (:) character. Explanation: A tape device name was specified without being terminated with the colon (:) character. User Action: Correct and reissue the command.
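For example, the following command names the tape device with the required trailing colon (the device, database, and file names are illustrative):
$ RMU/BACKUP MF_PERSONNEL MUA0:PERS.RBF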
40.716 – INVVRPNDX
Invalid vertical partition number in <num>:<num>:<num>. Entry <num> is for partition <num> and partition count is <num>. Explanation: The primary partition has an array of dbkeys for every vertical partition of the record. Each dbkey is tagged with the number of the partition it represents. This error indicates that the tag for the specified entry in the array was for a partition number which is larger than the number of partitions for the record. User Action: Restore and recover the page of the primary dbkey from backup.
40.717 – INVVRPUSE
Multiple references for vertical partition <num> in primary segment in dbkey <num>:<num>:<num>. Explanation: The primary partition has an array of dbkeys for every vertical partition of the record. Each dbkey is tagged with the number of the partition it represents. This error indicates that multiple dbkeys are tagged with the same partition number specified in the message. User Action: Restore and recover the page of the primary dbkey from backup.
40.718 – INV_ROOT
database file has illegal format Explanation: You attempted to use a file that is not a database file. User Action: Check the file specification and try again.
40.719 – IOCTLERR
Network error: Error on ioctl. Explanation: An error was encountered on the Digital UNIX ioctl system service call. User Action: Contact your Oracle support representative for assistance.
40.720 – IVCHAN
invalid or unknown I/O channel Explanation: The channel number cannot be located in the database information. User Action: Contact your Oracle support representative for assistance.
40.721 – IVORDER
order of ACEs being modified is incorrect for object <str> Explanation: The ACEs that are to be replaced can not be found in the ACL in the specified order. User Action: Correct the command and try again.
40.722 – JOB_DONE
Network job has completed Explanation: The executor has completed the requested job. User Action: None.
40.723 – LABELERR
error in tape label processing on <str> Explanation: An error was encountered in the ANSI tape label processing. User Action: If you are attempting an RMU RESTORE operation, mount the correct tape. If you are attempting an RMU BACKUP operation, reinitialize this tape.
40.724 – LAREANAMEDIFF
Logical area "<str>" name must match name of other specified logical areas Explanation: For the specified options, all selected logical areas must have the same name. User Action: Correct the error and try again.
40.725 – LAREAONLYWILD
Logical area name "<str>" contains only wildcard characters Explanation: The logical area name cannot contain only wildcard characters. User Action: Correct the error and try again.
40.726 – LAREATYPEDIFF
Logical area "<str>" type must match type of other specified logical areas Explanation: For the specified options, all selected logical areas must be of the same record type. User Action: Correct the error and try again.
40.727 – LASTCMTSNINV
Last Committed TSN (<num>:<num>) is higher than the TSN that will be assigned next (<num>:<num>). Explanation: The Last Commit TSN is higher than the KODA sequence number that will be assigned as the next TSN. This indicates that the root file is not consistent.
40.728 – LCKCNFLCT
lock conflict on <str> Explanation: The operation you attempted failed because another run unit is holding a lock in a mode that conflicts with the lock mode you needed. User Action: Wait for the other run unit to finish. Use ROLLBACK or COMMIT to release all your locks and retry the transaction, or specify that you want to wait on lock conflicts.
40.729 – LCNTZERO
line index entry count is zero - this is invalid Explanation: An RMU ALTER DISPLAY or DEPOSIT LINE m command has been issued for a page with no lines. User Action: This is an invalid page; each mixed format page must contain at least a SYSTEM record. Create a SYSTEM record for the page using DEPOSIT DATA commands.
40.730 – LCSNOOUT
AIJ Log Catch-Up Server does not have an output file Explanation: The AIJ Log Catch-Up Server process does not have an output file associated with it. User Action: Use the /OUTPUT qualifier to specify an output filename when database replication is started on the master database.
40.731 – LDXOFFPAG
<str>, page <num> Line index is larger than free space on the page. Explanation: The page contains a line index that is larger than the free space on the page. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement, and verify the database again.
40.732 – LIBINTERR
Internal error returned from the LIBRARIAN media manager. Explanation: This indicates an internal error returned from a call to the LIBRARIAN media management interface. User Action: Contact your Oracle support representative for assistance.
40.733 – LIKETOMATCH
LIKE operator converted to RDO MATCHING operator - possible inconsistent wildcard operators Explanation: RDO uses the MATCHING operator, which uses different wildcard characters than the SQL LIKE operator. There may be differences in execution. User Action: Examine the usage of the object, and change the wildcards to be those of RDO.
40.734 – LINTOOLONG
Line in record definition file exceeds 1024 characters in length Explanation: A line from the record definition file exceeds the maximum length. User Action: Edit the file to fix the problem.
40.735 – LMCBRETERR
Callback routine "<str><str>" returned status <num>
40.736 – LMCRAFTCB
CALLBACK_ROUTINE must follow CALLBACK_MODULE at line <num>: "<str>"
40.737 – LMCRORCBREQ
OUTPUT or CALLBACK_MODULE/CALLBACK_ROUTINE must be specified at line <num>: "<str>"
40.738 – LMCTLANDRRD
Cannot specify both CONTROL and RECORD_DEFINITION at line <num>: "<str>"
40.739 – LMCTLREQTXT
CONTROL table option requires TEXT format at line <num>: "<str>"
40.740 – LMMFCHK
Metadata file "<str>" incorrect format
40.741 – LMMFRDCNT
Read <num> objects from metadata file "<str>"
40.742 – LMMFVER
Metadata file "<str>" version <num>.<num> expected <num>.<num>
40.743 – LMMFWRTCNT
Wrote <num> objects to metadata file "<str>"
40.744 – LMNOENABLED
LogMiner has not yet been enabled Explanation: The LogMiner feature has not been enabled on this database. User Action: Enable LogMiner on the database before using LogMiner features.
40.745 – LMOPTNOTBL
No "TABLE=" at line <num>: "<str>"
40.746 – LMOPTOORCB
Only OUTPUT or CALLBACK_MODULE/CALLBACK_ROUTINE allowed at line <num>: "<str>"
40.747 – LNGTRLNDX
line <num> beyond line index on page Explanation: An attempt was made to fetch a line on a page. The fetch failed because the line index on that page has no entry for the requested line. The problem is most likely caused by an invalid or corrupt pointer in an index. User Action: Rebuild the index if the line is referenced from an index.
40.748 – LOADTEMPTAB
Data cannot be loaded to a temporary table. Explanation: A temporary table cannot be specified for the RMU Load command. User Action: Check that the table is not defined in the database as a global or local temporary table. The table must be defined as a non-temporary table to be able to load the table's data.
40.749 – LOATXT_10
<str>
40.750 – LOATXT_8
Start <num> executor(s) for GSD "<str>" on Database "<str>".
40.751 – LOATXT_9
Message from <str>:
40.752 – LOCKACL
Error locking or unlocking root ACL Explanation: The lock (unlock) operation on the root file failed. The reason for the failure is given in the secondary error message. User Action: Correct the source of the failure and try again.
40.753 – LOGADDCCH
added record cache definition "<str>" Explanation: A new record cache definition has been successfully added to the database.
40.754 – LOGAIJBCK
backed up <num> <str> transaction(s) at <time> Explanation: The specified number of committed or rolled-back transactions were successfully backed up from the after-image journal file.
40.755 – LOGAIJBLK
backed up <num> after-image journal block(s) at <time> Explanation: The specified number of blocks were successfully backed up from the after-image journal file. User Action: No user action is necessary.
40.756 – LOGAIJJRN
backed up <num> after-image journal(s) at <time> Explanation: The specified number of after-image journals were successfully backed up during the AIJ backup operation. User Action: No user action is necessary.
40.757 – LOGCOMPR
data compressed by <num>% (<num> <str> in/<num> <str> out) Explanation: This informational message reports the compression achieved during the operation. User Action: None.
40.758 – LOGCREOPT
created optimized after-image journal file <str> Explanation: This message indicates the action taken on a specific file. User Action: No user action is required.
40.759 – LOGDELAIJ
deleted temporary after-image journal file <str> Explanation: This message indicates the action taken on a specific file.
40.760 – LOGDELCCH
deleted record cache definition "<str>" Explanation: A record cache definition has been successfully deleted from the database.
40.761 – LOGMODCCH
modifying record cache definition "<str>" Explanation: The parameters for a record cache are in the process of being modified.
40.762 – LOGMODSPM
modified <num> spam page(s) Explanation: None.
40.763 – LOGMODSTO
modifying storage area <str> Explanation: The parameters for a storage area are in the process of being modified.
40.764 – LOGRECOVR
<num> transaction(s) <str> Explanation: Database recovery was successful. The specified number of transactions were re-applied to the database, ignored, or rolled back. User Action: None.
40.765 – LOGRECSTAT
transaction with TSN <num>:<num> <str> Explanation: During roll-forward, this message is displayed every time a transaction is committed, rolled back, or ignored.
40.766 – LOGRESOLVE
blocked transaction with TSN <num>:<num> <str> Explanation: When a blocked transaction is resolved, this message is displayed every time the transaction is committed, rolled back, or ignored.
40.767 – LOGSUMMARY
total <num> transaction(s) <str> Explanation: Database recovery was successful. The specified total number of transactions were re-applied to the database, ignored, or rolled back. User Action: No user action is required.
40.768 – LOOKUP
error searching for file <str> Explanation: An error occurred during an attempt to find the indicated file. User Action: Examine the secondary message or messages. Correct the error and try again.
40.769 – LRSABORTED
AIJ Log Roll-Forward Server terminated abnormally Explanation: The LRS process has failed. User Action: Examine the database monitor log file and any SYS$SYSTEM:*LRSBUG.DMP bugcheck dump files for more information.
40.770 – LRSNOOUT
AIJ Log Roll-Forward Server does not have an output file Explanation: The AIJ Log Roll-Forward Server process does not have an output file associated with it. User Action: Use the /OUTPUT qualifier to specify an output filename when database replication is started on the standby database.
40.771 – LRSSHUTDN
AIJ Log Roll-Forward Server being shutdown Explanation: The LRS process is in final phase of being shutdown. User Action: Try starting Hot Standby after the AIJ Log Roll-Forward server has completely stopped execution.
40.772 – LSSMAXFLT
The float value is smaller than the minimum allowed. Explanation: The float value specified for the named option is too small. User Action: Use a float value that is greater than the minimum value and try again.
40.773 – LVAREASNP
FILID entry <num> is a snapshot area whose live area is a snapshot area. Explanation: The FILID entry for each snapshot area contains a pointer to the FILID entry for the live area associated with the snapshot area. The live area associated with the named snapshot area is not a live area. User Action: Restore and recover the database from backup.
40.774 – MATCHTOLIKE
MATCHING operator converted to SQL LIKE operator - possible inconsistent wildcard operators Explanation: SQL uses the LIKE operator, which uses different wildcard characters than the RDO MATCHING operator. There may be differences in execution. User Action: Examine the usage of the object, and change the wildcards to be those of SQL.
40.775 – MAXGTRSIZ
ending page <num> greater than last page <num> of area last page in area used as ending page Explanation: The upper limit of a page range is greater than the maximum page number in the area. Therefore, the ending page is taken as the last page in the database area. User Action: This is not an error; however, there may have been a typographic error in the command line.
40.776 – MAXVOLS
too many volumes in volume set <str> Explanation: The volume set is larger than the maximum supported (999). User Action: A backup using more tapes is not supported. Use "by area" backups as an alternative.
40.777 – MBZFREESP
area <num>, page <num> should contain <num> byte(s) of free space starting at offset <num> Explanation: The page in the indicated area contains a free space that should be zero, but is not. User Action: Correct the error with the RMU Restore command and verify the database again.
40.778 – MBZFRESPC
page <num> should contain <num> byte(s) of free space starting at offset <num> Explanation: The page contains a free space that should be zero, but is not. User Action: Correct the error with the RMU Restore command and verify the database again.
40.779 – MFDBONLY
operation is not allowed on single-file databases Explanation: An attempt was made to modify a single-file database in such a way that the root file would need to be expanded. This type of change is not permitted for single-file databases. Examples of database modifications that cause this error include the following: reserving after-image journals or storage areas, adding or deleting storage areas, or changing the number of nodes or users. User Action: If you want to create a multifile database from a single-file database, use the EXPORT and IMPORT statements.
40.780 – MINGTRMAX
starting page <num> greater than ending page <num> area verification is skipped Explanation: The lower limit of the verification page range is greater than the higher limit of the page range. Therefore, no pages are verified in the storage area. User Action: Supply a valid page range, and try the verification again.
40.781 – MINGTRSIZ
starting page <num> greater than last page <num> of area area verification is skipped Explanation: The lower limit of a page range is greater than the maximum page number in the area. Therefore, no page in the area will be verified. User Action: Supply a valid page range, and try the verification again.
40.782 – MISMMORDD
missing either month or day field in date string Explanation: The date string must contain the month and day field. Otherwise, it will not be converted to the DATE data type. User Action: Re-enter the DATE data item and specify both the month and day fields.
40.783 – MISSINGPARAM
Network error: Required parameter missing. Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.784 – MISSVAL
required value missing for <str> Explanation: The indicated parameter requires a value. User Action: Specify a value for the named parameter and try again.
40.785 – MONFLRMSG
failure message received from the monitor Explanation: An error occurred in the monitor process. The user process received the error message. User Action: Examine the monitor log on the node where the user program was running to see messages relating to the monitor error.
40.786 – MONITOR_SYNC
Could not synchronize with database monitor Explanation: The server needs to synchronize with a database monitor but it cannot. User Action: Make sure that the database monitor on the server system is running successfully. If appropriate, have the database monitor and server restarted.
40.787 – MONMBXOPN
monitor is already running Explanation: The monitor has already been started by another user. User Action: No action is required.
40.788 – MONSTOPPED
database monitor process is being shut down Explanation: The request you made could not be completed because the database monitor process is being shut down. User Action: Examine the database monitor log file (SYS$SYSTEM:*MON.LOG) for more information.
40.789 – MOUNTFOR
<str> must be mounted foreign Explanation: The device on which the backup file resides was not mounted as a foreign volume. User Action: Mount the device as a foreign volume and reissue the command.
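For example, a tape on device MUA0: (an illustrative device name) could be mounted as a foreign volume with:
$ MOUNT/FOREIGN MUA0: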
40.790 – MTDSUPPORT
The specified density cannot be translated to an equivalent multiple tape density value Explanation: A multiple tape density value has not been specified for this tape device even though multiple tape density values are supported for this device. Since the specified tape density value cannot be translated to an equivalent multiple tape density value, a multiple tape density value will not be used for this device. User Action: Consider specifying a multiple tape density value for this device using the DATA_FORMAT qualifier instead of the DENSITY qualifier or specify a value with the DENSITY qualifier that can be translated to a multiple tape density value.
40.791 – MUSTBETXT
Only TEXT fields allowed for delimited text input Explanation: The .rrd file describes a field which is not of type TEXT. User Action: Create a new .rrd file with fields of type TEXT.
40.792 – MUSTRECDB
database must be closed or recovered Explanation: The operation can be done only on databases that are closed and recovered. User Action: Use the CLOSE command if an OPEN was performed. Recovery, if required, can be forced by opening the database. If other users are accessing the database, you must wait for them to finish.
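For example, assuming a database named MF_PERSONNEL (an illustrative name), the database could be closed, and recovery forced by reopening it:
$ RMU/CLOSE MF_PERSONNEL
$ RMU/OPEN MF_PERSONNEL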
40.793 – MUSTSPECIFY
Must specify <str> Explanation: You have entered a command without a required option. User Action: Reissue the command, specifying the missing option.
40.794 – NAMTRUNC
File name truncated, <str> Explanation: The names of files that are placed on tape are limited to a maximum of 17 characters. User Action: Use a file name of 17 characters or less.
40.795 – NDXNOTVER
error getting index information from system relations, indices will not be verified, continuing database verification Explanation: During an attempt to read information about the indexes from the system tables, errors were encountered that make it impossible to verify the indexes. User Action: Rebuild the indexes and verify the database again.
40.796 – NEEDLAREA
A logical area name or logical area id must be specified Explanation: This option requires a logical area name or logical area id. It cannot be implemented for all logical areas. User Action: Correct the error and try again.
40.797 – NEEDSNOEXTEND
area <str> already has <num> page(s) Explanation: The extension of the specified storage area was trivial, as the number of pages was already the same. User Action: Check the page count of the current area allocation if it should differ.
40.798 – NETACCERR
error <str> <str> network <str> Explanation: A network-access error occurred. User Action: Examine the secondary message for more information.
40.799 – NETERROR
Network error: <str> Explanation: A network-related error has occurred. User Action: None.
40.800 – NOACTION
This command requires an action to be specified Explanation: No action parameters are specified for this command. User Action: Correct the error and try again.
40.801 – NOAIJDEF
no default after-image filename available Explanation: A default after-image journal file name cannot be formed, because no journal file name is presently in the database root. User Action: Enable after-image journaling, and supply a name for the after-image journal.
40.802 – NOAIJENB
cannot enable after-image journaling without any AIJ journals Explanation: An attempt was made to enable AIJ journaling although no AIJ journals exist. User Action: Create one or more AIJ journals BEFORE enabling AIJ journaling.
40.803 – NOAIJREM
cannot remove AIJ journal without disabling journaling first Explanation: An attempt was made to remove the last AIJ journal. The last AIJ journal may be removed only IF AIJ journaling has been previously disabled. User Action: Disable AIJ journaling first.
40.804 – NOAIJSERVER
database replication server is not running or running on other node Explanation: The database replication server process is not running on the specified standby node, or has abnormally terminated. There may also be a problem with the mailbox used to communicate with the database replication server. User Action: Check the system to determine whether or not the database replication server process is actually running on your system. Check the use of cluster aliases, as the replication connection may have been attempted on another node of the designated cluster. If the database replication server process does not appear to be running, have your database administrator start the replication server, and try again. If the database replication server process appears to be running properly, then the problem may be related to the mailbox by which user processes communicate with the replication server process. Make sure the "server name" logical specified for both the live and standby databases is unique and identical. On VMS platforms, the "server name" is used to create a logical of the same name that resides in a logical name table accessible to all users, typically the LNM$PERMANENT_MAILBOX name table. If the replication server process abnormally terminated, a bugcheck dump will normally be produced. Search the bugcheck dump for a string of five asterisks (*****) using the SEARCH/WINDOW command. You will see a line with a format similar to this: ***** Exception at <address> : <database module name> + <offset> %facility-severity-text, <error text> The exception line will be followed by one or more additional errors that will help you to determine what caused the replication server process to fail. Typically, the problem is caused by insufficient quotas or system resources. However, other possible causes include misdefined or undefined filename logicals. Depending on the cause of the problem, take the appropriate action. If you are uncertain of what to do, contact your Oracle support representative for assistance.
40.805 – NOAIJSLOTS
no more after-image journal slots are available Explanation: The number of after-image journals that can be created is "reserved" in advance. An attempt has been made to create more journals than the number reserved. User Action: Either remove an existing AIJ file or reserve more AIJ slots before creating additional journals.
40.806 – NOAREAMATCH
No storage areas were found on <str> Explanation: No storage areas were found on the specified disk or directory. User Action: Verify the location of the storage areas, and enter the command again.
40.807 – NOAREASLOTS
no more storage area slots are available Explanation: The number of storage areas that can be created is "reserved" in advance. An attempt has been made to create more storage areas than the number reserved. User Action: Either delete an existing storage area, or reserve more storage area slots before creating the area(s).
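For example, additional storage area slots can be reserved with interactive SQL (the database name below is illustrative; this operation requires exclusive access to the database):
$ SQL
SQL> ALTER DATABASE FILENAME MF_PERSONNEL
cont>    RESERVE 10 STORAGE AREAS;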
40.808 – NOAREASMOVED
No areas were moved. Explanation: The RMU Move_Area command did not specify any areas to be moved. User Action: Specify the areas that need to be moved using either an explicit parameter, an options file, the All_Areas qualifier or the Root qualifier.
40.809 – NOASCLEN
ASCII length not allowed on DEPOSIT Explanation: The RMU ALTER DEPOSIT command does not allow the ASCII length to be specified. User Action: Issue an RMU ALTER DEPOSIT command without the ASCII length specified.
40.810 – NOATTACH
command not allowed - not currently attached to a database Explanation: A command has been issued which requires that the user be attached to a database. User Action: Issue an RMU ALTER ATTACH command to attach (bind) to a database.
40.811 – NOAUDITSERVER
VMS AUDIT_SERVER process is not running on this system Explanation: You attempted to generate a database audit record; however, the VMS AUDIT_SERVER process is not running on this system. User Action: Restart the VMS AUDIT_SERVER process.
40.812 – NOAUTOREC
Initialization failure for automatic AIJ recovery Explanation: This configuration cannot perform automatic recovery from the after-image journals known to the database. User Action: Determine the cause of the failure from the other messages provided and correct the problem. Manual recovery can be performed instead.
40.813 – NOBTRNODE
B-tree node not found at dbkey <num>:<num>:<num> Explanation: A B-tree index node was expected at the given dbkey, but was not found. The pointer to the duplicate node in the B-tree is probably corrupt. User Action: Ascertain if the index is corrupt by manually verifying related index nodes after dumping pages of the database. If the index is corrupt, rebuild it.
40.814 – NOCCHSLOTS
no more record cache slots are available Explanation: The number of record caches that can be added is "reserved" in advance. An attempt has been made to add more record caches than the number reserved. User Action: Either delete an existing record cache definition or reserve more slots before creating the cache(s).
40.815 – NOCEGTRRC
For segmented strings COMMIT_EVERY must be a multiple of ROW_COUNT, setting ROW_COUNT equal to COMMIT_EVERY value of <num>. Explanation: Data containing segmented strings cannot be loaded if the value specified for COMMIT_EVERY exceeds the ROW_COUNT value and the value specified for COMMIT_EVERY is not a multiple of the ROW_COUNT value. ROW_COUNT is set equal to the value of COMMIT_EVERY and the load continues. User Action: If the table being loaded contains segmented string fields and the value of COMMIT_EVERY is greater than the value of ROW_COUNT, specify a value for COMMIT_EVERY that is a multiple of the value of ROW_COUNT.
40.816 – NOCHAR
no character after '' in pattern Explanation: A MATCH operation was in progress and the pattern was exhausted with the pattern quote character as the last character in the pattern. User Action: Rewrite the expression in error to have the proper format.
40.817 – NOCHARSET
The character set of <str>.<str> is <str>. It may be ignored. Explanation: Neither RDO nor SQL89 has any syntax for specifying the character set of a field. When the character set of a field differs from the database default character set, it will be treated as the database default character set. User Action: Specify SQL or ANSI_SQL with the Language qualifier and specify the Options=Full qualifier.
40.818 – NOCLSAREA
attempted to verify a hash index in a storage area that is not of mixed area type. Hash index is <str> and storage area is <str>. Explanation: A hash index must be stored in a storage area of mixed type. Probably the mixed area flag in the FILID is not set, but should be. User Action: Rebuild the index.
40.819 – NOCOMBAC
No full and complete backup was ever performed Explanation: The database root file can only be restored from a full and complete (all storage areas included) backup. Without such a backup, error conditions may occur that are unrecoverable. User Action: Perform a full and complete backup of the database.
40.820 – NOCOMMAND
no RMU command specified Explanation: You entered RMU without specifying a command option. User Action: Enter the command with the proper command option.
40.821 – NOCOMPACTION
This tape device does not accept compaction mode - compaction is ignored Explanation: This tape device does not accept the tape compaction mode. The specified compaction mode is ignored for this device. User Action: None - the specified tape compaction mode is ignored.
40.822 – NOCOMPRESSION
RUN LENGTH COMPRESSION for index <str> cannot be defined using RDO - ignored Explanation: RUN LENGTH COMPRESSION for an index cannot be represented using RDO. User Action: Use the Language=SQL qualifier of the RMU Extract command to specify RUN LENGTH COMPRESSION for an index.
40.823 – NOCONFIG
Unable to load configuration file. Explanation: An error occurred while loading the Oracle Rdb configuration file. User Action: Check that Oracle Rdb has been installed correctly.
40.824 – NOCONFIGFILE
Unable to load configuration file. Explanation: An error occurred during loading of the user's configuration file. User Action: Check that the DBSINIT environment variable has been set correctly.
40.825 – NOCREMBX
can't create mailbox Explanation: An error occurred when you attempted to create a mailbox. See the secondary message for more information. User Action: Correct the condition and try again.
40.826 – NOCURPAG
there is no current page - use the PAGE or DISPLAY command Explanation: No page has been established as current since the last ROLLBACK. User Action: Use the RMU ALTER PAGE or DISPLAY commands to establish a current page.
40.827 – NOCVTCOM
Database <str> is already at the current structure level. Explanation: The convert operation cannot be committed. Either the convert operation was already committed or the database was created using the current version of Oracle Rdb. User Action: Use the RMU Show Version command to verify that the correct version of RMU is executing. Use the RMU Verify command to determine if the database has already been converted.
40.828 – NOCVTDB
Database <str> is already at the current version and cannot be converted. Explanation: The database is already at the current version. The database will not be converted. User Action: Use the RMU Show Version command to verify that the correct version of RMU is executing.
40.829 – NOCVTROL
ROLLBACK of CONVERT not possible for <str> Explanation: The convert operation cannot be rolled back. Either no conversion was performed, or the convert operation was already committed or rolled back. Most commonly, this indicates that the database has already been converted or that the wrong version of RMU has been executed. User Action: Use the RMU Show Version command to verify that the correct version of RMU is executing. Use the RMU Verify command to determine if the database has already been converted.
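The checks suggested above can be run from the operating system prompt, for example (MF_PERSONNEL is the sample database used elsewhere in this help; substitute your own database root):
$ RMU/SHOW VERSION
$ RMU/VERIFY MF_PERSONNEL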
40.830 – NODATANDX
no data records in index <str> Explanation: A null dbkey was found as the root dbkey of a B-tree index. It is assumed that there are no records in the relation, and hence, the root dbkey is null. User Action: If you know there are records in the relation, the index is corrupt, and you may rebuild the index. If there are no records in the index, however, this is not an error condition.
40.831 – NODATPNDX
partitioned index <str> in area <str> is empty Explanation: A null dbkey was found as the root dbkey of this partition of a B-tree index. It is assumed that there are no records in this partition, and hence, the root dbkey is null. User Action: If you know there are records in this partition, the index is corrupt, and you can rebuild the index. If there are no records in the index, however, this is not an error condition.
40.832 – NODBK
<num>:<num>:<num> does not point to a data record Explanation: An attempt was made to fetch a record by its database-key value, but the record has been deleted. User Action: Correct the condition and try again.
40.833 – NODELOOKUPERR
Network error: Error looking for node name. Explanation: The named node cannot be found. User Action: Check that the network is running, that the node is accessible from your network, and that the node name is spelled correctly. Correct any error found and retry. If the problem persists, contact your Oracle support representative for assistance.
40.834 – NODEVDIR
filename does not include device and directory Explanation: The file you specified did not include a device and directory. User Action: For maximum protection, you should always include a device and directory in the file specification, preferably one that is different from the database device.
40.835 – NODUPHBKT
duplicate hash bucket not found at dbkey <num>:<num>:<num> Explanation: A duplicate hash bucket was expected at the given dbkey, but was not found. The pointer to the duplicate hash bucket from a primary hash bucket is probably corrupt. User Action: Ascertain if the index is corrupt by manually verifying related system records and hash buckets after dumping pages of the database. If the index is corrupt, rebuild it.
40.836 – NOENTRPT
No entry point found for external routine <str>. Image name is <str>. Entry point is <str>. Explanation: The entry point for an external routine was not found in the image that is supposed to contain the entry point. User Action: Check the image name and the entry point name for the external routine.
40.837 – NOEUACCESS
unable to acquire exclusive access to database Explanation: Exclusive access to the database was not possible. Therefore, the requested database operation was not performed. User Action: Try again later.
40.838 – NOEXTCUR
cannot extract, AIJ file <str> is the current AIJ Explanation: The current AIJ file cannot be extracted. User Action: Use a backup AIJ file.
40.839 – NOEXTLMNOENA
cannot extract - AIJ file <str> does not have LogMiner enabled Explanation: This AIJ file came from a database that did not have LogMiner enabled at the time the AIJ file was created. User Action: No user action is required. This AIJ file cannot be extracted.
40.840 – NOEXTNOQUIET
cannot extract - AIJ file <str> backed up via a no-quiet-point backup Explanation: An AIJ file that was backed up with a no-quiet-point backup cannot be extracted, because a no-quiet-point backup can leave incomplete transactions in an AIJ file. AIJ extraction cannot handle incomplete transactions within an AIJ file. User Action: No user action is required. This AIJ file cannot be extracted.
40.841 – NOEXTOPT
cannot extract - AIJ file <str> is optimized Explanation: An optimized AIJ file cannot be extracted. User Action: No user action is required. This AIJ file cannot be extracted.
40.842 – NOEXTPRVNOQUIET
cannot extract - AIJ file <str> had its previous AIJ file backed up via a no-quiet-point backup Explanation: An AIJ file whose previous AIJ file was backed up with a no-quiet-point backup cannot be extracted. A no-quiet-point backup can leave incomplete transactions in an AIJ file, and AIJ extraction cannot handle incomplete transactions within an AIJ file. User Action: No user action is required. This AIJ file cannot be extracted.
40.843 – NOEXTUNRES
cannot extract, AIJ file <str> has unresolved transactions Explanation: The AIJ file being extracted has unresolved distributed transactions. AIJ extraction cannot handle unresolved transactions, so it must abort. User Action: Use a complete AIJ file with no unresolved distributed transactions.
40.844 – NOFIXCSM
Checksum on corrupt page <num> was not fixed. Explanation: The page that you are marking consistent had an invalid checksum that was not corrected. User Action: Use the RMU Restore command to restore the page to a usable state.
40.845 – NOFULLBCK
no full backup of this database exists Explanation: An incremental backup of a database is not allowed if a full backup has never been made or if changes have been made to the database that require a full backup. User Action: Perform a full backup of the database before attempting an incremental backup.
40.846 – NOHASHBKT
hash bucket not found at dbkey <num>:<num>:<num> Explanation: A hash bucket was expected at the given dbkey, but was not found. The pointer to the hash bucket from the system record is probably corrupt. User Action: Ascertain if the index is corrupt by manually verifying related system records and hash buckets after dumping pages of the database. If the index is corrupt, rebuild it.
40.847 – NOHIDDEN
Not allowed to modify hidden ACEs Explanation: Access to hidden ACEs requires OpenVMS SECURITY privilege. User Action: See your system manager.
40.848 – NOIDXSTAR
Index <str> will not be verified because it is not stored in a live area. Explanation: All indexes should be stored in a storage area that is an active area. The metadata for the index you are verifying incorrectly indicates that the index is either inactive or in a snapshot area. User Action: Restore and recover the database from backup.
40.849 – NOIMAGE
Unable to invoke image <str>. Explanation: An error occurred during image invocation for an RMU command. User Action: Check that Oracle Rdb has been installed correctly.
40.850 – NOINSEGSPAM
no line index and no storage segments on a Space Mgmt. page Explanation: Space management pages have a different format from data pages. The operation attempted to reference this space management page in data page format. User Action: Try the operation again using either the proper data page number or the space management page format, depending on the intended operation.
40.851 – NOINTEGRATE
Root and/or path name is too long to perform a CDD integration Explanation: The sum of the lengths of the root name and the path name must be less than 230 characters for this operation. User Action: Use SQL to perform the integration.
40.852 – NOIOCHAN
no more I/O channels available on system Explanation: The process has attempted to exceed the number of I/O channels that can be assigned at one time; this value is "per node". User Action: Check the VMS SYSGEN parameter CHANNELCNT to ensure that it is large enough to properly service the application.
40.853 – NOLAREAFOUND
No logical areas match the specified selection parameters Explanation: No logical areas have been found for the specified selection parameters. User Action: Correct the error and try again.
40.854 – NOLCKMGR
Unable to initialize lock manager. Explanation: An error occurred during initialization of the Oracle Rdb lock manager. User Action: Check that Oracle Rdb has been installed correctly.
40.855 – NOLIBRARIAN
Cannot locate LIBRARIAN image Explanation: An operation involving a tape librarian was unable to locate an implementation of the Oracle Media Manager interface. User Action: Make sure that a media manager image is installed and located at the specified location.
40.856 – NOLINE
line <num> is unused or locked Explanation: You attempted to display unused or locked lines. User Action: This is not allowed. Unused or locked lines cannot be displayed.
40.857 – NOLIST
list of parameter values not allowed - check use of comma (,) Explanation: More than one parameter was specified in a comma-separated list, but the command accepts only one parameter. User Action: Remove the extra parameters.
40.858 – NOLOADVIR
Cannot load virtual field <str>. Explanation: Virtual fields cannot be loaded. User Action: Remove the name of the virtual field from the list of fields to be loaded by the RMU Load command.
40.859 – NOLOCKSOUT
no locks on this node with the specified qualifiers Explanation: No locks were found on the current node that match the specified command qualifiers. This usually indicates that either no monitors are active on this node, or no databases are currently being accessed on this node. User Action: If databases are active on the node, try using a less restrictive set of command qualifiers.
40.860 – NOMEM
Network error: Insufficient virtual memory. Explanation: An operation exhausted the system pool of dynamic memory, and either the client or server process cannot allocate virtual memory. The system cannot complete the request. User Action: Free the resources you are holding, or increase the existing pool of memory. Take these actions first on the server system, and if the problem is not resolved, take these actions on the client system. Most likely, the problem is on the server system.
40.861 – NOMIXASC
do not mix data types with the ASCII switch Explanation: You attempted an RMU ALTER DEPOSIT operation with different, conflicting data types or radixes. User Action: This is not allowed. Issue the RMU ALTER DEPOSIT DATA command with compatible data types or radixes.
40.862 – NOMONHOMEDIR
monitor home directory is not valid
Explanation: The directory from which the monitor was invoked
is no longer valid. Typically, this occurs when the monitor is
invoked by the installation IVP procedure, which subsequently
deletes the invocation directory. However, this can also occur
during day-to-day operations whenever directories are deleted.
When the monitor home directory does not exist, the monitor is
unable to invoke other server processes, such as the database
recovery process ("DBR") and the AIJ Backup Server ("ABS"), and
those server processes will be unable to create temporary work
files.
User Action: Stop the monitor, and restart it from a valid
directory.
40.863 – NOMONITOR
database monitor is not running Explanation: The database monitor process is not running or has abnormally terminated. There may also be a problem with the mailbox used to communicate with the database monitor. User Action: Check the system to determine whether or not the database monitor process is actually running on your system. If the database monitor process does not appear to be running, have your database administrator start the monitor, and try again. If the database monitor process appears to be running properly, then the problem may be related to the mailbox by which user processes communicate with the monitor process. Make sure the logical <fac>$MAILBOX_CHANNEL resides in a logical name table accessible to all users, typically the LNM$PERMANENT_MAILBOX name table. If the monitor abnormally terminated, a bugcheck dump will normally be written to the monitor log. Search the monitor log for a string of five asterisks (*****) using the SEARCH/WINDOW command. You will see a line with a format similar to this: ***** Exception at <address> : <database module name> + <offset> %facility-severity-text, <error text> The exception line will be followed by one or more additional errors that will help you to determine what caused the monitor process to fail. Typically, the problem is caused by insufficient quotas or system resources. However, other possible causes include misdefined or undefined filename logicals. Depending on the cause of the problem, take the appropriate action. If you are uncertain of what to do, contact your Oracle support representative for assistance.
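For example, a bugcheck exception can be located in the monitor log as follows (the log file name below is illustrative; use your actual monitor log file):
$ SEARCH/WINDOW=(0,8) RDMMON.LOG "*****"
The /WINDOW qualifier displays the matching line plus the following lines, which contain the secondary errors described above.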
40.864 – NOMOREGB
<num> global buffers not available to bind; <num> free out of <num> Explanation: Your attempt to bind to the database failed because there are not enough global buffers to allow your process to bind to the database. User Action: There are four ways to alleviate this problem. (1) Try to bind to the same database on another node, if you are using a VAXcluster. (2) Wait until another user unbinds from the database and retry the bind. (3) Increase the number of global buffers used for the database. (4) Decrease the maximum number of global buffers any one user can use, in order to allow more users to bind to the database. Please see your DBA for help.
40.865 – NOMPTONLINE
Online operation cannot be performed if page transfers via memory are enabled. Explanation: The backup, copy_database, or move_area facility is not able to handle modified database pages that have not been written to disk. This situation may arise if page transfers via memory are enabled for global buffers. User Action: Either disable page transfers via memory for the database or perform the operation offline.
40.866 – NOMSGFILE
Unable to load message file <str> Explanation: An error occurred while loading a message file. User Action: Check that Oracle Rdb has been installed correctly.
40.867 – NOMTDSUPPORT
The specified multiple tape density cannot be translated to an equivalent tape density value Explanation: A multiple tape density value has been specified for this tape device, but multiple tape density values are not supported for this device. Since the specified multiple tape density value cannot be translated to an equivalent tape density value, the DEFAULT tape density will be used for this device. User Action: Consider specifying a non-multiple tape density value for this device using the DENSITY qualifier instead of the DATA_FORMAT qualifier, or specify a multiple tape density value with the DATA_FORMAT qualifier that can be translated to a non-multiple tape density value.
40.868 – NONEGVAL
"<str>" qualifier is negated - value must not be supplied Explanation: You negated a qualifier and also supplied a value. This combination is inconsistent. User Action: Enter the command again, and do not specify a value for a negated qualifier.
40.869 – NONODE
no node name is allowed in the file specification Explanation: A node name was found in the file specification. Node names cannot be used. User Action: Use a file name without a node specification.
40.870 – NONODEPCL
no pointer clusters in a b-tree node Explanation: You cannot display a pointer cluster of a B-tree node using the CLUSTER clause. B-tree nodes do not contain pointer clusters. User Action: This is not a valid RMU ALTER command. Use DISPLAY LINE to see the B-tree node.
40.871 – NOONLREC
No online RECOVERY possible Explanation: Online operation is not available when recovery requires altering the transaction state of the database as a whole. Online operation is available when the transaction states of areas marked inconsistent are to be advanced to the state of the database as a whole. User Action: Perform the operation using NOONLINE.
40.872 – NOOPTCMTJRNL
cannot optimize -- commit-to-journal optimization is enabled Explanation: The 'commit-to-journal' database parameter is enabled. When this parameter is enabled, AIJ optimization cannot be performed. User Action: Use the original, non-optimized AIJ file if needed for recovery. As an alternative, disable the commit-to-journal feature.
40.873 – NOOPTCUR
cannot optimize -- AIJ file <str> is the current AIJ file Explanation: The current AIJ file cannot be optimized, because the optimized AIJ file would not be equivalent to the current AIJ file if more journaling was done after optimization. User Action: Start a new AIJ file, and then optimize the AIJ file in question.
40.874 – NOOPTNOQUIET
cannot optimize -- AIJ file <str> backed up via a no-quiet-point backup Explanation: An AIJ file that was backed up with a no-quiet-point backup cannot be optimized, because a no-quiet-point backup can leave incomplete transactions in an AIJ file. AIJ optimization cannot handle incomplete transactions within an AIJ file. User Action: No user action is required. This AIJ file cannot be optimized.
40.875 – NOOPTOPT
cannot optimize -- AIJ file <str> is already optimized Explanation: An optimized AIJ file cannot be optimized again. User Action: No user action is required.
40.876 – NOOPTPRVNOQUIET
cannot optimize -- AIJ file <str> had its previous AIJ file backed up via a no-quiet-point backup Explanation: An AIJ file for which the previous AIJ file was backed up with a no-quiet-point backup, cannot be optimized. A no-quiet-point backup can leave incomplete transactions in an AIJ file, and AIJ optimization cannot handle incomplete transactions within an AIJ file. User Action: No user action is required. This AIJ file cannot be optimized.
40.877 – NOOPTUNRES
cannot optimize -- AIJ file <str> has unresolved transactions Explanation: The AIJ file being optimized has unresolved distributed transactions. AIJ optimization cannot handle unresolved transactions, so it must abort. User Action: Use the original, non-optimized AIJ file if needed for recovery.
40.878 – NOPARAAUDIT
Cannot load audit records in parallel. Explanation: Parallel loading is not supported for the RMU Load command with the Audit qualifier. A non-parallel load of the audit information will occur. User Action: Do not specify the Parallel and Audit qualifiers in the same load operation.
40.879 – NOPARASSTR
Cannot load segmented strings in parallel. Explanation: Parallel loading is not supported for data containing segmented strings. A non-parallel load of the data will occur. User Action: Do not specify the Parallel qualifier when loading data files that contain segmented strings.
40.880 – NOPARLDBATCH
Only one executor can be specified for a BATCH_UPDATE transaction Explanation: A parallel load operation was attempted with multiple executors and the TRANSACTION_TYPE set to BATCH_UPDATE. User Action: Specify another transaction type, perform a non-parallel load operation, or specify only one executor.
40.881 – NOPATHNAME
no CDD pathname was available - using pathname <str> Explanation: No CDD Pathname was available for this database, so a pathname was derived from the file specification. User Action: None
40.882 – NOPRIOR
no PRIOR dbkey in this pointer cluster Explanation: An RMU ALTER DISPLAY or DEPOSIT LINE m CLUSTER n PRIOR [= <dbkey>] command was issued for a pointer cluster that does not contain PRIOR pointers. User Action: This is not allowed. PRIOR pointers cannot be displayed or altered in sets where they are not allowed.
40.883 – NOPRIV
no privilege for attempted operation Explanation: You attempted an operation that requires VMS privileges, and you do not have those privileges enabled. User Action: Examine the secondary message for more information.
40.884 – NOPRIVERR
no privileges for attempted operation Explanation: There are insufficient privileges for the operation to be performed. User Action: Get sufficient privileges to perform the operation.
40.885 – NORCGTRCE
For segmented strings ROW_COUNT cannot exceed COMMIT_EVERY, setting ROW_COUNT equal to COMMIT_EVERY value of <num>. Explanation: Data containing segmented strings cannot be loaded if the value specified for ROW_COUNT or the default ROW_COUNT value exceeds the value specified for COMMIT_EVERY. ROW_COUNT is set equal to the value of COMMIT_EVERY and the load continues. User Action: If the table being loaded contains segmented string fields specify a value of ROW_COUNT that is equal to or less than the value of COMMIT_EVERY.
40.886 – NORDOALL
the expression ALL cannot be represented using RDO Explanation: The expression using the ALL predicate is not supported in RDO. User Action: Use the Language=SQL qualifier of the RMU Extract command.
40.887 – NORDOANSI
ANSI style protections can not be defined using RDO - ignored Explanation: The database has ANSI-style ACLs, which cannot be represented using RDO. User Action: Use the Language=SQL qualifier of the RMU Extract command to extract ANSI protections.
40.888 – NORDOMULTSCH
ANSI style multischema can not be defined using RDO - ignored Explanation: The database has an ANSI-style multischema definition, which cannot be represented using RDO. User Action: Use the Language=SQL qualifier of the RMU Extract command to extract a multischema database definition.
40.889 – NORDOVERT
the expression storage area <str> cannot be represented in RDO Explanation: Indexes cannot be vertically partitioned using RDO. User Action: Use the Language=SQL qualifier of the RMU Extract command.
40.890 – NORDOVRP
Vertical record partitioning cannot be defined using RDO - ignored. Explanation: The database has vertical record partitioning defined in a storage map which cannot be represented using RDO. User Action: Use the Language=SQL qualifier of the RMU Extract command to extract storage maps with vertical record partitioning.
40.891 – NORELIDACC
The RDB$REL_REL_ID_NDX index could not be processed Explanation: The RDB$REL_REL_ID_NDX index must be scanned to identify the valid relation record types. Either the index, the RDB$DATABASE relation, or the root file is corrupt. User Action: Locate and correct the corruption with RMU ALTER, and try again. Alternatively, you can use the RMU Restore command to restore the database from a backup file.
40.892 – NOREQIDT
reached internal maximum number of simultaneous timer requests Explanation: All allocated timer request ID slots, used to uniquely identify timers, are in use. Therefore, this timer request could not be serviced at this time.
40.893 – NORTNSRC
source for <str> "<str>" missing in module "<str>" - routine not extracted Explanation: RMU Extract uses the original SQL source text from the RDB$ROUTINE_SOURCE column in the RDB$ROUTINES system table when you specify the Item=Modules qualifier. This source has been removed, probably to restrict users from viewing the source for the stored module. User Action: If the module is created by the RdbWEB software, then this is the expected behavior; regenerate the module instead of using RMU Extract. If this is a customer defined module, then investigate why the source is missing.
40.894 – NORTUPB
no more user slots are available in the database Explanation: The maximum number of users are already accessing your database. User Action: Try again later.
40.895 – NOSEGUNL
Table "<str>" contains at least one segmented string column
40.896 – NOSEQENT
sequence id <num> has no valid entry in the root file Explanation: The sequence with this sequence id is present in the RDB$SEQUENCES table but does not have a valid entry in the root file. Either the sequence is not marked as being used in the root file, or the condition minvalue <= next value <= maxvalue is false.
40.897 – NOSEQROW
sequence id <num> has an entry in the root file but no row in RDB$SEQUENCES Explanation: The sequence with the indicated id has a root file entry but no corresponding row in the RDB$SEQUENCES table.
40.898 – NOSHAREDMEMORY
Cannot create shared memory. Explanation: A parallel operation was specified that required a global memory section to be shared among the processes. The memory section could not be created. User Action: Look at the secondary message that describes the reason for the failure of the shared memory creation.
40.899 – NOSHUTDOWN
database shutdown not allowed while backup processes are active Explanation: One or more database or AIJ backup utilities are active. Database shutdown is not permitted while these types of utilities are active. User Action: Wait for the utilities to complete, or shut down the database using the /ABORT=DELPRC qualifier.
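For example, a command of the following form shuts down the database and deletes the processes of active users (the root file name MF_PERSONNEL is illustrative):

$ RMU/CLOSE/ABORT=DELPRC MF_PERSONNEL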
40.900 – NOSIP
transaction is not a snapshot transaction Explanation: You have already started a transaction that is not a snapshot transaction. User Action: Use COMMIT or ROLLBACK to terminate your current transaction. Use READY BATCH RETRIEVAL to start a new snapshot transaction.
40.901 – NOSNAPS
snapshots are not allowed or not enabled for area <str> Explanation: Snapshots are not allowed or not enabled for this area. User Action: This is a normal situation created by the database definition or by a change- or modify-database command. Check with your DBA to make sure this situation is desirable.
40.902 – NOSPACE
no space available on page for new dbkey Explanation: You issued an RMU ALTER DEPOSIT command that requires more space than is available on the page. User Action: Space must be created by moving data from the current page to other pages in the database. While this is difficult, it is not impossible.
40.903 – NOSRVSUP
Network error: the operation is not supported by this version of the server. Explanation: An operation was requested of the RMU server that it cannot perform because the operation is not available in that version of the server. User Action: None.
40.904 – NOSTATS
statistics are not enabled for <str> Explanation: An attempt was made to show statistics for a database that currently has statistics collection disabled. User Action: Enable statistics and try again.
40.905 – NOSUCHACE
ACE for object <str>, does not exist<str> Explanation: The specified ACE could not be found in the database root file ACL. User Action: Use the RMU Show Privilege command to display the existing ACEs in the root file ACL. Then try your command again, if appropriate.
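For example, the following command displays the existing ACEs in the root file ACL (the root file name MF_PERSONNEL is illustrative):

$ RMU/SHOW PRIVILEGE MF_PERSONNEL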
40.906 – NOSUCHAIJ
no such AIJ journal "<str>" Explanation: The specified AIJ journal does not exist for the database. User Action: You may select an AIJ journal using either the AIJ name or the default or current AIJ file specification. The list of valid AIJ journals can be obtained by dumping the database header information.
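For example, a command of this form dumps the database header information, which includes the list of configured AIJ journals (the root file name MF_PERSONNEL is illustrative):

$ RMU/DUMP/HEADER MF_PERSONNEL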
40.907 – NOSUCHUSER
unknown user "<str>" Explanation: An attempt was made to access information with a user name unknown to the database (for example, sending mail to the monitor or attempting to execute a database recovery process (DBR)) or to access information with a user name unknown to the operating system (for example, no record in the UAF file). User Action: Make sure the user name is spelled correctly and has been properly identified to either the database or the operating system. Do not attempt to run DBR from DCL; this is not allowed, because the system will automatically manage database recovery. Be sure the monitor user name is correctly specified.
40.908 – NOSUPPORT
<str> is not supported on this platform. Explanation: You are trying to use a command or qualifier that is not available on this platform.
40.909 – NOSYSTREC
System record not found at logical dbkey <num>:<num>:<num>. Explanation: A system record was expected but not found at the given dbkey. User Action: Try correcting the error with the RMU RESTORE command or the SQL IMPORT statement. Follow up with another verification of the database.
40.910 – NOT1STVOL
<str> is not the start of a backup volume set Explanation: The mounted volume is not the first of the volume set. User Action: Mount the correct volume.
40.911 – NOTABM
command not allowed - not ABM page Explanation: The command is being issued on a non-ABM page. User Action: Issue an RMU ALTER PAGE command to change the context to an ABM page.
40.912 – NOTABMPAG
page <num> in area <str> is not an ABM page Explanation: A page in an ABM page chain is not an ABM page. This probably indicates a corruption in the RDB$SYSTEM area. User Action: If there is a corruption that causes this error, correct the error with the RMU Restore command, and verify the database again.
40.913 – NOTAIP
command not allowed - not AIP page Explanation: The RMU ALTER ENTRY command is being issued on a non-AIP page. User Action: Issue an RMU ALTER PAGE command to change the context to an AIP page.
40.914 – NOTALLDAT
Not all data for database <str> can be loaded from system tables - verify continuing. Explanation: Because of database corruption, not all the data necessary for verification of the specified database object can be loaded. The verify will continue. User Action: Look for other verify diagnostics that indicate the source of this corruption.
40.915 – NOTALSAUTO
AIJ Log Server is not automatically invoked by database monitor Explanation: To initiate database replication on the master database, the AIJ Log Server must be invoked automatically by the database monitor. User Action: Change the AIJ Log Server invocation mode from "MANUAL" to "AUTOMATIC".
40.916 – NOTANSI
tape is not valid ANSI format Explanation: The tape labels do not conform to ANSI standards. User Action: If you are attempting an RMU RESTORE operation, mount the correct tape. If you are attempting an RMU BACKUP operation, reinitialize this tape.
40.917 – NOTANSI2
<str> are not supported in ANSI/ISO SQL2 - ignored Explanation: ANSI SQL2 does not support this feature, so this definition cannot be represented in ANSI SQL2. User Action: You may decide to alter the definition so that it can be represented in ANSI SQL2 or use another language to extract the definition.
40.918 – NOTANSI89
<str> are not supported in ANSI/ISO SQL - ignored Explanation: ANSI SQL-89 does not support this feature, so this definition cannot be represented in ANSI SQL. User Action: You may decide to alter the definition so that it can be represented in ANSI SQL or use another language to extract the definition.
40.919 – NOTBACFIL
<str> is not a valid backup file Explanation: The backup file specified could not be used. The specification may be in error, or there may be an operational error such as an improperly mounted tape or a file damaged during transmission. If you have issued the RMU Restore command, the file format may not be valid. User Action: Correct the backup file specification or operational error. In the case of a disk-format backup file, if the OpenVMS file attributes have been damaged or lost, it may be possible to reset them with a command similar to SET FILE /ATTRIBUTE=(MRS:32256,LRL:32256,RFM:FIX).
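For example, the following command resets the OpenVMS file attributes of a disk-format backup file (the file name MF_PERSONNEL.RBF is illustrative):

$ SET FILE /ATTRIBUTE=(MRS:32256,LRL:32256,RFM:FIX) MF_PERSONNEL.RBF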
40.920 – NOTBLKSIZ
invalid block size in backup file
40.921 – NOTBOOL
expression in AND, OR, or NOT was not a Boolean Explanation: The Boolean evaluator was processing an expression or subexpression of the form "A AND B", "A OR B", or "NOT A". Either the "A" or the "B" expression was not in the proper form. The correct forms are "NOT X", "X EQ Y", "X NE Y", "X LT Y", "X GT Y", "X LE Y", "X GE Y", "X CONTAINS Y", or "X MATCHES Y". The operand of NOT, and both sides of AND and OR expressions, must be Boolean expressions. User Action: Rewrite the expression to have the proper format.
40.922 – NOTBOUND
command not allowed - not currently bound to a database Explanation: A command has been issued which requires that the user be attached to a database. User Action: Issue an RMU ALTER ATTACH command to attach (bind) to a database.
40.923 – NOTCORRUPT
storage area <str> is not corrupt Explanation: An RMU ALTER UNCORRUPT command was issued to uncorrupt the named storage area, but the area was not currently marked corrupt. User Action: None.
40.924 – NOTDBSMGR
You must run in the dbsmgr account. Explanation: RMU can only be invoked from the dbsmgr account. User Action: Log in as dbsmgr before invoking RMU.
40.925 – NOTDSKFIL
filename does not specify disk device type Explanation: A file name was specified that does not reference a disk-oriented device type. User Action: Check the file name for a proper disk device type.
40.926 – NOTENCRYPT
Save set is not encrypted. Ignoring /ENCRYPT qualifier Explanation: The save set is not encrypted, but a decryption key was specified. User Action: Remove the encryption key from the command.
40.927 – NOTENUFBUF
requested number of global buffers (<num>) is more than USER LIMIT (<num>) Explanation: The user has requested more global buffers than are allowed for a single user. User Action: Either reduce the number of requested global buffers or increase the number of global buffers that a user may allocate. See the documentation for a description of the USER LIMIT clause used when creating or opening a database.
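For example, SQL statements of the following form raise the per-user global buffer limit (the database name and limit value are illustrative; changing global buffer parameters may require exclusive access to the database):

$ SQL
SQL> ALTER DATABASE FILENAME MF_PERSONNEL
cont>     GLOBAL BUFFERS (USER LIMIT IS 100);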
40.928 – NOTEXTENDED
area <str> cannot be extended to <num> page(s) Explanation: The extension of the specified storage area was not possible. This condition is possible if the specified new size is less than the current storage area allocation. This condition can also occur when attempting to change the size of a WORM device. User Action: Specify a new page count that is larger than the current area allocation.
40.929 – NOTFIRVOL
Backup set file number <num>, the first backup file specified, is not the first file of the backup set. Specify all the backup set files or devices in the correct order. Explanation: The first backup set file specified was not the first file of the backup set created by the backup command. User Action: Repeat the restore, specifying all the backup set files in the correct order.
40.930 – NOTIMPLYET
feature is not implemented yet Explanation: You attempted to access a feature that has been planned but has not been implemented yet. User Action: Avoid this feature.
40.931 – NOTINRANGE
value not within specified range of acceptable values Explanation: The value of the translated logical name is not in the range of acceptable values. User Action: Delete the logical name, or redefine it with a value in the acceptable range.
40.932 – NOTIP
no transaction in progress Explanation: You attempted to execute a DML verb, but there is no transaction in progress yet. User Action: Execute a READY statement before executing any other DML statements.
40.933 – NOTLAREA
"<str>" is not a logical area name Explanation: The logical area name is incorrect or misspelled. User Action: Correct the error and try again.
40.934 – NOTONABM
command not allowed on ABM page Explanation: The command is being issued on an ABM page. User Action: Issue a PAGE command to change the context to a non-ABM page.
40.935 – NOTONAIP
command not allowed on AIP page Explanation: The command is being issued on an AIP page. User Action: Issue an RMU ALTER PAGE command to change the context to a non-AIP page.
40.936 – NOTRANAPP
no transactions in this journal were applied Explanation: This journal file contains transactions that cannot be applied to the specified backup of the database. User Action: Be sure you are using the correct database backup and journal file.
40.937 – NOTREQVFY
not all requested verifications have been performed Explanation: It is not possible to access some system relations. The database is probably corrupt. Only some of the requested verifications were performed. User Action: The database may need to be restored.
40.938 – NOTROOT
not a root file Explanation: The specified file is not a database root file. User Action: Specify a database root file and try again.
40.939 – NOTSFDB
This command is not allowed for a single file database Explanation: This command is illegal for a single-file database. User Action: Use this command only with a multifile database.
40.940 – NOTSNBLK
no more user slots are available in the database Explanation: The maximum number of users are already accessing your database. User Action: Try again later.
40.941 – NOTSPAMPAG
current page is not a space management page Explanation: The operation is trying to access a data page using the space management page format. User Action: If you want access to a space management page, then make the current page a space management page and repeat the operation. If the current page is correct, then you can only reference it with the data page format, and a new operation is needed.
40.942 – NOTSQLCONS
constraints are not supported in SQL - syntax conversion not possible - ignored Explanation: SQL does not support constraints outside of table definitions, so this definition cannot be represented in SQL. User Action: You may decide to alter the definition so that it can be represented in SQL or use the Language=RDO qualifier of the RMU Extract command to extract the definition.
40.943 – NOTSTAREA
storage area <str> does not exist in this database Explanation: The storage area name is incorrect. It is probably misspelled. User Action: Use a valid storage area name.
40.944 – NOTSUPFORVER
The function <str> is not supported for <str> Explanation: This function or qualifier is not supported for this version of Rdb. User Action: Repeat the command but do not specify this function or qualifier.
40.945 – NOTSUPPORTED
<str> are not supported in RDO - syntax conversion not possible for <str> <str>.<str> Explanation: RDO does not support this feature, so this definition cannot be represented in RDO. User Action: You may decide to alter the definition so that it can be represented in RDO or use the Language=SQL qualifier of the RMU Extract command to extract the definition.
40.946 – NOTSYSCONCEAL
non-system concealed device name in filename Explanation: A concealed device name must be defined in the system logical table. User Action: If the device name has to be concealed, then define it in the system logical table.
40.947 – NOTUNLFIL
Input file was not created by RMU UNLOAD Explanation: The input file does not contain the header information recorded by the RMU Unload command. User Action: Check to see that the correct file was specified.
40.948 – NOTVRPREC
Dbkey <num>:<num>:<num> is not vertically partitioned. Explanation: A record was found that should have been vertically partitioned but was not. User Action: Verify the page in question, and see if the database needs to be restored.
40.949 – NOTVRPSEC
Vertical partition <num> points to non-partitioned record at <num>:<num>:<num>. Explanation: The primary vertically partitioned segment points to a secondary record that is not vertically partitioned. User Action: Verify the page in question, and see if the database needs to be restored.
40.950 – NOTWRMHOL
Expected a WORM hole in WORM storage area <str> at page <num>. Explanation: A WORM storage area page, beyond the area's last initialized page, is not a WORM hole. It should be a WORM hole, because it should have never been written to. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement, and verify the database again.
40.951 – NOTZEREOF
Storage area <str> is not a WORM storage area. It should have a logical end-of-file of zero. Logical end-of-file is <num>. Explanation: A storage area not having the WORM property has a non-zero logical end-of-file. Non-WORM areas must have zero logical end-of-files. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement, and verify the database again.
40.952 – NOT_BOUND
database is not bound Explanation: You have not bound to a database yet, or you have unbound the database and have not bound to another one yet. User Action: Bind to a database before continuing.
40.953 – NOT_FULL
area <str> is not marked as full Explanation: You issued an RMU ALTER MAKE NOT FULL command to unmark the full flag of the named storage area, but the area was not currently marked as full. User Action: None.
40.954 – NOT_LARDY
area for <num>:<num>:<num> not in proper ready mode Explanation: You attempted to access a logical area for which you have not declared your intentions. User Action: Retry a ready operation later.
40.955 – NOT_READY
storage area <str> not readied Explanation: You attempted to access an area for which you have not declared your intentions. User Action: If the area is included in your subschema, you can use the READY statement to prepare it for processing.
40.956 – NOT_UPDATE
storage area <str> not readied in update usage mode Explanation: You attempted to modify the contents of an area without having declared your intentions. User Action: If you have not readied the area yet, you can READY for UPDATE. If you have already readied it, you must abort your transaction by executing a ROLLBACK before you can READY for UPDATE.
40.957 – NOUSER
invalid user number Explanation: The ROOT USER command references a user that does not exist. User Action: Issue the command, referencing a valid user.
40.958 – NOVERENT
Unable to verify external routine <str>. Image name is <str>. Entry point is <str>. Explanation: RMU cannot verify that the external routine can be activated. This message is displayed if RMU has been installed with privileges (this is the default configuration) and either of the following conditions exists: 1. The image for the external routine can be located via /SYSTEM logicals but is not installed with the VMS INSTALL utility. 2. The image for the external routine can be located, but not via only /SYSTEM logicals. User Action: Specify the image location in the external routine using system-wide executive-mode logical names and install the image as a known image. Alternatively, retry the verify operation using a nonprivileged RMU.
40.959 – NOWILD
no wild cards are allowed in the file specification
Explanation: Wild-card characters ("*" and "%") cannot be used
in that file specification.
User Action: Use a file specification without wild-card
characters.
40.960 – NOWORMSPT
WORM areas are not supported Explanation: An attempt was made to declare an area as having the WORM attribute. This attribute is no longer supported. User Action: Contact your Oracle support representative for assistance.
40.961 – NOWRMAREA
No WORM areas found with journaling disabled. Explanation: You are executing the RMU Repair command with the Worm_Segments qualifier. You specified no arguments for the Areas qualifier, and the database does not have any WORM areas with journaling disabled. User Action: Verify that you are using the correct database.
40.962 – NOWRMFLD
No segmented string fields found for relation <str>. Explanation: You are executing the RMU Repair command with the Worm_Segments qualifier and you listed a relation that does not have any segmented string fields. User Action: Verify that you listed the relations with segmented string fields whose segments may have been stored in the WORM areas in need of recovery.
40.963 – NO_EXEC
No executor processes are available at this time. Explanation: No executor processes are available to process your requests at this time. User Action: Try again later.
40.964 – NO_REQUEST
Network error: Request buffer was not available. Explanation: Request buffer was not available. User Action: Contact your Oracle support representative for assistance.
40.965 – NPARQUAL
Qualifier not valid for parallel operation "<str>". Explanation: A qualifier was specified that cannot be used for parallel operations. User Action: Remove this qualifier, or do not attempt a parallel operation.
40.966 – NTSNPFCON
Field CONSISTENT is not valid for snapshot files Explanation: The field that you were trying to display or deposit does not exist for a snapshot area. User Action: This is not a legal RMU ALTER command. You can only display and deposit legal snapshot fields.
40.967 – NTSNPFNF
Field NOT FULL is not valid for snapshot files Explanation: The field that you were trying to display or deposit does not exist for a snapshot area. User Action: This is not a legal RMU ALTER command. You can only display and deposit legal snapshot fields.
40.968 – NTSNPFUNC
Field UNCORRUPT is not valid for snapshot files Explanation: The field that you were trying to display or deposit does not exist for a snapshot area. User Action: This is not a legal RMU ALTER command. You can only display and deposit legal snapshot fields.
40.969 – OLAPONPAG
unexpected overlap on page <num> free space end offset : <num> (hex) minimum offset of any line : <num> (hex) Explanation: An overlap was found between the end of free space and the beginning of the line closest to the beginning of the page. This could be caused by the corruption of locked free space length, free space length, or the line index. User Action: Dump the page in question to determine the corruption. Restore the database and verify again.
40.970 – OPERCLOSE
database operator requested database shutdown Explanation: Your program has been terminated because the database operator shut down the database you were using. User Action: Try again later after the database shutdown is complete.
40.971 – OPERFAIL
error requesting operator service Explanation: Communication with the operator through OPCOM failed. User Action: Correct the problem with OPCOM or reissue the command interactively.
40.972 – OPERNOTIFY
system operator notification: <str> Explanation: The indicated message was sent to one of the configured system operators. User Action: Examine the indicated message and perform the appropriate operation.
40.973 – OPERSHUTDN
database operator requested monitor process shutdown Explanation: Your program has been terminated because the database operator shut down the database monitor process. User Action: Try again later after the database shutdown is complete.
40.974 – OPNFILERR
error opening file <str> Explanation: An error occurred when an attempt was made to open a file. Either the file does not exist, or you do not have sufficient privileges to perform the operation. User Action: Make sure the file exists. If the file exists, you need to get sufficient privileges to access the file.
40.975 – OPNSNPARE
opened snapshot area <str> for <str> <str> Explanation: The indicated storage area has been opened, either because it is the current storage area being verified, or set chain verification has been specified and a storage set occurrence requires opening the storage area to verify the set chain.
40.976 – OPTDDLREC
TSN <num>:<num> contains DDL information that cannot be optimized Explanation: The identified transaction contains an AIJ record with DDL information. DDL information cannot be optimized and forces a flush of the accumulated SORT information. Too many of these operations limit the effectiveness of the resulting optimized after-image journal and decrease the overall optimization performance. User Action: No user action is required.
40.977 – OPTEXCMAX
TSN <num>:<num> record size <num> exceeds maximum <num> record size Explanation: The identified transaction contains an AIJ record whose size exceeds the maximum specified sort record size. During AIJ optimization, fixed-length data records are passed to the sort utility. By default, the size of the sort records is 1548 bytes in length, which is also the maximum value allowed. The sort record length affects the amount of disk space required to complete the AIJ optimization operation. The size of the record passed to the sort utility can be adjusted using the <fac>$BIND_OPTIMIZE_AIJ_RECLEN logical. User Action: If possible, increase the size of the sort record using the <fac>$BIND_OPTIMIZE_AIJ_RECLEN logical.
40.978 – OPTEXCSRT
AIJ record count exceeded specified <num> sort threshold Explanation: The number of AIJ records processed exceeded the maximum sort threshold specified by the <fac>$BIND_OPT_SORT_THRESHOLD logical name. This is not a fatal error. User Action: None. Use of the <fac>$BIND_OPT_SORT_THRESHOLD may reduce the sort work file disk space required for the AIJ optimization operation. However, this may result in a larger output file.
40.979 – OPTEXCTXN
TSN <num>:<num> error count exceeded <num> failure threshold Explanation: The number of AIJ optimize errors exceeded the transaction error threshold specified by the <fac>$BIND_OPT_TXN_THRESHOLD logical name. This is not a fatal error. User Action: None. The remainder of the transaction contents are written directly to the optimized AIJ file. Use of the <fac>$BIND_OPT_TXN_THRESHOLD logical name may actually increase the AIJ optimize operation performance as the number of required sort operations is reduced. However, this may result in a larger output file.
40.980 – OPTINCONSIS
optimized AIJ file is inconsistent with the database Explanation: The database and/or some areas within the database are not consistent with the optimized AIJ file. The last transaction committed to the database and/or to some database areas is not the same as the last transaction committed to the database at the time the optimized AIJ file's original AIJ file was created. To use an optimized AIJ file for recovery, it must be consistent with the database and all areas. User Action: Use the original, non-optimized AIJ file to do the recovery.
40.981 – OPTIONERR
error processing option <str> Explanation: An internal error was detected while processing this option. User Action: Contact your Oracle support representative for assistance. You will need to provide the command which caused this error.
40.982 – OPTLIN
<num> : <str>
40.983 – OPTNOAREAREC
cannot do by-area recovery with an optimized AIJ file Explanation: A recover-by-area operation was attempted with an optimized AIJ file. Optimized AIJ files do not support recovery by area, so the recovery operation was aborted. User Action: Use the original, non-optimized AIJ file to do the by-area recovery.
40.984 – OPTNOUNTILREC
cannot do a /RECOVER/UNTIL with an optimized AIJ file Explanation: A recover operation specifying an "until" time is not allowed with an optimized AIJ file. No recovery is performed if this condition is specified. User Action: Use the original, non-optimized AIJ file to do the /RECOVER/UNTIL operation.
40.985 – OPTRECLEN
AIJ optimization record length was <num> characters in length Explanation: During AIJ optimization, fixed-length data records are passed to the sort utility. By default, the size of the sort records is 1548 bytes in length, which is also the maximum value allowed. The sort record length affects the amount of disk space required to complete the AIJ optimization operation. The size of the record passed to the sort utility can be adjusted using the <fac>$BIND_OPTIMIZE_AIJ_RECLEN logical. This message indicates the size of the largest AIJ record passed to the sort utility that was less than or equal to the maximum sort record length. User Action: No user action is required.
40.986 – OPTSRTSTAT
<str>: <num> Explanation: During optimization operations, statistics are often collected to aid the user in tuning. This message displays a single statistic.
40.987 – OPTSYNTAX
Syntax error in options file <str>'<str>'<str> Explanation: A command from the options file has a syntax error. User Action: Edit the file and fix the indicated command.
40.988 – OUTFILDEL
Fatal error, output file deleted Explanation: A nonrecoverable error was encountered. The output file was deleted. User Action: Locate and correct the source of the original error.
40.989 – OUTFILNOTDEL
Fatal error, the output file is not deleted but may not be useable, <num> records have been unloaded. Explanation: A nonrecoverable error was encountered. The output file was not deleted. However, depending on the error, the output file may not be usable. User Action: Locate and correct the source of the original error.
40.990 – OVERFLOW
data conversion overflow Explanation: A loss of information would have occurred on a data item transformation. The operation was not performed. User Action: Correct the error and try the operation again.
40.991 – PAGBADARE
area <str>, page <num> maps incorrect storage area expected: <num>, found: <num> Explanation: The storage area id number on the database page is not the storage area id number of the storage area currently being verified. Verification of the page continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.992 – PAGCKSBAD
area <str>, page <num> contains an invalid checksum expected: <num>, found: <num> Explanation: The checksum on the page is incorrect. Verification of the page continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.993 – PAGECNSTNT
Page <num> is already consistent. Explanation: You are trying to set a page as consistent when it is not marked corrupt. User Action: Specify a different page, and enter the command again.
40.994 – PAGERRORS
<num> page error(s) encountered
<num> page header format error(s)
<num> page tail format error(s)
<num> area bitmap format error(s)
<num> area inventory format error(s)
<num> line index format error(s)
<num> segment format error(s)
<num> space management page format error(s)
<num> difference(s) in space management of data page(s)
Explanation: This message indicates how many page format errors
were encountered while scanning a particular storage area that
has space management pages. Data pages which contain format
errors will not have storage segments verified.
40.995 – PAGESIZETOOBIG
Database must be off-line to assign a page size of <num> to area <str>. Explanation: The page size specified for this area is larger than the buffer size supported by the database. To accommodate the larger page size, the buffer size of the database would have to grow. The buffer size cannot be changed for an online database. User Action: Either settle for a smaller page size or perform the operation off line.
40.996 – PAGFLUBAD
area <str>, page <num> (free space+locked free space+line index) length greater than the expected size expected less than <num>, found: <num> Explanation: The sum of the number of bytes of all storage records mapped by all the entries in the line index plus the number of bytes of free space, both locked and unlocked, is larger than expected. Further verification of the page, including segment and set occurrence chain verification, is abandoned. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.997 – PAGFRSODD
area <str>, page <num> has odd locked free space found: <num> bytes Explanation: Free space should be word-aligned. The page header indicates byte-aligned locked free space. Further verification of the page, including segment and set occurrence chain verification, is abandoned. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.998 – PAGINCON
area <str>, page <num> page is marked as inconsistent. Explanation: The page has been restored but not recovered. User Action: Make the page consistent by using the RMU Recover command.
40.999 – PAGINCONSIS
page is inconsistent Explanation: An attempt was made to fetch an inconsistent page. This page cannot be accessed until it is consistent. User Action: Take the proper action to make the page consistent. For example, perform a RESTORE/RECOVER operation for a data or AIP page, or a REPAIR operation for a SPAM or ABM page.
40.1000 – PAGISCRPT
Page <num> is already marked corrupt. Explanation: You are trying to mark as corrupt a page that is already marked corrupt. User Action: Specify a different page, and enter the command again.
40.1001 – PAGLILINV
area <str>, page <num> has a line index length greater than the expected size expected less than <num>, found: <num> Explanation: The maximum available space on a given database page is (page length - length of page header - length of one line index entry - length of line index count). The number of entries specified in the line index count multiplied by the length of an individual line index entry indicates a size greater than this maximum. Further verification of the page, including segment and set occurrence chain verification, is abandoned. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
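The maximum-available-space formula in the explanation above can be sketched as a small check. The field sizes below are illustrative assumptions for the sketch, not Oracle Rdb's actual on-disk layout:

```python
# Sketch of the PAGLILINV check described above. The field sizes are
# illustrative assumptions, not Oracle Rdb's actual on-disk values.
PAGE_HEADER_LEN = 16      # assumed page header size in bytes
LINE_ENTRY_LEN = 4        # assumed size of one line index entry
LINE_COUNT_LEN = 2        # assumed size of the line index count field

def max_line_index_bytes(page_length):
    """Maximum space the line index may occupy on a page."""
    return page_length - PAGE_HEADER_LEN - LINE_ENTRY_LEN - LINE_COUNT_LEN

def line_index_valid(page_length, line_count):
    """PAGLILINV is reported when the declared line count implies a
    line index larger than the page can hold."""
    return line_count * LINE_ENTRY_LEN <= max_line_index_bytes(page_length)
```

The same maximum also bounds the total free space checked by PAGLUBAD.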
40.1002 – PAGLIXBIG
area <str>, page <num> line index maps a total length greater than the expected size expected less than <num>, found: <num> Explanation: The sum of the number of bytes of all storage records mapped by all the entries in the line index is larger than expected. Further verification of the page, including segment and set occurrence chain verification, is abandoned. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1003 – PAGLIXFRS
area <str>, page <num> line index entry <num> maps free space at offset <num> (hex) Explanation: The line index entry specifies an offset that is partially or totally allocated to either locked or unlocked free space on the page. Further verification of the page, including segment and set occurrence chain verification, is abandoned. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1004 – PAGLIXHOL
area <str>, page <num> line <num>, line <num> hole of <num> bytes Explanation: Storage records are stored contiguously on the database page. A hole between two storage records, space claimed by neither storage record, is indicated by the specified line index entries. The line index or one or both of the storage records is incorrect. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1005 – PAGLIXODD
area <str>, page <num> line index entry <num> has an odd offset found offset <num> Explanation: Storage records on a database page should be word-aligned. The line index entry indicates a byte-aligned storage record. Further verification of the page, including segment and set occurrence chain verification, is abandoned. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1006 – PAGLIXOVL
area <str>, page <num> line <num>, line <num> overlap of <num> bytes Explanation: Storage records are stored contiguously on the database page. An overlap of two storage records, space claimed by both storage records, is indicated by the specified line index entries. The line index or one or both of the storage records is incorrect. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
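The contiguity rule behind PAGLIXHOL and PAGLIXOVL can be sketched as follows; modeling each line index entry as an (offset, length) pair is an assumption for illustration:

```python
# Sketch of the hole/overlap checks (PAGLIXHOL, PAGLIXOVL) described
# above. Each line index entry is modeled as an (offset, length) pair;
# this layout is an illustrative assumption.
def check_contiguity(entries):
    """Return a list of (kind, gap_bytes) findings between adjacent
    storage records, where kind is 'hole' or 'overlap'."""
    findings = []
    ordered = sorted(entries)               # order records by offset
    for (off_a, len_a), (off_b, _) in zip(ordered, ordered[1:]):
        gap = off_b - (off_a + len_a)
        if gap > 0:
            findings.append(('hole', gap))      # space claimed by neither
        elif gap < 0:
            findings.append(('overlap', -gap))  # space claimed by both
    return findings
```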
40.1007 – PAGLIXSML
area <str>, page <num> line index entry <num>, length too small expected at least <num>, found: <num> Explanation: The line index entry specifies a storage record length that is smaller than the length of one database id number. Storage records must be at least this long. Further verification of the page, including segment and set occurrence chain verification, is abandoned. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1008 – PAGLUBAD
area <str>, page <num> has free space (locked+unlocked) greater than expected expected no more than <num>, found: <num> Explanation: The maximum available space on a given database page is (page length - length of page header - length of one line index entry - length of line index count). The total free space on the page, indicated by the sum of the locked and unlocked free space, is greater than this amount. Further verification of the page, including segment and set occurrence chain verification, is abandoned. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1009 – PAGPAGRAN
area <str>, page <num> page number out of range expected: <num>, found: <num> Explanation: The page number on the database page is not within the range of valid page numbers for the storage area stated in the storage schema. Verification of the page continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1010 – PAGPAGSEQ
area <str>, page <num> page number out of sequence expected: <num>, found: <num> Explanation: The page number on the database page is not +1 greater than the preceding page number. Verification of the page continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1011 – PAGSYSREC
area <str>, page <num> system record contains an invalid database ID expected: <num> (hex), found: <num> (hex) Explanation: This verification is performed here only if OPT=PAGES is specified. Otherwise, it is performed during segment verification. The SYSTEM record id was not found in the system record on this page. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1012 – PAGTADINV
area <str>, page <num> contains incorrect time stamp expected between <time> and <time>, found: <time> Explanation: The time stamp on the page specifies a time later than the time that the verification began. Such a time is incorrect. Verification of the page continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1013 – PAGTADZER
area <str>, page <num> contains zero time stamp Explanation: The time stamp on the page is zero, that is, 17-NOV-1858 00:00:00.00. Verification of the page continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
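The zero time stamp decodes to 17-NOV-1858 because OpenVMS represents time as 100-nanosecond ticks counted from that base date, so a zeroed quadword yields the epoch itself. A minimal conversion sketch:

```python
# A VMS 64-bit time is a count of 100-nanosecond ticks since the
# OpenVMS base date, 17-NOV-1858 00:00:00.00.
from datetime import datetime, timedelta

VMS_EPOCH = datetime(1858, 11, 17)

def vms_time_to_datetime(ticks):
    """Convert a VMS 64-bit time (100 ns units) to a datetime."""
    return VMS_EPOCH + timedelta(microseconds=ticks / 10)
```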
40.1014 – PAGUFSODD
area <str>, page <num> has odd unlocked free space found: <num> bytes Explanation: Free space should be word-aligned. The page header indicates byte-aligned unlocked free space. Further verification of the page, including segment and set occurrence chain verification, is abandoned. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1015 – PARTDTXNERR
error when trying to participate in a distributed transaction Explanation: The process was unable to participate in the DECdtm distributed transaction, because of a DECdtm error. This error is returned in the secondary error message. User Action: Look at the secondary error message, make the necessary correction, and try the operation again.
40.1016 – PARTLVFY
continuing partial verification Explanation: It was not possible to access some system relations. The database is probably corrupt. As much of the requested verification will be performed as is possible. User Action: You may need to restore the database.
40.1017 – PARTNDCNT
error getting the count of partitioned indices Explanation: It was not possible to get the count of partitioned indices, probably because the RDB$INDICES system relation is corrupted. User Action: Rebuild the indexes.
40.1018 – PBCKMISSA
Executor count is greater than number of storage areas. Explanation: The number of executors specified by the Executor_Count qualifier is greater than the number of storage areas included in the backup operation. The unused executors are ignored. User Action: Decrease the number of executors.
40.1019 – PBCKMISTD
Executor count is greater than number of tape drives or disk directories. Explanation: The number of executors specified with the Executor_Count qualifier is greater than the number of tape drives or disk directories listed with the backup file parameter for a parallel backup command. For this error, a tape master and its slave are considered one tape drive. The unused executors are ignored. User Action: Decrease the number of executors or add more tape drives or disk directories to the operation.
40.1020 – PBCKXTRND
Executor count is less than number of nodes specified. Explanation: The number of executors specified with the Executor_Count qualifier is less than the number of nodes listed with the Node qualifier for a parallel backup command. The extra node names specified are ignored. User Action: Increase the number of executors or reduce the number of nodes in the node list.
40.1021 – PCNTZERO
line <num> has no pointer clusters Explanation: You issued an RMU ALTER DISPLAY or DEPOSIT LINE m CLUSTER n command for a record with no clusters. User Action: This is not allowed. Clusters cannot be displayed or deposited into if the storage record has no clusters.
40.1022 – PGSPAMENT
area <str>, page <num> the fullness value for this data page does not match the threshold value in the space management page expected: <num>, computed: <num> Explanation: The data page's percentage fullness falls outside of the range of percentage fullness represented by the data page's space management page code. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again. Alternatively, the RMU Repair command may be used with the SPAM qualifier to correct this problem. Please note that the RMU Repair command should not be used unless you have a backup copy or an exported copy of the database. Because the RMU Repair command does not write its transactions to an after-image journal file, a repaired database cannot be rolled forward in a recovery process. Therefore, Oracle Corporation recommends that a full and complete database backup be performed immediately after using the RMU Repair command.
40.1023 – PGSPMCLST
area <str>, page <num> the <num>% fullness value for this data page does not fall within the <num>-<num>% range found on the space management page Explanation: The data page's percentage fullness of a mixed area falls outside of the range of percentage fullness represented by the data page's space management page code. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again. Alternatively, the RMU Repair command may be used with the SPAM qualifier to correct this problem. Please note that the RMU Repair command should not be used unless you have a backup copy or an exported copy of the database. Because the RMU Repair command does not write its transactions to an after-image journal file, a repaired database cannot be rolled forward in a recovery process. Therefore, Oracle Corporation recommends that a full and complete database backup be performed immediately after using the RMU Repair command.
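The fullness-to-threshold-code comparison described by PGSPAMENT and PGSPMCLST can be sketched as follows; the three threshold percentages are illustrative (actual values come from the storage area definition, for example via the /THRESHOLDS qualifier):

```python
# Sketch of the SPAM fullness check (PGSPAMENT/PGSPMCLST). The three
# threshold percentages below are illustrative assumptions.
THRESHOLDS = (65, 75, 80)   # assumed area thresholds, in percent

def spam_code(fullness_pct):
    """Map a page's fullness percentage to a SPAM threshold code 0-3."""
    code = 0
    for t in THRESHOLDS:
        if fullness_pct > t:
            code += 1
    return code

def spam_entry_consistent(fullness_pct, stored_code):
    """The verifier reports an error when the stored SPAM code does not
    match the code computed from the page's actual fullness."""
    return spam_code(fullness_pct) == stored_code
```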
40.1024 – PLNBADMAS
Executor <str> cannot be a coordinator executor. Explanation: The Coordinator keyword was specified for an executor other than the first executor. User Action: Remove the Coordinator keyword for the named executor or move the named executor to the first entry of the executor list.
40.1025 – PLNBADSA
Storage area <str> does not exist in the database. Explanation: The named storage area was specified for backup, but it does not exist in the database. User Action: Remove the storage area from the plan file.
40.1026 – PLNDRVMAS
Drive <str> is the first drive in the list and must be a master. Explanation: The first drive in a list of drives is not a master drive. User Action: Either specify the Master qualifier to the named drive or make another drive the first drive in the list of tape drives.
40.1027 – PLNDUPSA
Storage area <str> is listed multiple times in the plan. Explanation: The named storage area is listed multiple times in the plan file. User Action: Make sure the storage area appears only once in the plan file.
40.1028 – PLNDUPTAP
Drive <str> is listed multiple times in the plan. Explanation: A tape drive has been listed in multiple places in the plan file. User Action: Make sure the named drive appears only once among all of the tape drives specified in the tape drive lists.
40.1029 – PLNLABELS
Labels cannot be specified for the plan and for the executors. Explanation: Tape labels were specified for the entire backup operation and for each tape drive. A plan can specify either labels for the entire backup operation, or labels for the tape drives, but not both. User Action: Remove either the label list for the operation or the label lists for the tape drives.
40.1030 – PLNMASDIR
First executor in backup plan cannot be assigned disk directories. Explanation: The coordinator executor in the backup plan file does not write to disk. Therefore, it should not be assigned any disk directories. User Action: Remove the disk directory list from the definition of the coordinator executor.
40.1031 – PLNMASNOD
First executor in backup plan cannot be assigned to a node. Explanation: The first executor in a backup plan always runs on the node where the plan file is being read and executed. User Action: Remove the node name from the first executor.
40.1032 – PLNMASSA
First executor in backup plan cannot be assigned areas. Explanation: The coordinator executor in a backup plan does not back up any areas. Therefore, it should not be assigned any storage areas. User Action: Remove the storage area list from the definition of the coordinator executor.
40.1033 – PLNMASSLV
First executor in backup plan cannot be a worker executor. Explanation: The first executor in a backup plan must be the coordinator executor. It is followed by worker executors only. User Action: Move the coordinator executor to the beginning of the list of executors.
40.1034 – PLNMASTD
First executor in backup plan cannot be assigned tape drives. Explanation: The coordinator executor in the backup plan file does not write to tape. Therefore, it should not be assigned any tape drives. User Action: Remove the tape drive list from the definition of the coordinator executor.
40.1035 – PLNMISEXE
No executors listed in backup plan. Explanation: A list of executors is missing from the backup plan file. User Action: Use the List_Plan and Parallel qualifiers with the RMU Backup command to generate a plan file.
40.1036 – PLNMISMAS
First executor in backup plan is not the coordinator executor. Explanation: The first executor of a backup plan must always be the coordinator executor. The subsequent executors must all be worker executors. User Action: Move the declaration of the coordinator executor to the beginning of the list of executors.
40.1037 – PLNMISRBF
No backup file specified in backup plan. Explanation: The backup plan file name is missing from the plan file. User Action: Add the name of the backup file to the plan file.
40.1038 – PLNMISRT
No database specified in backup plan. Explanation: The database root file is not specified in the plan file. User Action: Add the name of the database to the plan file.
40.1039 – PLNMISSLV
Executor <str> should be a worker executor. Explanation: The worker keyword must be used for all listed executors except the first. User Action: Add the worker keyword to the named executor or make the named executor the coordinator executor by moving it to the first entry in the list.
40.1040 – PLNNOSLV
There are no worker executors in the plan. Explanation: The plan has no worker executors. User Action: Add worker executors to the plan file.
40.1041 – PLNSAMIS
Executor <str> has no storage areas assigned to it. Explanation: The named executor requires a list of storage areas to back up. User Action: Add a list of storage areas to the named executor.
40.1042 – PLNSLVDIR
Executor <str> has no disk directories assigned. Explanation: The named executor requires a list of disk directories. User Action: Add a list of disk directories to the named executor.
40.1043 – PLNSLVLAB
Labels must be assigned to all tape drives or none. Explanation: Some of the tape drives have been assigned a list of labels while other drives have not. User Action: Either assign labels to all drives or to none of the drives.
40.1044 – PLNSLVTD
Executor <str> has no tape drives assigned. Explanation: The named executor requires a list of tape drives. User Action: Add a list of tape drives to the named executor.
40.1045 – PLNSYNTAX
Syntax error in plan file <str>'<str>'<str>. Explanation: An option in the plan file has a syntax error. User Action: Edit the file and fix the indicated option.
40.1046 – PLNTDMORS
Drive <str> must be either a master or slave. Explanation: A tape drive has been given both the master and the slave attribute or has not been given either attribute. User Action: Make sure the tape drive has either the master or slave attribute.
40.1047 – PLNTOOLONG
Option in plan file exceeds 1024 characters in length. Explanation: An option line in the plan file exceeds the maximum length. User Action: Edit the plan file to fix the problem.
40.1048 – POSERROR
error positioning <str> Explanation: The tape device rejected an attempt to set tape characteristics or to position the tape. User Action: Correct the device or media problem, and reissue the command.
40.1049 – POSITERR
error positioning <str> Explanation: The tape device rejected an attempt to set tape characteristics or to position the tape. User Action: Correct the device or media problem, and reissue the command.
40.1050 – POSSCALE
positive scale not possible in SQL - <str>.<str> Explanation: SQL does not allow a positive scale. For example, DATATYPE SIGNED LONGWORD SCALE 2 cannot be represented in SQL. User Action: You may decide to alter the definition so that it can be represented in SQL or use the Language=RDO qualifier of the RMU Extract command to extract the definition.
40.1051 – PREMEOF
premature end of file encountered in <str> Explanation: A premature end-of-file was encountered while reading the specified file.
40.1052 – PREMEOFOPT
Unexpected end of file on restore options file Explanation: The last line of the options file was a continuation line. User Action: Edit the options file to fix the problem.
40.1053 – PREMEOFPLN
Unexpected end of file on plan file. Explanation: The last line of the plan file was a continuation line. User Action: Edit the plan file to fix the problem.
40.1054 – PREM_EOF
Unexpected end of file on record definition file Explanation: The last line of the file was a continuation line. User Action: Edit the file to fix the problem.
40.1055 – PREVACL
Restoring the root ACL over a pre-existing ACL. This is a normal condition if you are using the CDO utility. Explanation: The creation of the database also created an ACL, either by propagation from a previous version of the root or by propagation of default ACEs from the directory ACL. This ACL may supersede the access rights being restored from the backup file, and it may be necessary to use the RMU Set Privilege command to establish the desired access rights for the database. User Action: Use the RMU Show Privilege command to examine the restored access rights. If they are not what you want, correct them by using the RMU Set Privilege command. In extreme cases, OpenVMS privilege may be required to correct the problem. If the problem is caused by default ACEs in a directory ACL, you may consider altering them or propagating RMU access to these ACEs.
40.1056 – PROCPLNFIL
Processing plan file <str>. Explanation: The named plan file is being processed. This message is informational. User Action: None.
40.1057 – PROTOCOL
Network error: Protocol error. Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.1058 – PSEGDBKEY
Pointer segment is at logical dbkey <num>:<num>:<num>. Explanation: This message is used when dumping the segmented string context after a possible corruption is found. It reports the logical dbkey of the pointer segment currently being verified. User Action: This message is informational. No action is required.
40.1059 – QIOXFRLEN
data transfer length error - expected <num>, actual <num> Explanation: The expected data-transfer length was not equal to the actual data-transfer length. User Action: This is usually caused by a hardware problem.
40.1060 – QUIETPT
waiting for database quiet point at <time> Explanation: The user is waiting for the quiet lock in order to force a database quiet point. User Action: None.
40.1061 – QUIETPTREL
released database quiet point at <time> Explanation: The database quiet point lock has been released. User Action: None.
40.1062 – RCLMAREA
Reclaiming area <str>
40.1063 – RCSABORTED
record cache server process terminated abnormally Explanation: A detached record cache server process failed abnormally. User Action: Examine the database monitor log file and any SYS$SYSTEM:*RCSBUG.DMP bugcheck dump files for more information.
40.1064 – RCSMANYNODES
database node count exceeds record cache maximum of "1" Explanation: The record cache feature can only be used when after-image journaling is enabled, the "Fast Commit" feature is enabled, and the maximum node count is set to "1". User Action: Alter the database to set the maximum database node count to "1".
40.1065 – RCSRQSTFAIL
request to Record Cache Server failed Explanation: The user submitted a request to the RCS process that failed either during the submission process or, for synchronous requests, possibly during the execution of the request. User Action: Examine the secondary message(s), the database monitor log file (SYS$SYSTEM:*MON.LOG), any RCS log file in the root file's directory, or any SYS$SYSTEM:*RCSBUG.DMP bugcheck dump files for more information.
40.1066 – RDBSYSTEMREQ
The <str> <str> <str> storage area(s) must also be specified for the restore. Explanation: This error can occur during an incremental by-area restore operation when the RDB$SYSTEM and/or the default and/or the list segment storage areas are not included in the list of storage areas to be restored. If incremental changes have occurred to any of these three storage areas and those areas are not restored, tables in the database affected by these changes will appear to be corrupted. User Action: Rerun the restore operation including the requested storage areas in the list of storage areas to restore.
40.1067 – RDYBTRNOD
ready needed for B-tree node at <num>:<num>:<num> Explanation: An attempt to ready the logical area corresponding to a B-tree index node failed. This could be because of an invalid logical area in a dbkey in the index. User Action: Check if there are conflicting users of the database. If so, verify this portion of the database when there are no conflicting users. Rebuild the index if it is corrupt.
40.1068 – RDYDUPBTR
ready needed for duplicate B-tree node at <num>:<num>:<num> Explanation: An attempt to ready the logical area corresponding to a duplicate B-tree index node failed. This could be because of an invalid logical area in a dbkey in the index. User Action: Check if there are conflicting users of the database. If so, verify this portion of the database when there are no conflicting users. Rebuild the index if it is corrupt.
40.1069 – RDYHSHBKT
ready needed for hash bucket at <num>:<num>:<num> Explanation: An attempt to ready the logical area corresponding to a hash bucket failed. This could be because of an invalid logical area in a dbkey in the system record. User Action: Check if there are conflicting users of the database. If so, verify this portion of the database when there are no conflicting users. Rebuild the index if it is corrupt.
40.1070 – RDYHSHDAT
Area <str> Could not ready logical area for data records of hashed index <str> Explanation: The logical area for the data records of the given hashed index could not be readied. This could be the result of an error, such as the specification of an invalid logical area identifier or of another user accessing the logical area in a conflicting access mode. User Action: Check if there are conflicting users of the database. If so, verify this portion of the database when there are no conflicting users. Otherwise, try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1071 – RDYSEGSTR
ready needed for segmented string at <num>:<num>:<num> Explanation: An attempt to ready the logical area corresponding to a segmented string failed. This could be because of an invalid logical area in a dbkey in data record or because of a lock conflict with other users in the database. User Action: Check if there are conflicting users of the database. If so, verify this portion of the database when there are no conflicting users. If there are no conflicting users of the database, correct the error with the RMU Restore command or the SQL IMPORT statement, and verify the database again.
40.1072 – READACLER
Error reading ACL for <str> Explanation: An error occurred when the ACL was accessed for the database root file. The secondary error message gives the reason for the failure. User Action: Correct the problem and try again.
40.1073 – READBLOCK
error reading block <num> of <str> Explanation: A media error was detected during an attempt to read from the backup file. User Action: None.
40.1074 – READERR
error reading <str> Explanation: A media error was detected while reading the backup file. User Action: None.
40.1075 – READERRS
excessive error rate reading <str> Explanation: An excessively large number of read errors was encountered on this tape volume. User Action: Check for media and/or drive maintenance problems.
40.1076 – READYDATA
ready needed for data record at <num>:<num>:<num> Explanation: An attempt to ready a logical area corresponding to a data record failed. This could be because of an invalid logical area in a dbkey in the index. User Action: Check if there are conflicting users of the database. If so, verify this portion of the database when there are no conflicting users. If the index is corrupt, rebuild it.
40.1077 – READYDSEG
Could not ready logical area <num> for a data segment. Explanation: The logical area for a segmented string's data segment could not be readied. This could be the result of an error, such as the specification of an invalid logical area identifier, or of another user accessing the logical area in a conflicting access mode. See accompanying messages for the segmented string context at the time of the error. User Action: Check if there are conflicting users of the database. If so, verify this portion of the database when there are no conflicting users. Otherwise, try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1078 – READYPSEG
Could not ready logical area <num> for a pointer segment. Explanation: The logical area for a segmented string's pointer segment could not be readied. This could be the result of an error, such as the specification of an invalid logical area identifier, or of another user accessing the logical area in a conflicting access mode. See accompanying messages for the segmented string context at the time of the error. User Action: Check if there are conflicting users of the database. If so, verify this portion of the database when there are no conflicting users. Otherwise, try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1079 – READ_ONLY
read-only area <str> must be readied in RETRIEVAL mode only Explanation: A read-only area can be readied in RETRIEVAL mode only. User Action: Ready this area for retrieval or make the area read-write.
40.1080 – REBLDSPAM
Space management (SPAM) pages should be rebuilt for logical area <str>, logical area id <num> Explanation: The SPAM pages for the indicated logical area should be rebuilt since the logical area thresholds and/or record length have been modified. User Action: Use RMU/SET AIP /REBUILD_SPAM to rebuild the SPAM pages for all logical areas that have been changed. Alternately, explicitly name the mentioned logical area id or name on the RMU/SET AIP command to rebuild just that area.
40.1081 – REBUILDSPAMS
SPAM pages should be rebuilt for logical area <str> Explanation: Modifications have been made to the logical area parameters that may have made the SPAM thresholds inaccurate. The RMU utility should be used to rebuild the SPAM pages for the logical area.
40.1082 – RECBADVER
Invalid row version found at logical dbkey <num>:<num>:<num>. Expected a non-zero version not greater than <num>, found <num>. Explanation: Each change to a table definition creates a new version for the table. Whenever a row is added to a table, the current version number is stored as part of the row. This message indicates that the row stored at the specified dbkey has a version that is larger than the maximum version number for the table. User Action: Restore and recover the page from backup.
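The version rule stated above (a stored row version must be non-zero and no greater than the table's current version) amounts to a simple range check:

```python
# Sketch of the RECBADVER validity rule described above.
def row_version_valid(found, max_version):
    """A stored row version must be non-zero and must not exceed the
    table's current maximum version number."""
    return 0 < found <= max_version
```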
40.1083 – RECDEFSYN
Syntax error in record definition file <str>'<str>'<str> Explanation: A line from the file has a syntax error. User Action: Edit the file to fix the indicated line.
40.1084 – RECFAILED
fatal, unexpected roll-forward error detected at AIJ record <num> Explanation: A fatal, unexpected error was detected by the database management system during the roll forward of an AIJ file. This typically is caused by a corrupt AIJ file or by applying an AIJ file out of sequence. User Action: Contact your Oracle support representative for assistance. Note that the indicated AIJ record number can be used to quickly locate the offending information in the AIJ journal using the appropriate DUMP/AFTER_JOURNAL/START=XXX command; it is recommended that when dumping the AIJ file, you use a starting record number that is several records prior to the indicated record, because the actual cause of the problem may be in preceding AIJ records.
40.1085 – RECLASTTSN
last successfully processed transaction was TSN <num>:<num> Explanation: A fatal, unexpected error was detected by the database management system during the roll forward of an AIJ file. This message indicates the "transaction sequence number" of the last transaction successfully processed by the AIJ roll-forward utility. User Action: Contact your Oracle support representative for assistance. Information concerning the identified transaction TSN can be obtained by dumping the AIJ journal, using the DUMP/AFTER_JOURNAL command.
40.1086 – RECSMLVRP
Primary vertical partition at <num>:<num>:<num> is too small. Explanation: The specified record is not big enough to hold the VRP information that the primary segment of a VRP record needs to hold. This information contains the dbkeys of the other vertical partitions for the record. User Action: Restore and recover the page of the primary dbkey from backup.
40.1087 – RECUNTIL
work-around: roll forward AIJ using /UNTIL="<time>" qualifier Explanation: A fatal, unexpected error was detected by the database management system during the roll forward of an AIJ file. However, one or more transactions were successfully rolled forward up to the date indicated in the message. Using the /UNTIL qualifier on the roll-forward command produces a database that is transaction consistent up to the indicated date. User Action: Issue the AIJ roll-forward command using the indicated /UNTIL qualifier.
40.1088 – RECVERDIF
Record at dbkey <num>:<num>:<num> in table "<str>" version <num> does not match current version Explanation: The specified record in the after-image journal was found with a record version number that is not the same as the current highest version of the record in the database metadata. This may be caused by changes to the table definition while the LogMiner is running or during the span of the after-image journal that is being unloaded. User Action: The LogMiner does not handle different versions of table metadata during an unload operation. The correct sequence of events when using the LogMiner and making table metadata changes is to unload all of the AIJ files, then shut down the LogMiner (if needed), make the metadata changes, and then restart unloading from the after-image journal. This sequence ensures that the LogMiner always processes records that are of the current metadata version.
40.1089 – REFSYSREL
table <str> references a system relation field <str> and can not be exported Explanation: A table cannot be exported when it references a system relation field. User Action: Remove the unsupported usage and try again, or compensate for the warning by editing the output file from the RMU Extract command. The preferred action is to remove the unsupported usage, even if you decide not to use the RMU Extract command.
40.1090 – RELMAXIDBAD
ROLLING BACK CONVERSION - Relation ID exceeds maximum <num> for system table <str>. Explanation: New relation IDs cannot be assigned to system tables because the maximum database relation ID value has been exceeded. User Action: Please contact your Oracle support representative.
40.1091 – RELNOTFND
Relation (<str>) not found Explanation: The specified relation or view was not found. User Action: Correct the relation or view specification.
40.1092 – REQCANCELED
request canceled Explanation: The executing request was canceled. This can occur if a query limit was specified and exceeded, or the request was canceled by an external source such as the RMU or DBO /SHOW STATISTICS utility.
40.1093 – RESINCOMP
Not all storage areas have been restored, the database may be corrupt. Explanation: One or more storage areas could not be restored. User Action: You may be able to attach to the restored database and access some of your storage areas. Make sure you have specified all tape save sets or directories and all .rbf files that were specified in the backup command. Use RMU/VERIFY to find any missing or incomplete storage areas, then repeat the restore command, again including all tape save sets or directories and .rbf files from the backup command.
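For example, assuming the database is named MF_PERSONNEL, missing or incomplete storage areas can be located with:
$ RMU/VERIFY MF_PERSONNEL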
40.1094 – RESTART
restarted recovery after ignoring <num> committed transaction(s) Explanation: The specified number of committed transactions did not apply to this database root. All subsequent transactions were applied. User Action: None.
40.1095 – RMUNOTSUC
RMU command finished with exit status of <num>. Explanation: An RMU command that was executing in a subprocess exited with an exit status indicating non-success.
40.1096 – RMUSIG
RMU command terminated with signal number <num>. Explanation: An RMU command that was executing in a subprocess was terminated with the specified signal number.
40.1097 – ROOCKSBAD
root "<str>", contains an invalid checksum expected: <num>, found: <num> Explanation: The checksum on the root file is incorrect. Verification of the root continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1098 – ROOERRORS
<num> error(s) encountered in root verification Explanation: Errors found while verifying files associated with the root, as well as the root file itself. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1099 – ROOMAJVER
root major version is incompatible with the software version Explanation: Your database was created with an incompatible version of the software. User Action: Your database cannot be used with the version of the software you have installed on your machine.
40.1100 – ROOTADINV
root "<str>", contains incorrect time stamp expected between <time> and <time>, found: <time> Explanation: The time stamp on the root file specifies a time later than the current time or earlier than the time at which the database could have possibly been created. Such a time is incorrect. Verification of the root continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1101 – ROOTADZER
root "<str>", contains zero time stamp Explanation: The time stamp on the root file is zero, that is, 17-NOV-1858 00:00:00.00. Verification of the root continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1102 – ROOTMAJVER
database format <num>.<num> is not compatible with software version <num>.<num> Explanation: Your database was created with an incompatible version of the software. User Action: Your database cannot be used with the version of the software you have installed on your machine.
40.1103 – ROOT_CORRUPT
database has been corrupted and must be restored from backup
Explanation: The database has been corrupted and must be
restored from a full database backup.
User Action: Restore the database from the latest full database
backup, apply any incremental backups that might exist, and
roll-forward ("recover") the corresponding after-image journal.
40.1104 – ROOVERLAP
root block <num> is multiply allocated to data structures "<str>" and "<str>" Explanation: A block in the database root file is assigned to more than one root file data structure. User Action: Restore the database from backups and recover it from the journals.
40.1105 – RRDFILRQD
The File keyword is required with the Record_Definition qualifier. Explanation: The Record_Definition qualifier must include the File option and a file specification. This file receives the record definition for the unloaded data. User Action: Reissue the command, specifying a file to receive the record definition with the File option of the Record_Definition qualifier.
40.1106 – RRDFPTHRQD
Either the File or Path option must be specified with the Record_Definition qualifier. Explanation: The Record_Definition qualifier must include either the File or Path option, but not both. The file (or path) contains the description of the data to be loaded. User Action: Reissue the command using either the File option to specify the file name or the Path option to specify the repository path name.
40.1107 – RTAIJMSMTCH
AIJ references root file "<str>" - expected "<str>"
40.1108 – RTNERR
Call to routine <str> failed Explanation: A call to the routine failed with the additional error message.
40.1109 – RUJDEVDIR
RUJ filename "<str>" does not include a device/directory Explanation: The RUJ filename you specified did not include a device and directory. User Action: For maximum protection, you should always include a device and directory in the file specification, preferably one that is different from the database device.
40.1110 – RUJTOOBIG
RUJ file size may not exceed 8,000,000 disk blocks Explanation: The transaction attempted to extend the RUJ file size beyond 8 million disk blocks. This transaction will be rolled back (note that due to the size of the RUJ file, the rollback operation may take a very long time). User Action: Reduce the number of records being modified by the transaction; commit more often or use a BATCH UPDATE transaction.
40.1111 – SAMEAREANAME
storage area name <str> is already in use Explanation: The storage area name that is being created already exists. An attempt has been made to create another storage area with the same name. User Action: Use a different storage area name to avoid the conflict.
40.1112 – SAMROOTMATCH
identical root file "<str>" specified Explanation: The specified master and replicated database root file names are identical; this is not allowed. User Action: Specify the root file name of a replicated database that was created from the backup of the master database.
40.1113 – SCRIPTFAIL
Network error: Error executing script file. Explanation: The executor encountered an error executing the specified script file. User Action: Examine the secondary message or messages. Correct the error and try again.
40.1114 – SCRNOTFOUND
specified screen could not be found Explanation: The specified screen name was not found in the SHOW STATS utility. User Action: Check the spelling, or use the menu-based screen selection option of the Notepad facility.
40.1115 – SEGNOTPRM
<str>, page <num>, line <num> Primary segment not found for segmented string. Explanation: The segment identified in the error message should be the primary segment of a segmented string, but a secondary segment was found. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1116 – SEGNULEND
Finished search with <num> fields changed to NULL. Explanation: The RMU Repair command with the Worm_Segments qualifier has finished looking for missing segmented strings.
40.1117 – SEGRECDBK
Data record is at logical dbkey <num>:<num>:<num>. Explanation: This message is used when dumping segmented string context after a possible corruption is found. It reports the logical dbkey of the data record whose segmented strings are currently being verified. User Action: This message is informational. No action is required.
40.1118 – SEGSETNU2
Dbkey of parent record is <num>:<num>:<num>. Explanation: This is the dbkey of the parent record of a segmented string that has been set to NULL.
40.1119 – SEGSETNUL
Segmented string at <num>:<num>:<num> cannot be accessed. Explanation: The segmented string with the specified dbkey cannot be accessed and will be set to null.
40.1120 – SEGSHORT
Segmented string fragment is too short. Explanation: The primary fragment of a chained segmented string is too short to contain the pointer to the next segment in the chain. A segment of a segmented string will never be stored in a fragment that small. Subsequent messages will give the dbkey of the segment. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement, and verify the database again.
40.1121 – SEGSTRDBK
Segmented string is at logical dbkey <num>:<num>:<num>. Explanation: This message is used when dumping segmented string context after a possible corruption is found. It reports the logical dbkey of the first segment of the segmented string currently being verified. User Action: This message is informational. No action is required.
40.1122 – SEQINVAL
client sequence id <num> does not have a valid value Explanation: The next value stored in the root file for the indicated sequence is either greater than the sequence's maximum value or less than its minimum value.
40.1123 – SEQTBLFUL
sequence table is full Explanation: An attempt was made to create a sequence but no room remains in the Rdb root file for further sequence definitions. Initially only 32 sequences can be created in a new or converted database. User Action: Use the ALTER DATABASE statement to increase the size of the sequences table with the RESERVE ... SEQUENCES clause. Note that the value entered will be rounded to the next highest multiple of 32 so that a full page in the root file is used.
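For example, assuming a database named MF_PERSONNEL, the following statement reserves room for 64 sequences (the value is rounded to the next highest multiple of 32):
SQL> ALTER DATABASE FILENAME MF_PERSONNEL RESERVE 64 SEQUENCES;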
40.1124 – SETSOCKOPTERR
Network error: Error on setsockopt call. Explanation: An error was encountered on the Digital UNIX setsockopt system service call. User Action: Contact your Oracle support representative for assistance.
40.1125 – SETWIDTH
error setting width of terminal Explanation: An error occurred while setting the terminal width. User Action: Examine the secondary message for more information.
40.1126 – SEVERRDET
a severe error was detected Explanation: A severe error was detected by RMU and displayed during the execution of an RMU statement. User Action: If possible, run the statement(s) again using RMU, and read any additional RMU error messages to determine what caused the severe error condition. Then fix the error.
40.1127 – SFDBMOV
A single-file database requires the "ROOT=file" qualifier Explanation: The RDB$SYSTEM area of a single-file database cannot be moved independently of the root file. User Action: Either use the OpenVMS COPY command to move a single-file database, or specify the Root qualifier with the RMU Move_Area command.
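For example, assuming the single-file database MF_PERSONNEL is being moved to DISK2:[DB] (the device and directory names are illustrative only), the command would be similar to:
$ RMU/MOVE_AREA MF_PERSONNEL/ROOT=DISK2:[DB]MF_PERSONNEL.RDB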
40.1128 – SHORTBLOCK
backup file block too short Explanation: Media or device error on backup file resulted in an incomplete read of a block of data. User Action: None.
40.1129 – SIP
transaction is a snapshot transaction Explanation: You have already started a transaction that is a snapshot transaction. User Action: Use READY BATCH RETRIEVAL to ready the area for the snapshot transaction or use COMMIT to terminate the snapshot transaction.
40.1130 – SKIPLAVFY
logical area <str> not verified Explanation: An error occurred that prevents verification of the logical area. User Action: Correct the error with the RMU Restore command, and verify the database again.
40.1131 – SNAPFULL
snapshot area too full for operation Explanation: You attempted to store a record in the database while there was an active reader, and the snapshot area in which the snapshot copy of the record would go is too full. User Action: Modify the snapshot area extend parameter to allow snapshot area extension.
40.1132 – SNPAREALV
The snapshot area for live area <str> (FILID entry <num>) is a live area. Explanation: The FILID entry for each live area contains a pointer to the FILID entry for the snapshot area associated with the live area. The snapshot area associated with the named live area is not a snapshot area. User Action: Restore and recover the database from backup.
40.1133 – SNPBADARE
Snapshot page for area <str>, page <num> maps incorrect storage area. Expected: <num>, found: <num> Explanation: The storage area ID number on the snapshot page is not the storage area ID number of the corresponding live area. Verification of the page continues. User Action: Correct the error with the RMU Repair command with the Initialize=Snapshots qualifier.
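For example, assuming the database is named MF_PERSONNEL, the snapshot pages can be reinitialized with:
$ RMU/REPAIR/INITIALIZE=SNAPSHOTS MF_PERSONNEL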
40.1134 – SNPCKSBAD
page for snapshot area <str>, page <num> contains an invalid checksum. Expected: <num>, found: <num>. Explanation: The checksum on the snapshot page is incorrect. Verification of the page continues. User Action: Correct the error with the RMU Set Corrupt_Pages command or the RMU Repair command with the Initialize=Snapshots qualifier.
40.1135 – SNPCOUPLD
the COUPLED flag is set indicating that this is the live area of a single file database, but the SNAP flag is also set for a SNAPSHOT, contradicting the previous flag Explanation: The COUPLED flag is set indicating that this is the live area of a single-file database, but the SNAP flag is also set for a SNAPSHOT, contradicting the previous flag. Verification of the FILID continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1136 – SNPFILERS
<num> error(s) validating SNAP file Explanation: Errors found while validating a SNAP file. User Action: Restore a backup of your database.
40.1137 – SNPPAGRAN
Snapshot page for area <str>, page <num> page number out of range. Expected: <num>, found: <num> Explanation: The page number on the snapshot page is not within the range of valid page numbers for the storage area as stated in the storage schema. Verification of the page continues. User Action: Correct the error with the RMU Repair command with the Initialize=Snapshots qualifier.
40.1138 – SNPPAGSEQ
Snapshot page for area <str>, page <num> page number out of sequence. Expected: <num>, found: <num> Explanation: The page number on the snapshot page is not one greater than the preceding page number. Verification of the page continues. User Action: Correct the error with the RMU Repair command with the Initialize=Snapshots qualifier.
40.1139 – SNPSHTWRM
<str> is a snapshot area and cannot be a WORM area. Explanation: The SNAPS flag is set indicating that this is a snapshot area, but the WORM flag is set; this setting contradicts the previous flag, because snapshot areas cannot be WORM areas. Verification of the FILID continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement, and verify the database again.
40.1140 – SNPTADINV
Snapshot page for area <str>, page <num> contains incorrect time stamp. Expected between <time> and <time>, found: <time> Explanation: The time stamp on the snapshot page specifies a time later than the time that the verification began. Such a time is incorrect. Verification of the page continues. User Action: Correct the error with the RMU Repair command with the Initialize=Snapshots qualifier.
40.1141 – SNPTADZER
snapshot page for area <str>, page <num> contains zero time stamp Explanation: The time stamp on the page is zero, that is, 17-NOV-1858 00:00:00.00. Verification of the page continues. User Action: Correct the error with the RMU Repair command with the Initialize=Snapshots qualifier.
40.1142 – SOCKETERR
Network error: Socket error. Explanation: An error was encountered on the Digital UNIX socket system service call. User Action: Contact your Oracle support representative for assistance.
40.1143 – SORTOPERR
a VMS SORT/MERGE operation was unsuccessful Explanation: A VMS SORT/MERGE operation completed unsuccessfully. See the secondary message for information about what operation failed. User Action: Fix the VMS SORT/MERGE problem, and try again.
40.1144 – SPAMFRELN
area <str>, page <num> error in space management page's free space length expected: <num>, found: <num> Explanation: The space management page's free space count is different than what it should be. Because no data is stored on this page, the free space count should remain constant. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1145 – SPAMLOKLN
area <str>, page <num> error in space management page's locked free space length expected: <num>, found: <num> Explanation: The space management page's locked free space count is different than what it should be. Because no data is stored on this page the locked free space count should remain constant at zero. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1146 – SPAMNOTDIS
cannot disable SPAMs for uniform area <str> Explanation: Only mixed-format areas can have SPAMs enabled/disabled. User Action: Do not enable/disable SPAMs for uniform-format areas.
40.1147 – SPAMNOTRDONLY
cannot enable SPAMs for READ_ONLY area <str> Explanation: Read-only areas cannot be modified to have SPAMs enabled, because this involves rebuilding the SPAM pages. User Action: Change the area to be read write.
40.1148 – SPAMNOTWRM
cannot enable SPAMs for WORM area <str> Explanation: WORM areas cannot have SPAMs enabled. User Action: Do not enable SPAMs for WORM areas.
40.1149 – SPAMNZERO
area <str>, page <num> unused space on this space management page is not empty Explanation: After the space management page's array of data page entries, the rest of the page should be empty (zero). User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1150 – SPMERRORS
<num> error(s) verifying SPAM pages for <str> storage area Explanation: Errors found while verifying SPAM pages. User Action: Restore a backup of your database.
40.1151 – SRVRSHUTDOWN
Server process is being shut down Explanation: The Server process is currently being shut down, but has not yet terminated. User Action: Issue the server startup command later.
40.1152 – SSNOTINROOT
<str> is not in the root file Explanation: The DDCB you specified is not in the root file. You can see which DDCBs a root file has by issuing the DBO/DUMP command. User Action: Add the DDCB to your root file (DBO/MODIFY), or check your bind sequence and try again.
40.1153 – STALL
asynchronous operation requires a stall Explanation: The operation has not completed yet. User Action: Check the event flag and I/O status block for final completion, and contact your Oracle support representative for assistance.
40.1154 – STAREAFUL
storage area <str> is full Explanation: You attempted to store a record in the database, but the storage area in which the record would go is full. This condition can be caused by the storage area being set to disallow being extended or when the ABM pages are at their limit and cannot map a new extension for the storage area. User Action: Modify the storage area extend parameter to allow storage area extension if it is disabled, or increase the page size or redistribute objects across other storage areas to free up existing space.
40.1155 – STATBUFSML
Statistics API: Output buffer too small. Explanation: A tag-length-value set is too large for the output buffer for a statistics API request. User Action: Allocate a larger output buffer for returned data.
40.1156 – STATNODEACTV
node is already actively collecting statistics Explanation: A node was specified that is already actively collecting statistics for this SHOW STATISTIC utility session. User Action: Make sure the node name is spelled correctly and has been properly identified.
40.1157 – STATNODEUNKN
node is not actively collecting statistics Explanation: A node was specified that is NOT actively collecting statistics for this SHOW STATISTIC utility session. User Action: Make sure the node name is spelled correctly and has been properly identified.
40.1158 – STATNOMATCH
no logical area names match specified wildcard pattern
Explanation: No logical areas (tables, indexes, etc) match the
specified wildcard pattern. Possibly the wildcard characters
("*" and/or "%") were not specified, which results in an "exact
match" pattern.
User Action: Use a different wildcard pattern. Remember to use
the "*" for "zero or more" and "%" for "exactly one".
40.1159 – STBYDBINUSE
standby database cannot be exclusively accessed for replication Explanation: There are one or more application processes or database servers accessing the standby database. User Action: Make sure there are no active application processes or database servers accessing the standby database, on any node of the cluster.
40.1160 – STTREMCNCT
error allocating remote statistics network connection Explanation: None.
40.1161 – STTSVRFIND
error identifying remote statistics server Explanation: None.
40.1162 – SWOPNNOTCOMP
database is open on another node with incompatible software Explanation: Incompatible Rdb software exists in this OpenVMS Galaxy system and is attempting to open a database in a Galaxy shared environment. Identical versions of Rdb are required in order to access a database from multiple nodes in an OpenVMS Galaxy environment.
40.1163 – SYNTAX
syntax error near "<str>" Explanation: A syntax error was detected in the command input stream. User Action: Correct the error and try again.
40.1164 – SYNTAXEOS
syntax error at end of statement Explanation: A syntax error was detected in the command input stream. User Action: Correct the error and try again.
40.1165 – SYSRDONLY
write access is not allowed if RDB$SYSTEM is read-only Explanation: BATCH UPDATE and EXCLUSIVE UPDATE access do not update the snapshot files. The fact that snapshots are not being maintained is recorded in the RDB$SYSTEM area. Hence, RDB$SYSTEM may not be READ_ONLY. User Action: Use another UPDATE access mode, or change RDB$SYSTEM to be READ WRITE.
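For example, assuming a database named MF_PERSONNEL, the area can be changed back to read/write with a statement similar to the following (the exact clause may vary with your Oracle Rdb version):
SQL> ALTER DATABASE FILENAME MF_PERSONNEL
cont>   ALTER STORAGE AREA RDB$SYSTEM READ WRITE;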
40.1166 – SYSRECCPC
Area <str>, page <num>, line <num> pointer cluster <num> in this system record is corrupt. Explanation: When expanding the hash bucket and system record pointers in the given pointer cluster, an attempt was made to fetch data beyond the pointer cluster structure. The system record is probably corrupt. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1167 – SYSRECDYN
Area <str>, page <num>, line <num> Pointer cluster <num> not found in this system record. Explanation: Expected to find a pointer cluster structure, but did not. The system record is probably corrupt. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1168 – SYSRECHKY
Area <str>, page <num>, line <num> inconsistent hash index DBIDs in pointer cluster <num>. Hash index DBID in system record is <num> (hex). Hash index DBID in hash bucket dbkey is <num> (hex). Explanation: The ID that is stored in a pointer cluster should be the same as the logical area ID of the index, because the pointers in the cluster point to hash index nodes. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1169 – SYSRECID
area <str>, page <num>, line <num> non-system storage record type on line 0 expected: <num>, found: <num> Explanation: Line 0 of every database page must contain a SYSTEM record. This page did not. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1170 – SYSRECNPC
Area <str>, page <num>, line <num> number of hash buckets field in system record does not match the number of hash buckets verified. Expected: <num>, found: <num>. Explanation: The number of hash buckets field in the system record was not consistent with the number of hash buckets found and verified. The system record is probably corrupt. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1171 – SYSRECOKY
Area <str>, page <num>, line <num> expanded owner dbkey in PCL <num> is not the system record dbkey. Expanded owner logical dbkey is <num>:<num>:<num>. System record logical dbkey is <num>:<num>:<num>. Explanation: The system record dbkey expanded from the given pointer cluster should be that of the system record currently being verified. The system record is probably corrupt. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1172 – SYSRECPCL
Area <str>, page <num>, line <num> the length, <num>, of PCL <num> in this system record is greater than expected. Explanation: The length of the given pointer cluster structure is greater than expected. The system record is probably corrupt. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1173 – SYSRECPPL
Area <str>, page <num>, line <num> this system record's pointer portion length field is corrupt. Expected: <num>, found: <num>. Explanation: The number of bytes of pointer cluster structures field in the system record is not consistent with the actual number of bytes found. This system record is probably corrupt. User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1174 – SYSRECZER
Area <str>, page <num>, line <num> no PCLs found within this system record. Explanation: No pointer cluster structures were found in the system record, but the system record is not empty. The system record is probably corrupt (it should be empty). User Action: Try correcting the error with the RMU Restore command or the SQL IMPORT statement. Follow up with another verification of the database.
40.1175 – TABLMORE1
Only one table can be specified for a column group. Explanation: If a column group is specified, only the table for that column group can be specified. User Action: Correct the error and try again.
40.1176 – TABNOTFND
Table <str> is not found in this database. Explanation: The specified table does not exist in the database. User Action: Correct the table name and try again.
40.1177 – TADMISMATCH
journal is for database version <time>, not <time> Explanation: The version time and date stamp in the root does not match the version time and date stamp in the journal file. This journal cannot be applied to this database. User Action: Use the correct journal file or backup file.
40.1178 – TAPEACCEPT
Tape label <str> preserved on <str> at <time> Explanation: The ACCEPT_LABELS qualifier was specified and a tape volume is being written in such a way as to preserve its label.
40.1179 – TAPEFULL
<str> is full Explanation: Tape is positioned at the End Of Tape. A new backup file cannot be written on this tape. User Action: Use another tape or reinitialize this one.
40.1180 – TAPEQUAL
Qualifier only valid for tape devices "<str>" Explanation: A qualifier that is valid only for tape devices is being used with a device that is not a tape, or with a tape that is not mounted with the DCL /FOREIGN qualifier. User Action: Do not use this qualifier, or make sure that the tape is mounted with the DCL /FOREIGN qualifier before executing the RMU command.
40.1181 – TBLSPCTWC
Table "<str>" specified more than once
40.1182 – TERMINATE
database recovery failed -- access to database denied by monitor Explanation: To maintain the integrity of the database, the monitor forced your image exit because an unrecoverable error has been detected. User Action: Look for a file named SYS$SYSTEM:*DBRBUG.DMP. This is a DataBase Recovery process bugcheck dump. Search the file for a string of five asterisks (*****) using the SEARCH/WINDOW command. You will see a line with a format similar to this: ***** Exception at <address> : <database module name> + <offset> %facility-severity-text, <error text> The exception line will be followed by one or more additional errors that will help you to determine what caused the recovery process to fail. Possible causes include: low quotas, missing Recovery-Unit Journal (RUJ) files, or filename logicals misdefined or undefined. Depending on the cause of the problem, take the appropriate action. If you are uncertain of what to do, contact your Oracle support representative for assistance.
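For example, the exception summary in the bugcheck dump can be located with a command similar to:
$ SEARCH/WINDOW SYS$SYSTEM:*DBRBUG.DMP "*****"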
40.1183 – TIMEOUT
timeout on <str> Explanation: A lock request has been canceled by the database management system because the request could not be granted within the user-specified timeout period. User Action: Execute a ROLLBACK or a COMMIT to release your locks, and try the transaction again.
40.1184 – TOKTOOBIG
token "<str>" is too long Explanation: A token that is longer than the allowed number of characters was detected in the command input stream. User Action: Correct the error and try again.
40.1185 – TOMANYWRITERS
The specified number of output parameters exceeds the number of storage areas. Explanation: The number of output parameters specified exceeds the number of storage areas in the database to be backed up. This would cause backup files to be written that contained no data. User Action: Re-enter the backup command, specifying a number of output parameters that does not exceed the number of storage areas to be backed up.
40.1186 – TOOFEWENT
Interior b-tree node at level <num> at logical dbkey <num>:<num>:<num> has 0 entries. Explanation: B-tree nodes above level one must have a fanout of at least 1. The specified b-tree node has 0 entries.
40.1187 – TOOMANYEXECS
<num> executors were requested, but only <num> executors will be used. Explanation: For a parallel load operation, the user requested more executors than there are storage areas for the table. Only one executor per storage area is used. User Action: Do not specify more executors than there are storage areas within the table.
40.1188 – TOOMANYVALS
too many option values specified Explanation: You specified too many values for an option. User Action: Correct the error and try again.
40.1189 – TRAN_IN_PROG
transaction in progress Explanation: You attempted an operation that is allowed only when you have no transaction in progress. User Action: Complete your transaction by executing a COMMIT or ROLLBACK.
40.1190 – TRMTYPUNK
terminal type is unknown or not supported Explanation: The terminal type is either unknown or unsupported. User Action: Use a supported terminal type. If the terminal type is unknown to the operating system, a SET TERMINAL/INQUIRE command may help.
40.1191 – TRUE
condition value is true Explanation: None.
40.1192 – TRUNCATION
data conversion truncation error Explanation: You attempted an operation that would cause loss of information on a data-item movement. User Action: Correct the error and try again.
40.1193 – TSNLSSMIN
value (<num>:<num>) is less than minimum allowed value (<num>:<num>) for <str>. Explanation: The TSN value you specified for the named option is too small. User Action: Use a value that is greater than the minimum value and try again.
40.1194 – TSNMISMATCH
cannot synchronize database due to transaction commit mismatch Explanation: Attempting to synchronize the master and replicated databases failed because the last commit transaction sequence numbers in the database do not match exactly. User Action: Restart the database replication operation.
40.1195 – TSNSALLAREA
/INIT_TSNS can only be specified for all storage areas Explanation: RMU/REPAIR/INITIALIZE=TSNS cannot be specified with a specific list of storage areas since TSNs must be initialized on pages in all areas of the database. User Action: Respecify the command for all storage areas (the default).
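For example, a corrected command omits the storage area list so that all areas are processed; the database name shown here is the illustrative MF_PERSONNEL sample:
$ RMU/REPAIR/INITIALIZE=TSNS MF_PERSONNEL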
40.1196 – TYPENOTTABLE
Logical area <num>, <str>, is not of type table - /LENGTH cannot be used Explanation: The /LENGTH action with no value cannot be performed on a logical area that is not of type TABLE. User Action: Correct the error and try again.
40.1197 – UNCORTAIL
Pages <num>-<num> in area <str> have an uncorrectable logical area assignment Explanation: The affected pages should all be assigned to the same logical area, but they are not. User Action: You can correct the problem by performing either an RMU RESTORE operation or an SQL IMPORT operation on the database. The problem can also be manually corrected with the RMU ALTER utility.
40.1198 – UNDERFLOW
data conversion underflow Explanation: You attempted an operation that would cause loss of information on a data-item movement. User Action: Correct the error and try again.
40.1199 – UNEXPAPIERR
Statistics API: Unexpected error. Explanation: An unexpected error has occurred during a call to the statistics API. User Action: Check that the error is not due to bad data being passed to the statistics API and that calls to the statistics API are being used correctly. If the problem is not in the client application, contact your Oracle support representative for assistance.
40.1200 – UNEXPDELIM
Unexpected delimiter encountered (<str>) in row <num> of input Explanation: A column of data has an unexpected delimiter embedded within it. User Action: Correct the input file, and reissue the command.
40.1201 – UNEXPEOL
Unexpected end of line encountered in row <num>. Explanation: A line of input does not have enough data. User Action: Correct the input file and reissue the command.
40.1202 – UNEXPEXECTERM
Unexpected termination by executor <str> (exit code = <num>.) Explanation: An executor process unexpectedly terminated prior to the completion of the parallel operation. User Action: Look at the last error message issued by the executor to determine the cause of the termination.
40.1203 – UNFREEZE
database freeze over Explanation: A database freeze or a cluster failover was in effect, but is no longer. User Action: None.
40.1204 – UNIFORMBLOCKS
BLOCKS PER PAGE cannot be changed for uniform storage area <str> Explanation: The number of BLOCKS PER PAGE can only be changed for mixed storage areas and this storage area is a uniform storage area. User Action: None - the BLOCKS PER PAGE setting will be changed for the mixed storage areas but not for the uniform storage areas.
40.1205 – UNKCSETID
Unknown character set id <num> Explanation: The RDB$FIELD_SUB_TYPE contained a value that is not supported by Oracle Rdb. User Action: No action is required.
40.1206 – UNKCSNAME
unknown collating sequence <str> Explanation: The collating sequence referenced by the field could not be fully translated. User Action: No action is required.
40.1207 – UNKDTYPE
unknown data type <num> Explanation: The RDB$FIELD_TYPE contained a value which is not supported by Oracle Rdb. User Action: No action is required.
40.1208 – UNKEXTRTN
Unexpected error trying to verify external routine <str>. Explanation: An unexpected error occurred while trying to verify the specified external routine. A second error message is displayed indicating the type of error that occurred. User Action: Delete and redefine the external routine, and try the verify operation again. If the error persists, contact your Oracle support representative for assistance.
40.1209 – UNKHASHTYPE
Unexpected HASH ALGORITHM value of <num> for index <str> - ignored Explanation: The HASH ALGORITHM value specified for this hashed index is not supported for the RDO language or is illegal for the SQL language. User Action: Specify the Language=SQL qualifier with the RMU Extract command if this error occurs when the Language=RDO qualifier is specified with the RMU Extract command. If this error occurs when the Language=SQL qualifier is specified, then contact your Oracle support representative for assistance.
40.1210 – UNKIDXFLG
unknown value (<num>) for RDB$FLAGS in <str> Explanation: The value of RDB$FLAGS is not within the expected range. User Action: No action is required.
40.1211 – UNKIDXVAL
unknown value (<num>) for RDB$UNIQUE_FLAG in <str> Explanation: During an attempt to decode RDB$UNIQUE_FLAG, an unknown value was detected. User Action: No action is required.
40.1212 – UNKLANGUAGE
External routine <str> references an unknown language. Explanation: An unsupported external language has been specified for this external routine. User Action: Redefine the routine, specifying a valid external language.
40.1213 – UNKN_ABS
unknown AIJ backup server process -- ABS image not invoked by database monitor Explanation: An attempt to bind to the database was made by an after-image backup server process (ABS) that was not created by the database monitor; this would happen if the ABS image was invoked from DCL by the user instead of the monitor. User Action: DO NOT attempt to execute the ABS image from DCL. If so configured, the database monitor will automatically invoke the after-image backup server process to perform after-image journal backup operations.
40.1214 – UNKN_ALS
unknown AIJ Log Server -- ALS image not invoked by database monitor Explanation: An attempt to bind to the database was made by an AIJ Log Server process (ALS) that was not created by the database monitor; this would happen if the ALS image was invoked from DCL by the user instead of the monitor. User Action: DO NOT attempt to execute the ALS image from DCL. If so configured, the database monitor will automatically invoke the after-image logging server process to perform database journaling activities.
40.1215 – UNKN_DBR
unknown database recovery process -- DBR image not invoked by database monitor Explanation: An attempt to bind to the database was made by a database recovery process (DBR) that was not created by the database monitor; this would happen if the DBR image was invoked from DCL by the user instead of the monitor. User Action: DO NOT attempt to execute the DBR image from DCL. The database monitor will invoke the database recovery process to perform database recovery.
40.1216 – UNKN_LCS
unknown AIJ Log Catch-Up Server -- image not invoked by database monitor Explanation: An attempt to bind to the database was made by an AIJ Log Catch-Up Server process that was not created by the database monitor; this would happen if the server image was invoked from DCL by the user instead of the monitor. User Action: DO NOT attempt to execute the server image from DCL. Use the appropriate startup syntax to invoke the server image.
40.1217 – UNKN_LRS
unknown AIJ Log Roll-Forward Server -- image not invoked by database monitor Explanation: An attempt to bind to the database was made by an AIJ Log Roll-Forward Server process that was not created by the database monitor; this would happen if the server image was invoked from DCL by the user instead of the monitor. User Action: DO NOT attempt to execute the server image from DCL. Use the appropriate startup syntax to invoke the server image.
40.1218 – UNKN_RCS
unknown Record Cache Server -- RCS image not invoked by database monitor Explanation: An attempt to bind to the database was made by a Record Cache Server process (RCS) that was not created by the database monitor; this would happen if the RCS image was invoked from DCL by the user instead of the monitor. User Action: DO NOT attempt to execute the RCS image from DCL. If so configured, the database monitor will automatically invoke the Record Cache Server process to perform database journaling activities.
40.1219 – UNKPERCFILL
Unexpected PERCENT FILL value of <num> for index <str> - ignored Explanation: The PERCENT FILL value is a percentage, but this value falls outside the range 0 to 100. User Action: Contact your Oracle support representative for assistance.
40.1220 – UNKSUBTYPE
unknown segmented string sub-type <num> Explanation: This sub-type is user-defined. User Action: No action is required.
40.1221 – UNLAIJCB
Unloading table <str> to <str><str>
40.1222 – UNLAIJFL
Unloading table <str> to <str>
40.1223 – UNLFORMAT
Incompatible organization in the input file Explanation: This file either was not created by the RMU Unload command, or it has been corrupted. User Action: Create another UNL file or recover this one from a backup.
40.1224 – UNLTEMPTAB
Data cannot be unloaded from a temporary table. Explanation: A temporary table cannot be specified for the RMU Unload command. User Action: Check that the table is not defined in the database as a global or local temporary table. The table must be defined as a non-temporary table to be able to unload the table's data.
40.1225 – UNRECDBSERR
Network error: Unrecognized DBS error. Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.1226 – UNSARITH
expression includes unsupported arithmetic operation Explanation: The Boolean evaluator was processing an expression or sub-expression that contained an arithmetic operator. Arithmetic operators are not supported. User Action: Rewrite the expression in error without the arithmetic operator.
40.1227 – UNSBIGINT
BIGINT not supported in ANSI SQL - converted to DECIMAL(18) for <str>.<str> Explanation: ANSI SQL does not support a BIGINT (SIGNED QUADWORD) data type, so this definition cannot be represented in ANSI SQL. User Action: You may decide to alter the definition so that it can be represented in ANSI SQL or use another language to extract the definition.
40.1228 – UNSCASE
CASE expression not supported in RDO - ignored for <str>.<str> Explanation: RDO does not support the CASE expression, so this definition cannot be represented in RDO. User Action: You may decide to alter the definition so that it can be represented in RDO or use the Language=SQL qualifier of the RMU Extract command to extract the definition.
40.1229 – UNSCOMP
unsupported data comparison Explanation: You attempted an operation that would compare two incommensurate data items. User Action: Correct the error and try again.
40.1230 – UNSCOMPUTEDBY
COMPUTED BY not supported in ANSI SQL - ignored for <str>.<str> Explanation: This definition cannot be represented in ANSI SQL, because ANSI SQL does not support the COMPUTED BY function. User Action: You may decide to alter the definition so that it can be represented in ANSI SQL or use another language to extract the definition.
40.1231 – UNSCONV
unsupported data conversion Explanation: You attempted an operation that would cause loss of information on a data-item movement. User Action: Correct the error and try again.
40.1232 – UNSDEF2VALUE
DEFAULT clause not supported in RDO - ignored for <str>.<str> Explanation: RDO does not support the DEFAULT clause, so this definition cannot be represented in RDO. User Action: You may decide to alter the definition so that it can be represented in RDO or use the Language=SQL qualifier of the RMU Extract command to extract the definition.
40.1233 – UNSFLDUSE
Unsupported usage of global field <str> in relation <str> Explanation: System global fields should not be used in user-defined relations. User Action: No action is required. However, it would be prudent to correct this and to avoid using system global fields in user-defined relations in the future.
40.1234 – UNSFUNCT
External routines are not supported in RDO. Explanation: RDO does not support external routines, so external routines cannot be represented in RDO. User Action: Use the Language=SQL qualifier of the RMU Extract command to extract the definition.
40.1235 – UNSMISSVALUE
MISSING_VALUE clause not supported in SQL - ignored for <str>.<str> Explanation: SQL does not support the MISSING VALUE clause, so this definition cannot be represented in SQL. User Action: You may decide to alter the definition so that it can be represented in SQL or use the Language=RDO qualifier of the RMU Extract command to extract the definition.
40.1236 – UNSMODULE
Stored procedures are not supported in RDO. Explanation: RDO does not support stored procedures, so stored procedures cannot be represented in RDO. User Action: Use the Language=SQL qualifier of the RMU Extract command to extract the definition.
40.1237 – UNSPOSITION
POSITION function not available in RDO - ignored for <str>.<str> Explanation: This definition cannot be represented in RDO, because RDO does not support the POSITION function. User Action: You may decide to alter the definition so that it can be represented in RDO or use the Language=SQL qualifier of the RMU Extract command to extract the definition.
40.1238 – UNSSCALE
numeric scale not supported in ANSI SQL - ignored for <str>.<str> Explanation: ANSI SQL does not support numeric scale for SMALLINT or INTEGER, so this definition cannot be represented in ANSI SQL. User Action: You may decide to alter the definition so that it can be represented in ANSI SQL or use another language to extract the definition.
40.1239 – UNSSUBSTR
SUBSTRING function not available in RDO - ignored for <str>.<str> Explanation: This definition cannot be represented in RDO, because RDO does not support the SUBSTRING function. User Action: You may decide to alter the definition so that it can be represented in RDO or use the Language=SQL qualifier of the RMU Extract command to extract the definition.
40.1240 – UNSSUPDAT
Unsupported data type: <num> Explanation: A column in this table uses an unsupported data type. User Action: Convert the column to a supported data type.
40.1241 – UNSTINYINT
TINYINT not supported in ANSI SQL - converted to SMALLINT for <str>.<str> Explanation: ANSI SQL does not support a TINYINT (SIGNED BYTE) data type, so this definition cannot be represented in ANSI SQL. User Action: You may decide to alter the definition so that it can be represented in ANSI SQL or use another language to extract the definition.
40.1242 – UNSTRANS
TRANSLATE function not available in RDO - ignored for <str>.<str> Explanation: This definition cannot be represented in RDO, because RDO does not support the TRANSLATE function. User Action: You may decide to alter the definition so that it can be represented in RDO or use the Language=SQL qualifier of the RMU Extract command to extract the definition.
40.1243 – UNSTRIM
TRIM function not available in RDO - ignored for <str>.<str> Explanation: This definition cannot be represented in RDO, because RDO does not support the TRIM function. User Action: Alter the definition so that it can be represented in RDO or use the Language=SQL qualifier of the RMU Extract command to extract the definition.
40.1244 – UNSVALIDIF
VALID IF clause not supported in SQL - ignored for <str>.<str> Explanation: SQL does not support the VALID IF clause, so this definition cannot be represented in SQL. User Action: You may decide to alter the definition so that it can be represented in SQL or use the Language=RDO qualifier of the RMU Extract command to extract the definition.
40.1245 – UNSVARCHAR
VARCHAR not supported in ANSI SQL - converted to CHAR for <str>.<str> Explanation: ANSI SQL does not support a VARCHAR data type, so this definition cannot be represented in ANSI SQL. VARCHAR has been converted to CHAR of the same size. User Action: You may decide to alter the definition so that it can be represented in ANSI SQL or use another language to extract the definition.
40.1246 – UNSWITHOPTION
WITH CHECK OPTION not available in RDO - ignored for <str> Explanation: RDO does not support WITH CHECK OPTION, so this constraint cannot be represented in RDO. User Action: You may decide to alter the definition so that it can be represented in RDO or use the Language=SQL qualifier of the RMU Extract command to extract the definition.
40.1247 – UNTILTAD
recovery /UNTIL date and time is "<time>" Explanation: The specified date and time are being used for the after-image journal roll-forward operation.
40.1248 – UNTSTR
unterminated character string <str> Explanation: An unterminated character string has been detected in the command input stream. Character strings must be wholly contained on one command line and must be enclosed in quotes ("). User Action: Correct the error and try again.
40.1249 – UPDACLERR
Error updating the ACL for <str> Explanation: The attempt to update the ACL failed. The reason for the failure is given in the secondary error message. User Action: Correct the source of the failure, and then update the ACL to the desired content.
40.1250 – UPDPRECRD
Updating sorted index prefix cardinalities as estimated values. Explanation: The RMU Analyze Cardinality command estimates index prefix cardinalities when actual sorted index cardinalities are updated. It then updates them in the RDB$INDEX_SEGMENTS system table so that index prefix cardinalities will remain up-to-date.
40.1251 – UPDSINFULABRT
Updates have been performed since the last FULL RESTORE. Aborting the INCREMENTAL RESTORE to prevent possible database corruption. Explanation: In batch mode, where the user is not prompted for permission to abort, an incremental restore is aborted if updates have been performed since the last full restore, because those updates could conflict with the incremental restore and corrupt the database. User Action: Either perform the incremental restore using the /NOCONFIRM qualifier, skip the incremental restore to avoid possible database corruption, or restore the database from the last full backup, immediately perform the incremental restore, and then redo any database changes that are not in the incremental restore.
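For example, the recommended recovery sequence might resemble the following, where the backup file names are illustrative:
$ RMU/RESTORE MF_PERSONNEL_FULL.RBF
$ RMU/RESTORE/INCREMENTAL MF_PERSONNEL_INC.RBF
Any database changes not captured in the incremental backup must then be redone.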
40.1252 – UPDSINFULWARN
Updates have been performed since the last FULL RESTORE. Consider verifying your database for possible database corruption. Explanation: Updates have been performed since the last full RESTORE. These updates may cause database corruption due to conflicts with the incremental restore. User Action: Immediately verify the database if there is a possibility that the updates made since the last FULL RESTORE conflict with the changes made by the incremental restore.
40.1253 – USERCORRUPT
This database has been corrupted by bypassing the recovery process Explanation: RMU ALTER was used to enable access to this database without performing the required recovery procedure. The database may contain structural and logical inconsistencies. User Action: Restore this database from a backup file created before the corruption occurred.
40.1254 – USERECCOM
Use the RMU Recover command. The journals are not available. Explanation: This form of the RMU Restore command will not preserve the journal configuration. Consequently, it cannot recover from those journals. User Action: Use the RMU Recover command to perform the recovery from journal backups.
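For example, a recovery from an after-image journal might be performed as follows (the AIJ file name is illustrative):
$ RMU/RECOVER MF_PERSONNEL.AIJ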
40.1255 – USEUNOPTAIJ
please use original unoptimized AIJ file for this recovery operation Explanation: The requested recovery operation is not compatible with an optimized AIJ file. See the accompanying message for the cause of the incompatibility. User Action: Use the original, non-optimized AIJ file to do the recovery.
40.1256 – VALANDSTAR
value(s) and * are not allowed Explanation: A specified qualifier does not take a value. User Action: Correct the error and try again.
40.1257 – VALGTRMAX
value (<num>) is greater than maximum allowed value (<num>) for <str> Explanation: The value you specified for the named qualifier is too large. User Action: Use a value that is less than the maximum value and try again.
40.1258 – VALLSSMIN
value (<num>) is less than minimum allowed value (<num>) for <str> Explanation: The value you specified for the named qualifier is too small. User Action: Use a value that is greater than the minimum value and try again.
40.1259 – VERB_COMMIT
constraint <str> in table <str> will be evaluated at commit time Explanation: There is no conversion for 'check on update' to SQL. The constraint will be evaluated at commit time. User Action: None.
40.1260 – VERMISMATCH
Client version is incompatible with server. Explanation: Client application is not compatible with server software version. User Action: Install a version of the client application that is compatible with the server software version.
40.1261 – VFYALTIND
index is from the database's previous version metadata. Explanation: The index currently being verified is part of the database's previous version metadata. See the accompanying message for the name of the index. This message should only appear when verifying a database converted using the RMU Convert command with the Nocommit qualifier.
40.1262 – VFYALTREL
logical area is from the database's previous version metadata. Explanation: The logical area currently being verified is part of the database's previous version metadata. See the accompanying message for the name of the logical area. This message should only appear when verifying a database converted using the RMU Convert command with the Nocommit qualifier.
40.1263 – VIEWNOVER
views cannot be verified Explanation: You have attempted to verify a view. User Action: Specify a table name.
40.1264 – VRPBADCOLS
Invalid column list for vertical partition <num> for table <str>. Explanation: The column list for the vertical record partition had an unrecognizable format.
40.1265 – VRPDBKMIS
No reference for vertical partition <num> found in primary segment in dbkey <num>:<num>:<num>. Explanation: The primary partition has an array of dbkeys for every vertical partition of the record. Each dbkey is tagged with the number of the partition it represents. This error indicates that no dbkeys are tagged with the specified partition number. User Action: Restore and recover the page of the primary dbkey from backup.
40.1266 – VRPDUPDEF
Duplicate default vertical partitions for table <str>. Explanation: More than one default vertical record partition (a partition with a NULL column list) was found for the specified table.
40.1267 – VRPPRISEG
Primary segment for vertical partition is at dbkey <num>:<num>:<num>. Explanation: Identifies the primary VRP segment for several messages.
40.1268 – WAITIDLEBUF
wait attempted on idle buffer Explanation: This indicates an internal logic error. User Action: Contact your Oracle support representative for assistance.
40.1269 – WAITOFF
Waiting for offline access to <str> Explanation: The database is either opened or otherwise in use, and the offline operation cannot proceed until this access is terminated. If the database remains inaccessible for two minutes, the RMU operation will be terminated. User Action: Determine if the database is opened (possibly on another cluster node) or being accessed by another RMU user, and then free up the database or reissue the RMU command when the database is offline.
40.1270 – WAITUNFRZ
waiting for unfreeze to ready logical area <num> Explanation: A database freeze or a cluster failover is (or was) in effect. An attempt will be made to wait for the appropriate locks. This situation was discovered because an attempt to ready the specified logical area without waiting failed. User Action: This message could indicate that the verification process is waiting on a logical area lock that some other process is holding. You can determine this by looking at the stall statistics. If this is the case, you can decide to abort the verification process or the process holding the conflicting lock, or let the verification wait until the other user holding the logical area lock completes its transaction.
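For example, the stall statistics mentioned above can be examined with the RMU Show Statistics command (the database name is illustrative):
$ RMU/SHOW STATISTICS MF_PERSONNEL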
40.1271 – WASBOOL
expression in CONTAINS or MATCHES was a Boolean Explanation: The Boolean evaluator was processing an expression or sub-expression of the form "A CONTAINS B" or "A MATCHES B". Either the "A" or the "B" expression was a Boolean of the form "NOT X", "X EQ Y", "X NE Y", "X LT Y", "X GT Y", "X LE Y", "X GE Y", "X CONTAINS Y", or "X MATCHES Y". Neither side of CONTAINS and MATCHES expressions are allowed to be Boolean expressions. User Action: Rewrite the expression to have the proper format.
40.1272 – WIDTHRANGE
WIDTH was specified with a value outside the range 60..512 Explanation: The Option=Width:N qualifier allows values in the range 60 to 512. User Action: Reexecute the command with a value in the allowed range.
40.1273 – WORMCANT
Qualifier not valid for WORM devices "<str>". Explanation: A qualifier was specified that cannot be used for WORM devices. User Action: Do not use this qualifier, or make sure the device is not a WORM device.
40.1274 – WRITEBLOCK
error writing block <num> of <str> Explanation: Media or device error detected while writing the backup file. User Action: Consider repeating the operation with different media and/or different tape drives.
40.1275 – WRITEERR
error writing <str> Explanation: Media error was detected while writing the backup file. User Action: None.
40.1276 – WRITERRS
excessive error rate writing <str> Explanation: An excessively large number of write errors were encountered on this tape volume. User Action: Check for media or drive maintenance problems.
40.1277 – WRMBADEOF
WORM storage area <str> has a bad logical end-of-file. It should be greater than or equal to the last initialized page. Logical end-of-file is <num>. Last initialized page is <num>. Explanation: A WORM storage area was found to have its last initialized page greater than its logical end-of-file. This can never be true. The last initialized page must be less than or equal to the logical end-of-file. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement, and verify the database again.
40.1278 – WRMCANTREAD
error reading WORM pages <num>:<num>-<num> Explanation: An error occurred when you attempted to read one or more WORM pages. The message indicates the storage area ID number and the page numbers of the first and last pages being read. User Action: Examine the associated messages to determine the reason for failure. One possible cause for this error is disabling logging for the WORM area and subsequently restoring that area from an earlier backup.
40.1279 – WRMDEVFUL
WORM device full for area <str> Explanation: The area is marked as "WORM device full," because a previous attempt to extend this WORM area failed. User Action: Consider adding more areas to the storage map or moving the WORM area to a higher capacity WORM device.
40.1280 – WRMFLDST
Searching field <str> in relation <str>. Explanation: The RMU Repair command with the Worm_Segments qualifier is beginning to look in the specified field in the specified relation for segmented strings that are missing.
40.1281 – WRMISUNIF
<str> is a WORM storage area and cannot be a uniform-format area. Explanation: The WORM flag is set indicating that this is a WORM storage area but the MIXED_FMT_FLG flag is not set; this setting contradicts the previous flag, because WORM areas must be mixed-format. Verification of the FILID continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1282 – WRMNOTMIX
cannot add WORM attribute to uniform format area <str> Explanation: WORM areas should be mixed-format areas only. User Action: Examine your command line for illegal combinations.
40.1283 – WRMNOTRWR
<str> is a WORM storage area and cannot be READ-ONLY. Explanation: The WORM flag is set indicating that this is a WORM storage area, but the READ_ONLY flag is set; this setting contradicts the previous flag, because WORM areas cannot be READ-ONLY. Verification of the FILID continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1284 – WRMRDONLY
cannot add WORM attribute to READ_ONLY area <str> Explanation: WORM areas cannot also have the read-only property. User Action: Examine your command line for illegal combinations.
40.1285 – WRMRELEND
Finished search for segmented strings in relation <str>. Explanation: The RMU Repair command with the Worm_Segments qualifier has finished the search for missing segmented string fields in the specified relation.
40.1286 – WRMRELST
Starting search for segmented strings in relation <str>. Explanation: The RMU Repair command with the Worm_Segments qualifier is beginning to look in the specified relation for segmented strings that are missing.
40.1287 – WRMSPMENA
<str> is a WORM storage area and cannot have SPAMs enabled. Explanation: The WORM flag is set indicating that this is a WORM storage area, but the SPAMS_DISABLED flag is not set, contradicting the previous flag, because WORM areas must have SPAMS disabled. Verification of the FILID continues. User Action: Correct the error with the RMU Restore command or the SQL IMPORT statement and verify the database again.
40.1288 – WRNGDBBTYP
<str> <str> Explanation: The specified backup file is not of the correct type.
40.1289 – WRNGDBROOT
This backup file cannot be used with the specified database Explanation: When a by-area restore operation or an incremental restore operation is performed on an existing database, the database must either be the original database that was backed up, or it must be recreated from a full and complete backup of the original database. User Action: Perform the operation again, specifying the correct database.
40.1290 – WRNGINCAREA
wrong incremental backup. Last full backup of <str> was at <time>, not <time> Explanation: This incremental backup file was created against a full backup of the area that is different from the one that restored the area. User Action: Either this incremental backup file is out of date, or you must restore the full backup that this incremental backup was generated against.
40.1291 – WRNGINCBCK
wrong incremental backup. Last full backup was at <time>, not <time> Explanation: This incremental backup file was created against a different full backup of the database than the one that restored the database. User Action: Either this incremental backup file is out of date or you must restore the full backup that this incremental backup was generated against.
40.1292 – WRONGPLANTYPE
The specified plan type is unsupported for this operation. Explanation: A plan was supplied that was inappropriate for the current operation. (For example, a backup plan was supplied for a load operation). User Action: Modify the plan type to match the current operation.
40.1293 – WRONGVOL
<str> is not the next volume in the set Explanation: The wrong relative volume was mounted on the drive. User Action: The volumes must be mounted in the order requested.
40.1294 – XFERDEF
A Replication Option for Rdb transfer is defined. Requested operation is prohibited. Explanation: A Replication Option for Rdb transfer is currently defined. The requested operation would cause this transfer to fail; consequently, the operation is prohibited. User Action: Delete any transfers before repeating the request.
40.1295 – XIDNOTFOUND
specified XID could not be found in the database Explanation: The specified XID was not found in the database. Either the XID was never in the unresolved state, or it has since been resolved with the resolution that had been supplied earlier. User Action: Check the appropriate transaction manager log for more information.
40.1296 – XPR_STACK_OVFL
expression forces too many levels of recursion/stack overflow Explanation: You provided an expression which forces too many levels of recursion, which resulted in stack overflow. User Action: The expression should be rewritten to use parentheses and therefore cause fewer levels of recursion. It may also be possible to increase the size of the stack.
40.1297 – XVERREST
Cross version RESTORE is not possible for by-area or incremental functions Explanation: A by-area restore operation is not permitted from a backup created under an earlier software version; this function requires recovery from the after-image journal file, which is not possible across versions. An incremental restore operation is not permitted from a backup created under an earlier software version; the full restore operation that preceded this request performed an implicit convert operation, which updates the database and invalidates the incremental restore operation. User Action: If this operation cannot be avoided, the recovery must be performed under the version under which the backup was created. That usually requires a full restore operation, and possibly an incremental restore operation and a recover operation. Updates made under the current software version will be lost.
41 – COSI_ERRORS
41.1 – ABKEYW
ambiguous qualifier or keyword - supply more characters Explanation: The keyword or qualifier name was abbreviated to too few characters to make it unique. User Action: Reenter the command. Specify at least four characters of the keyword or qualifier name.
41.2 – ABNEXIT
process has exited abnormally Explanation: The process being examined has exited with an abnormal exit status. User Action: Check to see why the process exited.
41.3 – ABORT
abort Explanation: A database procedure has unexpectedly aborted or returned an error in an unexpected way. User Action: Retry the operation. If the error persists, contact your Oracle support representative for assistance.
41.4 – ABSENT
entity or value absent from command string Explanation: This message indicates that the entity or value was not specified on the command line. User Action: No action is required.
41.5 – ABSTIMREQ
absolute time required - delta time supplied Explanation: A delta time was passed to the routine when an absolute time was required. User Action: Contact your Oracle support representative for assistance.
41.6 – ABVERB
ambiguous command verb - supply more characters Explanation: The command name was abbreviated to too few characters to make it unique. User Action: Reenter the command. Specify at least four characters of the command name.
41.7 – ACCESS_ERR
an error was returned by the access function Explanation: An error was returned by the 'access' function. User Action: Please refer to the reference pages for 'access' to interpret the meaning of the errno returned by this function.
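The ACCESS_ERR user action directs you to the errno returned by the 'access' function. As an illustrative sketch only (not part of Oracle RMU), the following Python fragment shows the general pattern of inspecting errno after a failed file check; the path is a made-up example and Python's os module stands in for the C 'access' call.

```python
# Sketch: inspect errno after a failed file access.
# The path is illustrative; os.stat stands in for the C 'access' call.
import errno
import os

path = "/nonexistent/path/for/demo"
try:
    os.stat(path)  # fails with OSError carrying errno
except OSError as e:
    # errno.errorcode maps the number to its symbolic name
    print(e.errno, errno.errorcode[e.errno])  # e.g. ENOENT
```

The symbolic name (here ENOENT, "no such file or directory") is what the reference pages document for each errno value.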
41.8 – ACCVIO
access violation Explanation: A parameter that should be readable cannot be read, or a parameter that should be writable cannot be written. User Action: Pass the appropriate parameters.
41.9 – ACLEMPTY
access control list is empty Explanation: There are no access control entries in the access control list. User Action: Do not perform this command when the access control list is empty.
41.10 – AMBDATTIM
ambiguous date-time Explanation: The date-time input string did not match the specified input format. User Action: Correct the date-time input string and input format for date and time values.
41.11 – ARITH
arithmetic exception Explanation: An 'arithmetic exception' was raised as an exception in response to the arithmetic or software condition specified by the given code. Integer overflow, integer divide by zero, floating overflow, floating divide by zero, floating underflow, floating invalid operation, and floating inexact result are the specific conditions that produce this signal. User Action: Eliminate the cause of the error condition and retry the operation.
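The conditions listed for ARITH are the standard hardware arithmetic traps. As an illustrative aside (not Oracle RMU code), two of them can be reproduced in Python, where they surface as ordinary exceptions:

```python
# Illustrative only: two of the conditions behind the ARITH message,
# shown with Python's corresponding exceptions.
import math

try:
    1 // 0                      # integer divide by zero
except ZeroDivisionError as e:
    print("integer divide by zero:", e)

try:
    math.exp(100000)            # floating overflow
except OverflowError as e:
    print("floating overflow:", e)
```

In compiled applications these conditions arrive as signals or condition codes rather than exceptions, but the remedy is the same: find and eliminate the operation that produces them.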
41.12 – AUTHNOTOWN
the authentication file is not owned by root Explanation: The authentication file has the wrong owner. User Action: Contact the system or database administrator to verify and/or correct the owner of the /usr/lib/dbs/sql/version/lib/cosi_authenticate program.
41.13 – AUTHWRONGPROT
the authentication file does not have the correct protection set Explanation: The authentication file has the wrong protection. User Action: Contact the system or database administrator to verify and/or correct the protection of the /usr/lib/dbs/sql/version/lib/cosi_authenticate program.
41.14 – BADAUTH
there is a problem with the authentication file Explanation: The authentication file does not exist, or it has the wrong owner or protection. User Action: Contact the system or database administrator to verify and/or correct the existence, owner, or protection of the /usr/lib/dbs/sql/version/lib/cosi_authenticate program.
41.15 – BADNODENAME
invalid remote node name specification for network Explanation: You specified a remote node name that contains a quoted string when using the TCP/IP transport. User Action: Either check the remote node name for validity and remove any quoted strings, or change the transport to DECnet.
41.16 – BADPARAM
bad parameter value Explanation: A value specified for a system function is not valid. User Action: This error message indicates a possible error in the use of operating system services by the database system. Contact your Oracle support representative for assistance.
41.17 – BADVMSVER
operating system version is not supported Explanation: The current version of the operating system is higher than the version supported by Oracle Rdb. User Action: Contact your customer support center. There may be a future version of Oracle Rdb that supports your operating system version.
41.18 – BAD_CODE
corruption in the query string Explanation: An illegal entry was found in the query string. User Action: If you called Rdb, check the query string. Otherwise, contact your Oracle support representative for assistance.
41.19 – BAD_KEY
invalid key specification Explanation: The key field size, position, data type, or order is incorrect within the key definition. Positions start at 1 and cannot be greater than the maximum record size. Record size must be less than or equal to 32,767 for character data; 1, 2, 4, 8, or 16 for binary data; and less than or equal to 31 for decimal. User Action: Check the command string key specifiers.
41.20 – BAD_LRL
record length (<num>) greater than specified longest record length Explanation: While reading the input file, the sort code encountered a record longer than the specified LRL (longest record length, in bytes). The record will be truncated to the LRL and sorted. User Action: Reexecute the sort command with a longer LRL.
41.21 – BAD_MERGE
number of work files must be between 0 and 10 Explanation: The number of work files used was either less than 0 or greater than 10. User Action: Specify the correct number of work files, and retry the sort/merge operation.
41.22 – BAD_SRL
record length (<num>) is too short to contain keys Explanation: A record passed to sort/merge is too short to contain all the keys. The record is discarded, and processing continues. User Action: Check your input records and key specification.
41.23 – BAD_TYPE
invalid sort process specified Explanation: An invalid sort type code was passed to the routine package: less than 1 or greater than 4 for file I/O; a value other than 0 for record I/O; or an invalid keyword in the PROCESS command parameter. Legal values are 1 through 4 for file I/O; 0 for record I/O; and RECORD, TAG, INDEX, or ADDRESS for the PROCESS command parameter. User Action: Specify a different sorting process.
41.24 – BUFFEROVF
output buffer overflow Explanation: The service completed successfully. The buffer length field in an item descriptor specified an insufficient value. The buffer was not large enough to hold the requested data. User Action: Provide a larger buffer, or perform another read request to access the remainder of the message.
41.25 – BUGCHECK
internal consistency failure Explanation: A fatal, unexpected error was detected by the database management system. User Action: Contact your Oracle support representative for assistance. You will need to include any bugcheck dump files.
41.26 – CANCEL
operation canceled Explanation: The lock conversion request has been canceled, and the lock has been granted at its previous lock mode. This condition value is returned under the following conditions: a lock request results in queueing a lock conversion request, the request has not been granted yet (it is in the conversion queue), and, in the interim, the unlock service is called to cancel this lock conversion. Note that if the waiting conversion request is granted BEFORE the unlock call can cancel the conversion, the unlock call returns CANCELGRANT, and the conversion request returns NORMAL. User Action: This error message indicates a possible error in the locking protocols. Contact your Oracle support representative for assistance.
41.27 – CANCELGRANT
cannot cancel a granted lock Explanation: The service to unlock a lock was called to cancel a pending lock request; however, before the request to unlock could be completed, the lock was granted. User Action: This error message indicates a possible error in the locking protocols. Contact your Oracle support representative for assistance.
41.28 – CANNOT_OPEN_LIB
cannot open help library Explanation: The help utility could not open the requested help library. User Action: Contact your Oracle support representative for assistance.
41.29 – CANTASSMBX
error assigning a channel to a mailbox Explanation: An error occurred during an attempt to assign a channel to a VMS mailbox. User Action: Examine the secondary messages to determine the reason for the failure.
41.30 – CANTCREMBX
cannot create mailbox Explanation: An error occurred during an attempt to create a mailbox. Mailboxes are used for interprocess communication by the database management system on VMS. User Action: Examine the secondary messages to determine the reason for the failure. Usually, you will have to change one of your quotas (most likely, the buffered I/O byte count quota or the open file quota).
41.31 – CANTSPAWN
error creating sub-process Explanation: An error occurred during an attempt to spawn a sub-process. User Action: Examine the secondary messages to determine the reason for the failure.
41.32 – CAPTIVEACCT
captive account - can't create sub-process Explanation: An error occurred during an attempt to enter a SPAWN command from a captive account on VMS. User Action: Do not use the SPAWN command from a captive account.
41.33 – CHOWN_ERR
an error was returned by the chown function Explanation: An error was returned by the 'chown' function. User Action: Please refer to the reference pages for 'chown' to interpret the meaning of the errno returned by this function.
41.34 – CLOSEERR
cannot close file Explanation: An error occurred during an attempt to close the specified file. User Action: Examine the secondary messages to determine the reason for the failure.
41.35 – COMMA
requested value is terminated by a comma Explanation: The returned value is terminated by a comma, implying that additional values are in the list. User Action: No action is required.
41.36 – CONCAT
requested value is terminated by a plus sign Explanation: The returned value is concatenated to the next value with a plus sign, implying that additional values are in the list. User Action: No action is required.
41.37 – CONFLICT
illegal combination of command elements - check documentation Explanation: Two or more qualifiers, keywords, or parameters that cannot be used in combination were used in the same command line. User Action: Remove the offending element.
41.38 – CONNECFAIL
connect over network timed-out or failed Explanation: An error occurred during an attempt to establish a network connection. User Action: Examine the secondary messages to determine the reason for the failure.
41.39 – COSI_LAST_CHANCE
In cosi_last_chance handler in image <str>. Unhandled exception code was 0x<num>. Exception occurred at <str> = 0x<num>! Explanation: An exception occurred which was not handled by another exception handler. COSI exits the process. User Action: Check the message information and the bugcheck dump, if one was written, to determine the cause of the error. If necessary, contact your Oracle support representative for assistance.
41.40 – CREATED
file/section did not exist - was created Explanation: The service completed successfully. The specified global section did not previously exist and has been created. User Action: No action is required.
41.41 – CREATEFILEMAPPINGERR
Error encountered while creating a filemapping object Explanation: A system error was encountered while creating a filemapping object. User Action: Examine the secondary message to determine the reason for the failure.
41.42 – CREATEMUTEXERR
Error encountered while creating mutex Explanation: A system error was encountered while creating a mutex object. User Action: Examine the secondary message to determine the reason for the failure.
41.43 – CREATERR
cannot create file Explanation: An error occurred during an attempt to create the specified file. User Action: Examine the secondary messages to determine the reason for the failure.
41.44 – CREAT_ERR
an error was returned by the creat function Explanation: An error was returned by the 'creat' function. User Action: Please refer to the reference pages for 'creat' to interpret the meaning of the errno returned by this function.
41.45 – CSETNOTFOUND
invalid or undefined character set Explanation: The character set specified is not known. User Action: Check the character set name and/or identifier to ensure that they are valid.
41.46 – CVTASSTS
Status from CVTAS routine is <num> Explanation: Actual status value from convert operation. User Action: Examine the program counter (PC) location displayed in the message. Check the program listing to verify that operands or variables are specified correctly.
41.47 – CVTUNGRANT
cannot convert an ungranted lock Explanation: An attempt was made to convert a lock that is not granted. (Only locks that are granted can be converted to another lock mode.) User Action: This error message indicates a possible error in the locking protocols. Contact your Oracle support representative for assistance.
41.48 – DEADLOCK
deadlock detected Explanation: The system detected a set of processes waiting for locks in a circular fashion (deadlock). This lock request is denied in order to break the deadlock. User Action: Recovery from a deadlock is application specific. One common action is for the program to roll back the transaction and retry. Deadlocks should be infrequent in properly designed applications.
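The roll-back-and-retry pattern suggested for DEADLOCK can be sketched as follows. This is an illustrative outline only: DeadlockError and run_transaction are hypothetical stand-ins, not Oracle Rdb API names.

```python
# Sketch of the roll-back-and-retry pattern for deadlock recovery.
# DeadlockError and run_transaction are hypothetical stand-ins.
class DeadlockError(Exception):
    """Stand-in for the condition signalled as COSI DEADLOCK."""

def run_transaction(attempt):
    # Simulate: the first attempt deadlocks, the second succeeds.
    if attempt == 0:
        raise DeadlockError("deadlock detected")
    return "committed"

def with_deadlock_retry(max_retries=3):
    for attempt in range(max_retries):
        try:
            return run_transaction(attempt)
        except DeadlockError:
            # On deadlock the transaction has been denied a lock;
            # roll back and retry. A small retry budget is usually
            # enough in a properly designed application.
            continue
    raise RuntimeError("transaction failed after retries")

print(with_deadlock_retry())  # prints "committed"
```

Bounding the number of retries matters: if deadlocks recur constantly, the application's lock ordering should be examined rather than retried indefinitely.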
41.49 – DECOVF
decimal overflow Explanation: During an arithmetic operation, a decimal value exceeds the largest representable decimal number. The result of the operation is set to the correctly signed least significant digit. User Action: No action is required.
41.50 – DEFAULTED
entity present by default in command string Explanation: The specified entity is not explicitly present in the command line, but is present by default. User Action: No action is required.
41.51 – DEFFORUSE
default format used - could not determine user's preference Explanation: Translation of RDB_DT_FORMAT failed, and the native standard representation is used. User Action: Examine and make corrections to the environment variable RDB_DT_FORMAT.
41.52 – DELETERR
error deleting file Explanation: An error occurred during an attempt to delete the specified file. User Action: Examine the secondary messages to determine the reason for the failure. You may have to change the protection on a file before you can delete it.
41.53 – DESSTROVF
destination string overflow Explanation: The destination string is shorter than the source string when the string is copied during datatype conversion. User Action: Make sure the destination string is at least as long as the source string.
41.54 – DIFSIZ
different message size received from expected Explanation: The message size set by the VMS system when the message is put in the buffer differs from the size computed by the monitor (message header plus message length). This can be caused by a multi-user environment (for example, a version 5.1 image tries to attach to a version 6.1 monitor). User Action: Roll back or abort the transaction.
41.55 – DIRECT
invalid directory syntax - check brackets and other delimiters Explanation: The directory name in a file specification entered for a command contains an illegal character, or a command that expects a directory name string did not find a directory delimiter in the specified parameter. User Action: Examine the directory name, correct the directory syntax, and reenter the command.
41.56 – DIRNAME_ERR
an error was returned by the dirname function Explanation: An error was returned by the 'dirname' function. User Action: Please refer to the reference pages for 'dirname' to interpret the meaning of the errno returned by this function.
41.57 – DNF
directory not found Explanation: The specified directory does not exist on the specified device. User Action: Verify that the device and/or directory are specified correctly. Create the directory if necessary, or specify an existing directory.
41.58 – DT_FRACMBZ
fractional seconds precision must be zero for this sub-type Explanation: The date/time data type supplied a sub-type which does not allow a fractional seconds precision to be supplied. User Action: This appears to be an error in the code generated by SQL. Contact your Oracle support representative for assistance. You will need to provide the statement that produced this error.
41.59 – DT_PRECMBZ
interval leading field precision must be zero for this sub-type Explanation: The date/time data type supplied a sub-type which does not allow a leading interval field precision to be supplied. User Action: This appears to be an error in the code generated by SQL. Contact your Oracle support representative for assistance. You will need to provide the statement that produced this error.
41.60 – DUP_OUTPUT
output file has already been specified Explanation: The output file argument in the routine package was passed more than once. User Action: Check the file arguments, and verify that the output file was specified only once.
41.61 – EF_ALRFRE
event flag already free Explanation: An attempt was made to free an event flag that is already free. User Action: Contact your Oracle support representative for assistance.
41.62 – EF_RESSYS
event flag reserved to system Explanation: The specified event flag is reserved for system use. User Action: Contact your Oracle support representative for assistance.
41.63 – ENDDIAGS
completed with diagnostics Explanation: The operation completed with diagnostics. User Action: Correct the conditions that resulted in the diagnostics, and retry the operation.
41.64 – ENDOFFILE
end of file Explanation: An end-of-file condition was encountered during an I/O operation. User Action: Examine the secondary messages to determine the reason for the failure.
41.65 – ENGLUSED
English used - could not determine user's language Explanation: Translation of the environment variable RDB_LANGUAGE failed. English is being used. User Action: Examine RDB_LANGUAGE. Verify that the environment variable is defined.
41.66 – EOFERR
error determining size of file Explanation: An error occurred when an attempt was made to determine the size of the specified file. User Action: Examine the secondary messages to determine the reason for the failure.
41.67 – ERRFOREIGN
error opening foreign command file as input Explanation: An error occurred when a foreign (indirect) command file was read. User Action: Examine the secondary message for more information.
41.68 – EVLWRTERR
error writing to the Event Log Explanation: This general message indicates an error during a write to the NT Event Log. User Action: Examine the secondary messages to determine the reason for the failure. Make sure that the EventLog service is running and that the Event Log is not full.
41.69 – EXAIOQUOTA
exceeded aio quota Explanation: The program could not proceed because the AIO quota on the system has been exceeded. User Action: Either increase the AIO quota on the system or reduce the AIO quota consumption by Oracle Rdb servers.
41.70 – EXASTQUOTA
exceeded AST quota Explanation: The program could not proceed because the AST quota for the Rdb system has been exceeded. User Action: Increase the maximum number of users allowed on the system by modifying the RDBLCK_MAX_USERS parameter in the rdblck.conf configuration file.
41.71 – EXC_OSF
exception sub facility <num>, code <num>, <num> param(s) (17XA) Explanation: An unrecognized exception with the given sub facility number, code, and optional parameters was raised in response to an arithmetic or software condition. User Action: Eliminate the cause of the error condition and retry the operation.
41.72 – EXC_OSF0
exception Internal, code <num>, <num> param(s) (17XA) Explanation: An 'Internal' exception with the given code and optional parameters was raised in response to an arithmetic or software condition. User Action: Eliminate the cause of the error condition and retry the operation.
41.73 – EXC_OSF1
exception Facility End, code <num>, <num> param(s) (17XA) Explanation: A 'Facility End' exception with the given code and optional parameters was raised in response to an arithmetic or software condition. User Action: Eliminate the cause of the error condition and retry the operation.
41.74 – EXC_OSF2
exception All, code <num>, <num> param(s) (17XA) Explanation: A 'Wildcard' exception with the given code and optional parameters was raised in response to an arithmetic or software condition. User Action: Eliminate the cause of the error condition and retry the operation.
41.75 – EXC_OSF4
exception ADA User, code <num>, <num> param(s) (17XA) Explanation: An 'ADA user' exception with the given code and optional parameters was raised in response to an arithmetic or software condition. User Action: Eliminate the cause of the error condition and retry the operation.
41.76 – EXC_OSF5
exception PL1 User, code <num>, <num> param(s) (17XA) Explanation: A 'Pl1 user' exception with the given code and optional parameters was raised in response to an arithmetic or software condition. User Action: Eliminate the cause of the error condition and retry the operation.
41.77 – EXC_OSF6
exception C++ User, code <num>, <num> param(s) (17XA) Explanation: A 'C++ user' exception with the given code and optional parameters was raised in response to an arithmetic or software condition. User Action: Eliminate the cause of the error condition and retry the operation.
41.78 – EXC_OSF7
exception C++ User Exit, code <num>, <num> param(s) (17XA) Explanation: A 'C++ Exit Path' exception with the given code and optional parameters was raised in response to an arithmetic or software condition. User Action: Eliminate the cause of the error condition and retry the operation.
41.79 – EXC_OSF8
exception C++ User Other, code <num>, <num> param(s) (17XA) Explanation: A 'C++ Other' exception with the given code and optional parameters was raised in response to an arithmetic or software condition. User Action: Eliminate the cause of the error condition and retry the operation.
41.80 – EXC_OSF9
exception C User, code <num>, <num> param(s) (17XA) Explanation: A 'C User' exception with the given code and optional parameters was raised in response to an arithmetic or software condition. User Action: Eliminate the cause of the error condition and retry the operation. (Bases 3811-3849 are reserved for future OSF exception types.)
41.81 – EXDEPTH
exceeded allowed depth Explanation: Either a programming error has occurred or the resource name tree does not have enough depth. The lock management services allow a certain depth to the resource name tree. User Action: This error message indicates a possible error in the locking protocols. Contact your Oracle support representative for assistance.
41.82 – EXENQLM
exceeded enqueue quota Explanation: The process's ENQLM quota was exceeded. User Action: Increase the ENQLM quota, and retry the operation.
41.83 – EXGBLPAGFIL
exceeded global page file limit Explanation: The attempt to allocate a global section with a page file backing store failed because the systemwide limit on these pages is exceeded. User Action: Delete some similar sections or ask the system manager to increase the SYSGEN parameter GBLPAGFIL. Then, try the operation again.
41.84 – EXLOCKQUOTA
exceeded RDBLCK_LOCK_COUNT quota Explanation: The program could not proceed because the RDBLCK_LOCK_COUNT quota for the Oracle Rdb system has been exceeded. User Action: Increase the lock count on the system by modifying the RDBLCK_LOCK_COUNT parameter in the rdblck.conf configuration file.
41.85 – EXPROCESSQUOTA
exceeded RDBLCK_MAX_PROCESS quota Explanation: The program could not proceed because the RDBLCK_MAX_PROCESS quota for the Oracle Rdb system has been exceeded. User Action: Increase the process count on the system by modifying the RDBLCK_MAX_PROCESS parameter in the rdblck.conf configuration file.
41.86 – EXQUOTA
exceeded quota Explanation: The program could not proceed because a resource quota or limit had been exceeded. User Action: The secondary error message describes the resource that was exceeded. If this occurs consistently, increase the appropriate quota.
41.87 – EXRESOURCEQUOTA
exceeded RDBLCK_RESOURCE_COUNT quota Explanation: The program could not proceed because the RDBLCK_RESOURCE_COUNT quota for the Oracle Rdb system has been exceeded. User Action: Increase the resource count on the system by modifying the RDBLCK_RESOURCE_COUNT parameter in the rdblck.conf configuration file.
41.88 – EXTENDERR
error extending file Explanation: An error occurred when the size of the specified file was extended. User Action: Examine the secondary messages to determine the reason for the failure.
41.89 – EXUSERSQUOTA
exceeded RDBLCK_MAX_USERS quota Explanation: The program could not proceed because the RDBLCK_MAX_USERS quota for the Oracle Rdb system has been exceeded. User Action: Increase the maximum number of users allowed on the system by modifying the RDBLCK_MAX_USERS parameter in the rdblck.conf configuration file.
41.90 – FATINTERR
fatal internal error Explanation: An unexpected internal error has occurred. User Action: Contact your Oracle support representative for assistance.
41.91 – FCSFOP
file already open Explanation: A file was already opened when it was not expected to be. This indicates a general logic error in the system database code. User Action: Retry the operation. If the error persists, contact your Oracle support representative for assistance.
41.92 – FILACCERR
error <str> file <str> Explanation: This general message indicates an error during file access. User Action: Examine the secondary messages to determine the reason for the failure.
41.93 – FILEMAPPINGEXISTS
mapping object specified already exists Explanation: A file mapping object with the specified name already exists. User Action: Change the name of the mapping object so that it is unique.
41.94 – FILESYN
syntax error parsing file Explanation: This general message indicates an error that occurs during parsing of a file name (local or remote). User Action: Examine the secondary messages to determine the reason for the failure.
41.95 – FILOPENERR
error opening file <str> Explanation: An error occurred during an attempt to open the indicated file. User Action: Check the attributes and protection of the relevant file and of the associated directories. Verify that the file exists at the time it is needed.
41.96 – FILREADERR
error reading file <str> Explanation: An error occurred during an attempt to read the indicated file. User Action: Check the attributes and protection of the relevant file and of the associated directories.
41.97 – FILWRITEERR
error writing file <str> Explanation: An error occurred during an attempt to write to the indicated file. User Action: Check the attributes and protection of the relevant file and of the associated directories. Verify that sufficient disk space is available at the time that the file is written to.
41.98 – FLK
file currently locked by another user Explanation: An attempt to open or create a file failed. Another user has the file open in a mode incompatible with the attempted access. User Action: Wait until the other user has unlocked the file. If the file cannot be shared, modify the program to detect and respond to this condition by waiting.
41.99 – FLTDENORM
conversion to IEEE floating is denormalized Explanation: The conversion to IEEE floating point produced a denormalized value. User Action: You can force a denormalized IEEE output value to zero by passing the CVT_FORCE_DENORM_TO_ZERO parameter to the cosi_cvt_ftof routine.
41.100 – FLTDIV
floating point division exception Explanation: An arithmetic exception condition occurred as a result of a floating point division operation. User Action: Modify the query to prevent a possible divide-by-zero operation from occurring.
41.101 – FLTEXC
floating point exception Explanation: An arithmetic exception condition occurred as a result of a floating point operation. This may result from a divide-by-zero operation or a floating point overflow. User Action: Modify the query to prevent a possible floating point divide-by-zero or overflow from occurring.
41.102 – FLTINF
conversion to IEEE floating is infinite Explanation: The conversion to IEEE floating point produced an infinite value. User Action: You can force an infinite IEEE value to the maximum float value by passing the CVT_FORCE_INF_TO_MAX_FLOAT parameter to the cosi_cvt_ftof routine. This forces a positive infinite output value to +max_float and a negative infinite output value to -max_float.
41.103 – FLTINV
invalid floating conversion Explanation: The float conversion result is either ROP (Reserved Operand), NaN (Not a Number), or the closest equivalent. User Action: Make sure the float conversion routine cosi_cvt_ftof is invoked with a floating value in the proper range.
41.104 – FLTINX
Float Inexact Result Explanation: An arithmetic exception condition occurred as a result of a floating point inexact result. User Action: Examine the program counter (PC) location displayed in the message. Check the program listing to verify that operands or variables are specified correctly.
41.105 – FLTNAN
Float Not a Number Explanation: An arithmetic exception condition occurred as a result of a floating Not-a-Number (NaN) value. User Action: Examine the program counter (PC) location displayed in the message. Check the program listing to verify that operands or variables are specified correctly.
41.106 – FLTOVF
floating overflow Explanation: An arithmetic exception condition occurred as a result of a floating point overflow. User Action: Examine the program counter (PC) location displayed in the message. Check the program listing to verify that operands or variables are specified correctly.
41.107 – FLTUND
floating underflow Explanation: An arithmetic exception condition occurred as a result of a floating point underflow. User Action: Examine the program counter (PC) location displayed in the message. Check the program listing to verify that operands or variables are specified correctly.
41.108 – FNF
file not found Explanation: The specified file does not exist. User Action: Check the file specification and verify that the directory, file name, and file type were all specified correctly.
41.109 – FORMAT
invalid or corrupt media format Explanation: The media or disk file is in an invalid format. User Action: This message indicates a media or hardware problem. Check the system error log and consult the hardware support group for further information. Reinstalling Oracle Rdb may resolve the problem; however, if the error occurs regularly, faulty hardware is almost certainly the cause.
41.110 – FORMATERR
error in formatting output Explanation: An error occurred during formatting of output to a terminal or a file. User Action: Examine the secondary messages to determine the reason for the failure.
41.111 – GETPWNAM_ERR
an error was returned by the getpwnam function Explanation: An error was returned by the 'getpwnam' function. User Action: Please refer to the reference pages for 'getpwnam' to interpret the meaning of the errno returned by this function.
41.112 – ILLCOMPONENT
illegal initialization component Explanation: The format of the date and time formatting string for one of the fields is illegal. User Action: Check the date and time format string used for displaying date and time values.
41.113 – ILLEFC
illegal event flag cluster Explanation: An event flag number specified in a system service call is greater than 127. User Action: Contact your Oracle support representative for assistance.
41.114 – ILLFORMAT
illegal format - too many or not enough fields Explanation: An invalid date and time format string was given. User Action: Examine the date/time format string, and correct it.
41.115 – ILLINISTR
illegal initialization string Explanation: An incorrect initialization string is passed to date and time formatting services. User Action: Verify that the initialization string begins and ends with the same delimiter character.
41.116 – ILLSTRCLA
illegal string class Explanation: The class code found in the class field of a descriptor is not a supported string class code. User Action: Contact your Oracle support representative for assistance.
41.117 – ILLSTRPOS
illegal string position Explanation: The service completed successfully. However, one of the character-position parameters to a string manipulation routine pointed to a character position before the beginning or after the end of the input string. User Action: Ensure that any character-position parameter is greater than zero and less than or equal to the length of the input string.
41.118 – ILLSTRSPE
illegal string specification Explanation: The service completed successfully, except that the character-position parameters specifying a substring of a string parameter were inconsistent because the ending character position is less than the starting character position. A null string is used for the substring. User Action: The application program should verify that the starting character positions are less than or equal to the ending character positions.
41.119 – IMGABORTED
image aborted at privileged user request Explanation: The current image was aborted by another privileged user, typically the database administrator, in response to some event that required this action. User Action: Consult the database administrator to identify the reason the image was aborted.
41.120 – INCDATTIM
incomplete date-time - missing fields with no defaults Explanation: An incomplete date or time parameter was supplied. User Action: Examine the date/time value and the date and time format string. Correct the input date/time value.
41.121 – INPCONERR
input conversion error Explanation: There is an invalid character in the input string; or the output value is not within the range of the destination data type. User Action: Correct the input string, or change the destination data type.
41.122 – INP_FILES
too many input files specified Explanation: More than 10 input files were listed. User Action: Reduce the number of input files or combine them so that no more than 10 input files are listed.
41.123 – INSEF
insufficient event flags Explanation: No more event flags were available for allocation. User Action: Contact your Oracle support representative for assistance.
41.124 – INSFARG
insufficient call arguments Explanation: An internal coding error (insufficient number of arguments) occurred. User Action: Contact your Oracle support representative for assistance.
41.125 – INSFMEM
insufficient dynamic memory Explanation: A command or image exhausted the system pool of dynamic memory, and the system cannot complete the request. User Action: Free the resources you are holding, or increase the existing pool of memory.
41.126 – INSFPRM
missing command parameters - supply all required parameters Explanation: A command cannot be executed because one or more required parameters are missing from the command. User Action: Correct the command by supplying all required parameters.
41.127 – INSFSYSRES
insufficient system resources Explanation: The operating system did not have sufficient resources to process the request. User Action: Examine secondary error message for more information.
41.128 – INTOVF
integer overflow Explanation: Either the library routine or the hardware detected an integer overflow. User Action: Choose a destination data type with a larger range.
41.129 – INT_OVERFLOW
Arithmetic exception: Integer Overflow Explanation: An attempt was made to use a number too great in magnitude for the intended operation. User Action: Review the problem and change the data or data format accordingly.
41.130 – INVARG
invalid argument(s) Explanation: An invalid argument is specified to an internal call. User Action: Contact your Oracle support representative for assistance.
41.131 – INVARGORD
invalid argument order Explanation: The ordering of the arguments is invalid. The caller specified the date and time values in the wrong order. User Action: Contact your Oracle support representative for assistance.
41.132 – INVCLADSC
invalid class in descriptor Explanation: An unsupported class of descriptor is specified. User Action: Retry the operation specifying a supported class of descriptor.
41.133 – INVCLADTY
invalid class data type combination in descriptor Explanation: An unsupported class and data type of descriptor is specified. User Action: Ensure that both the class and data type specified are supported.
41.134 – INVCVT
invalid data type conversion Explanation: One of the following occurred: (1) the source value is negative and the destination data type is unsigned; (2) a bad parameter was passed, such as an invalid input/output type or an invalid option value; (3) a floating-point overflow or underflow occurred (on ALPHA/VMS); (4) a positive or negative infinity was encountered (on ALPHA/VMS); or (5) a reserved operand error occurred. User Action: Ensure that the source value is positive and the destination data type is signed, and that all parameters are passed correctly; otherwise, contact your Oracle support representative for assistance.
41.135 – INVDTYDSC
invalid data type in descriptor Explanation: An unsupported data type is specified. User Action: Retry the operation specifying a supported data type.
41.136 – INVENTITY
invalid command line entity specified Explanation: An invalid qualifier or parameter was found on the command line. User Action: Reenter the command line using the correct syntax.
41.137 – INVFILNAM
invalid character found in file name Explanation: A non-ASCII character was found in a file name used in a command. User Action: Examine the file name, correct the name, and reenter the command.
41.138 – INVKEY
invalid keyword Explanation: There was an unrecognized keyword in the command string. User Action: Reenter the command using the correct syntax.
41.139 – INVNBDS
invalid numeric byte data string Explanation: There is an invalid character in the input, or the value is outside the range that can be represented by the destination, or the numeric byte data string (NBDS) descriptor is invalid. This error is also signaled when the array size of an NBDS is larger than 65,535 bytes or the array is multidimensional. User Action: Specify a valid NBDS.
41.140 – INVREQTYP
invalid request type Explanation: This message is associated with an internal status code returned from the command interpreter result parse routine. The message indicates a request to perform an unimplemented function. User Action: Contact your Oracle support representative for assistance.
41.141 – INVSTRDES
invalid string descriptor Explanation: A string descriptor passed to a general library procedure did not contain a valid CLASS field. User Action: Locate the call to the library that caused the error, and initialize the field to the proper class of descriptor.
41.142 – INVWRKBLK
invalid work block Explanation: This message is associated with an internal status code returned from the command interpreter result parse routine. The message indicates that the parser encountered a corrupt internal data structure. User Action: Contact your Oracle support representative for assistance.
41.143 – INV_PRECISION
invalid interval leading field precision for datetime Explanation: The data definition language requested an interval leading field precision which is outside the supported range. User Action: Correct the definition so that the interval leading field precision is within the legal range.
41.144 – INV_SCALE
invalid fractional seconds precision for datetime Explanation: The data definition language requested a fractional seconds precision for a TIME, TIMESTAMP, or INTERVAL definition which is outside the correct range. User Action: Correct the definition so that the fractional seconds precision is within the legal range.
41.145 – INV_SUB_TYPE
invalid sub_type in definition Explanation: The data definition language requested a sub-type which was not recognized. User Action: Correct the definition so that the sub-type is within the legal range.
41.146 – IO_ERROR
IO error while reading help file Explanation: An I/O error occurred while reading a help library. User Action: Contact your Oracle support representative for assistance.
41.147 – IVACL
invalid access control list entry syntax Explanation: You have specified syntax for an access control list entry that is not acceptable. User Action: Specify a valid access control list entry syntax. For RMU, this will typically be of the form: (IDENT = identifier, ACCESS = privilege+privilege+...)
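As an illustration, assuming the RMU Set Protection command is being used (the identifier name SMITH and the privilege names shown are hypothetical examples), a syntactically valid entry looks like this:
$ RMU/SET PROTECTION/ACL=(IDENT=SMITH, ACCESS=RMU$BACKUP+RMU$RESTORE) MF_PERSONNEL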
41.148 – IVBUFLEN
invalid buffer length Explanation: The length of the buffer supplied was invalid. The length of the resource name provided to the system service to acquire a lock was more than 31 characters, or an I/O message was too large to handle. User Action: This error message indicates a possible error in the use of operating system services. Contact your Oracle support representative for assistance.
41.149 – IVDEVNAM
invalid device name Explanation: A device name contains invalid characters, or no device is specified. User Action: Verify that the device name is specified correctly and is suitable for the requested operation.
41.150 – IVKEYW
unrecognized keyword - check validity and spelling Explanation: There is an unrecognized keyword in the command string. User Action: Reenter the command using the correct syntax.
41.151 – IVLOCKID
invalid lock id Explanation: The lock identification specified in the call to the lock or unlock request is not a valid lock identification for that process. User Action: This error message indicates a possible error in the locking protocols. Contact your Oracle support representative for assistance.
41.152 – IVLOGNAM
invalid logical name Explanation: A name string exceeds the maximum length permitted or has a length of 0. User Action: Check that the character string descriptors pointing to name strings indicate the correct lengths.
41.153 – IVMODE
invalid mode for requested function Explanation: The caller does not have the privilege to perform the operation. User Action: This error message indicates a possible error in the use of operating system primitives. Contact your Oracle support representative for assistance.
41.154 – IVQLOC
invalid qualifier location - place after a parameter Explanation: A qualifier that can be used only to qualify a parameter value in a command is placed following the command name. User Action: Reenter the command. Place the qualifier following the parameter value it qualifies.
41.155 – IVQUAL
unrecognized qualifier - check validity, spelling, and placement Explanation: A qualifier is spelled incorrectly or is improperly placed in the command line. User Action: Correct the command line.
41.156 – IVTILDEUSER
invalid user name found in the file specification Explanation: The user name specified after a ~ (tilde) in the file specification is not a valid user name and therefore cannot be used to locate a home directory. User Action: Verify that the user name is specified correctly, and correct any spelling errors.
41.157 – IVTIME
invalid date or time Explanation: A time value specified in a system service call is invalid. Either a delta time is greater than 10,000 days, or a calculated absolute time is less than the system date and time. User Action: Check for a programming error. Verify that the call to the service is coded correctly.
41.158 – IVVERB
unrecognized command verb - check validity and spelling Explanation: The first word on the command line is not a valid command. User Action: Check the spelling of the command name and reenter the command.
41.159 – KEYAMBINC
key specification is ambiguous or inconsistent Explanation: Duplicate key parameters were specified for a single KEY qualifier. User Action: Specify each key parameter only once. For multiple keys, use a KEY qualifier for each key.
41.160 – KEYED
mismatch between sort/merge keys and primary file key Explanation: An empty indexed file was created with a primary key that does not match the sort key. The sort operation is less efficient than it is when the two keys match. User Action: For greater efficiency, create a new indexed file or change the sort key.
41.161 – KEY_LEN
invalid key length, key number <num>, length <num> Explanation: The key size is incorrect for the data type, or the total key size is greater than 32,767. User Action: Specify the correct key field size. Size must be less than or equal to 32,767 for character data; 1, 2, 4, 8, or 16 for binary data; and less than or equal to 31 for decimal. Also, only ascending or descending order is allowed.
41.162 – LIB_NOT_OPEN
help library not open Explanation: An attempt was made to access a help library without opening it first. User Action: Contact your Oracle support representative for assistance.
41.163 – LM_EXCEEDED
licensed product has exceeded current license limits Explanation: The number of active license units has exceeded the current limits. User Action: Reduce the number of active users of the product.
41.164 – LM_ILLPRODUCER
producer argument isn't DEC Explanation: The producer name for the product does not match the license installed on the system. User Action: Check the license installed on the system.
41.165 – LM_INVALID_DATE
license is invalid for this product date Explanation: The date of the product release does not match the date of the installed license. User Action: Check the installed license.
41.166 – LM_INVALID_VERS
license is invalid for this product version Explanation: The version of the product does not match the license on the system. User Action: Check the installed license.
41.167 – LM_NOLICENSE
operation requires software license Explanation: A license for the software does not exist. User Action: Please install the license for the product.
41.168 – LM_NOTINRSRVLIST
not in license reserve list Explanation: This user is not in the list of reserved users (for a user-based license). User Action: Please add the user to the list of reserved users.
41.169 – LM_TERMINATED
license has terminated Explanation: The license for the product has been terminated. User Action: Please renew your license or install a new one.
41.170 – LOADFAILURE
unable to dynamically load or unload image Explanation: The system is unable to dynamically load an executable image. User Action: Ensure that the relevant image is available on your system.
41.171 – LOADINVFILNAM
invalid image filename Explanation: The filename supplied for dynamic image loading was improperly specified or contains illegal characters. User Action: Correct the filename.
41.172 – LOADINVSECFIL
invalid secure image filespec Explanation: The file specification supplied for secure EXEC mode logical translation contains illegal characters or does not reference an existing image. User Action: Ensure that the file specification is correct and that the specified image exists.
41.173 – LOADSYMBOL
unable to look up symbol in dynamically loaded image Explanation: The system is unable to look up a symbol in a dynamically loaded image. User Action: Ensure that the relevant image is available on your system and that the image is intact (not corrupt).
41.174 – LOADUNINSTALLED
unable to dynamically load uninstalled image Explanation: The system is unable to dynamically load a shareable image into a process whose main image is installed execute-only or privileged. The new image must be installed, and any associated file specification must reference only /SYSTEM/EXEC logicals. User Action: Ensure that the relevant image is installed on your system and that the proper logicals are used in the related file specification.
41.175 – LOCNEG
entity explicitly and locally negated in command string Explanation: The specified qualifier is present in its negated form (prefixed with no) and is used as a local qualifier. User Action: No action is required.
41.176 – LOCPRES
entity value is locally present in command string Explanation: The specified qualifier is present and is used as a local qualifier. User Action: No action is required.
41.177 – LRL_MISS
longest record length must be specified Explanation: If a record I/O interface subroutine package is selected, the longest record length (LRL) must be passed to sort in the call. User Action: Specify the LRL.
41.178 – MAPVIEWOFFILEERR
Error encountered while mapping a view of the file Explanation: A system error was encountered while mapping a view of the file into the virtual address space. User Action: Examine the secondary message to determine the reason for the failure.
41.179 – MAXPARM
too many parameters - reenter command with fewer parameters Explanation: A command contained more than the maximum number of parameters allowed. This error can be caused by leaving blanks on a command line where a special character, for example, a comma or plus sign, is required. User Action: Determine the reason for the error, and correct the syntax of the command.
41.180 – MKDIR_ERR
an error was returned by the mkdir function Explanation: An error was returned by the 'mkdir' function. User Action: Please refer to the reference pages for 'mkdir' to interpret the meaning of the errno returned by this function.
41.181 – MODIFYERR
error extending or truncating file Explanation: An error occurred during an attempt to modify the size of the specified file. User Action: Examine the secondary messages to determine the reason for the failure.
41.182 – MSGNOTFND
message not in system message file Explanation: The relevant message was not found in the message file, or the message system was not properly initialized. User Action: Contact your Oracle support representative for assistance.
41.183 – MUTEXEXISTS
mutex object specified already exists Explanation: The mutex object specified during creation already exists. User Action: Change the name of the mutex object so that it is unique.
41.184 – NAMTOOLON
piece of pathname too long - respecify Explanation: The user-supplied file specification is too long (greater than 255 characters). User Action: Reenter the file name with fewer than 255 characters.
41.185 – NEGATED
entity explicitly negated in command string Explanation: The specified qualifier or keyword is present in its negated form (prefixed with NO). User Action: No action is required.
41.186 – NEGSTRLEN
negative string length Explanation: The service completed successfully, except that a length parameter to a string routine had a negative value. Lengths of strings must always be positive or zero; zero is used. User Action: Verify that all parameters containing string lengths do not contain negative numbers.
41.187 – NEGTIM
a negative time was computed Explanation: The computed time was earlier than the base date (17-NOV-1858). User Action: Contact your Oracle support representative for assistance.
41.188 – NFS
file specification on an NFS mounted device is not allowed Explanation: An NFS mounted device was referenced in the file specification. NFS mounted devices are not supported. User Action: Use a file name that does not reference an NFS mounted device.
41.189 – NOACLSUPPORT
ACLs not supported on selected object Explanation: ACLs are not supported for the specified object. User Action: Make sure that you have correctly specified the object.
41.190 – NOCCAT
parameter concatenation not allowed - check use of plus (+) Explanation: A command that accepts either a single input value for a parameter or a list of input values separated by commas contains multiple values concatenated by plus signs (+). User Action: Reenter the command with a single file specification. If necessary, enter the command once for each file.
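For example, assuming the RMU Dump command is being used (the database names PERS1 and PERS2 are illustrative), replace a plus-concatenated parameter with one command per value:
$ RMU/DUMP PERS1+PERS2   ! signals NOCCAT
$ RMU/DUMP PERS1
$ RMU/DUMP PERS2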
41.191 – NOCOMD
no command on line - reenter with alphabetic first character Explanation: A command begins with a nonalphabetic character. User Action: Reenter the command with an alphabetic character at the beginning.
41.192 – NODEVDIR
filename does not include device and/or directory Explanation: The file specification you made did not include a device and directory. User Action: Include a device and/or directory in the file specification.
41.193 – NODUPEXC
equal-key routine and no-duplicates option cannot both be specified Explanation: Both an equal-key routine and the SOR$M_NODUPS option were specified when only one or the other option is allowed. User Action: Specify either the equal-key routine or the no-duplicates option.
41.194 – NOENTRY
access control entry not found Explanation: You have specified an access control entry that does not exist in the access control list. User Action: Add the desired access control entry to the access control list, or specify a different access control entry for your command.
41.195 – NOKEYW
qualifier name is missing - append the name to the delimiter Explanation: A qualifier delimiter is present on a command but is not followed by a qualifier keyword name. User Action: Reenter the command specifying the qualifier or removing the qualifier delimiter.
41.196 – NOLIST
list of parameter values not allowed - check use of comma (,) Explanation: A command that accepts only a single input value for a parameter contains multiple values separated by commas. User Action: Reenter the command. Specify only one file. If necessary, enter the command once for each file specified.
41.197 – NOLOCKID
no lock identification available Explanation: The system's lock identification table was full when a call to acquire a lock was made. User Action: Increase the size of the lock identification table. If the problem persists, contact your Oracle support representative for assistance.
41.198 – NOLOGNAM
no logical name match Explanation: A specified logical name does not exist. User Action: Verify the spelling of the logical name.
41.199 – NOMEMRESID
requires rights identifier VMS$MEM_RESIDENT_USER Explanation: An attempt was made to create a memory-resident global section without the VMS$MEM_RESIDENT_USER identifier. User Action: Enable the VMS$MEM_RESIDENT_USER identifier for the process.
41.200 – NOMOREACE
access control list is exhausted Explanation: There are no more access control entries in the access control list. User Action: Do not perform this command when there are no more access control entries in the access control list.
41.201 – NOMORELOCK
no more locks Explanation: No lkid argument was specified, or the caller requested a wildcard operation by specifying a value of 0 or -1 for the lkidadr argument. The service that provides information about locks, however, has exhausted the locks about which it can return information. This is an alternate success status. User Action: No action is required.
41.202 – NONAME
file name specified where not permitted
41.203 – NONEXPR
nonexistent process Explanation: The process name or the process identifier specified is invalid. User Action: Contact your Oracle support representative for assistance.
41.204 – NOPAREN
value improperly delimited - supply parenthesis Explanation: A value supplied as part of a parenthesized value list for a parameter, qualifier, or keyword is missing a delimiting parenthesis. User Action: Reenter the command with the missing parenthesis.
41.205 – NOPRIV
no privilege for attempted operation Explanation: You do not have the appropriate privilege to perform this operation. User Action: See your database administrator, and request the appropriate privilege for the attempted operation.
41.206 – NOQUAL
qualifiers not allowed - supply only verb and parameters Explanation: A command that has no qualifiers is specified with a qualifier. User Action: Reenter the command. Do not specify any qualifiers.
41.207 – NORECATTRS
missing record specification Explanation: A fatal, internal error has occurred. User Action: Contact your Oracle support representative for assistance.
41.208 – NORECSIZE
record size not specified Explanation: A fatal, internal error has occurred. User Action: Contact your Oracle support representative for assistance.
41.209 – NOSHMEM
operation requires SHMEM privilege Explanation: A command requested a function that requires SHMEM privilege; the current process does not have this privilege. User Action: See your database administrator, and request the SHMEM privilege for the attempted operation.
41.210 – NOSPACE
maximum file size exceeded or file system full Explanation: A file update operation could not be completed. The file system is full, or the file exceeds the system-allowed maximum. User Action: Make space available on the device in question.
41.211 – NOSUCHDEV
no such device available Explanation: The specified device does not exist on the system. User Action: Examine the secondary messages to determine the reason for the failure.
41.212 – NOSUCHID
unknown rights identifier Explanation: The rights identifier that you have specified does not exist on this system. User Action: Specify only valid (known) rights identifiers.
41.213 – NOSUCHNET
no such network available Explanation: A DECnet connection was attempted on a system that does not support DECnet, or a TCP/IP connection was attempted on a system that does not support TCP/IP. User Action: Do not attempt to use a network that cannot be accessed.
41.214 – NOSUCHNODE
remote node is unknown Explanation: An attempt to make a network access failed because the remote node name does not exist or cannot be accessed. User Action: Check the remote node name for validity. If this error occurs when you use a valid node name, see your network administrator.
41.215 – NOSUCHOBJECT
specified object does not exist Explanation: You are trying to get or set the security attributes (probably ACLs) for an object that does not exist. User Action: Make sure that you have correctly specified the object.
41.216 – NOSUCHSEC
named shared memory section does not exist Explanation: An attempt to map a shared memory section failed because the shared memory section does not exist. User Action: Contact your Oracle support representative for assistance.
41.217 – NOSUCHSRV
network service is unknown at remote node Explanation: An attempt to make a network access failed because the service is not registered in the services database. User Action: See your network administrator.
41.218 – NOTALLPRIV
not all requested privileges authorized Explanation: You have requested a privilege for which you are not authorized. User Action: Request only privileges for which you are authorized.
41.219 – NOTDISKFILE
file is not a disk file Explanation: A file name was specified that does not reference a disk-oriented device type. User Action: Check the file name for a proper disk device type.
41.220 – NOTINITED
COSI facility not initialized Explanation: An internal initialization error occurred. User Action: Contact your Oracle support representative for assistance.
41.221 – NOTNEG
qualifier or keyword not negatable - remove "NO" or omit Explanation: The word "no" preceded a qualifier or keyword, but the qualifier or keyword cannot be specified as a negative. User Action: Reenter the qualifier or keyword in a non-negated form.
41.222 – NOTNETDEV
not a network communication device Explanation: An attempt to be a network service provider has failed, because the process was not started by the DECnet spawner or inetd daemon. User Action: Check for a programming error. Verify that the device specified in the queue I/O request is a valid communications device.
41.223 – NOTQUEUED
request not queued Explanation: The lock request was made with a flag setting indicating that if the request cannot be granted synchronously, it should be ignored. User Action: Wait and retry the operation.
41.224 – NOTSYSCONCEAL
non-system concealed device name in filename Explanation: Concealed device names must be defined in the system logical table. User Action: If the device name has to be concealed, define it in the system logical table (LNM$SYSTEM_TABLE) or in the cluster-wide system logical name table (LNM$SYSCLUSTER_TABLE).
41.225 – NOTYPE
file type specified where not permitted
41.226 – NOVALU
value not allowed - remove value specification Explanation: A qualifier or keyword that does not accept a value is specified with a value. User Action: Reenter the command omitting a value for the qualifier or keyword.
41.227 – NOVER
file version specified where not permitted
41.228 – NOWILD
wild card specified where not permitted Explanation: A wildcard was specified in a filename component in a context in which it is not allowed. User Action: Review the command used and correct it.
41.229 – NO_SUCH_TOPIC
topic does not exist in help library Explanation: Help was asked for a topic that does not exist in the help library. User Action: Contact your Oracle support representative for assistance.
41.230 – NO_WRK
work files required - cannot do sort in memory as requested Explanation: The work-files=0 qualifier is specified, indicating the data would fit in memory, but the data is too large. User Action: Either increase the working set quota, or allow the sort utility to use two or more work files. If this message accompanies the SORTERR error, see the description of that message for more information.
41.231 – NULFIL
missing or invalid file specification - respecify Explanation: The command interpreter expected a file specification, but no file specification was entered. User Action: Reenter the command. Place the file specification in the proper position.
41.232 – NUMBER
invalid numeric value - supply an integer Explanation: A numeric value is specified for a command that expects values in certain radices or interprets values within a particular context. For example, the number 999 is entered when an octal value is required, or an alphabetic value is specified in a context that requires a numeric value. User Action: Reenter the command using legal values.
41.233 – NUMELEMENTS
number of elements incorrect for component Explanation: An incorrect number of elements was specified for initialization of the date and time format. User Action: Contact your Oracle support representative for assistance.
41.234 – NUM_KEY
too many keys specified Explanation: Up to 255 key definitions are allowed. Either too many key definitions have been specified or the NUMBER value is wrong. User Action: Check your command string key field specifications.
41.235 – NYI
functionality is not yet implemented Explanation: The functionality has not yet been implemented. User Action: Contact your Oracle support representative for assistance.
41.236 – ONEDELTIM
at least one delta time is required Explanation: The DATE and TIME services require at least one of the inputs to be a delta time. User Action: Contact your Oracle support representative for assistance.
41.237 – ONEVAL
list of values not allowed - check use of comma (,) Explanation: A qualifier, keyword, or parameter that accepts only a single value is specified with multiple values. User Action: Reenter the command specifying only one value.
41.238 – OPENERR
cannot open file Explanation: An error occurred during an attempt to open a file. User Action: Examine the secondary messages to determine the reason for the failure.
41.239 – OPENFILEMAPPINGERR
Error encountered while opening a file mapping object Explanation: A system error was encountered while opening a file mapping object. User Action: Examine the secondary message to determine the reason for the failure.
41.240 – OPENMUTEXERR
Error encountered while opening mutex Explanation: A system error was encountered while opening a mutex object. User Action: Examine the secondary message to determine the reason for the failure.
41.241 – OPEN_ERR
an error was returned by the open function Explanation: An error was returned by the 'open' function. User Action: Please refer to the reference pages for 'open' to interpret the meaning of the errno returned by this function.
41.242 – OUTCONERR
output conversion error Explanation: The result would have exceeded the fixed-length string. User Action: Increase the length of the fixed-length string, and retry the operation.
41.243 – OUTSTRTRU
output string truncated Explanation: The source and destination strings are character-coded text datum, and the destination string cannot contain all of the output string. The result is truncated. User Action: No action is required.
41.244 – PARMDEL
invalid parameter delimiter - check use of special characters Explanation: A command contains an invalid character following the specification of a parameter, or an invalid character is present in a file specification. User Action: Check the command string for a spelling or grammatical error. Reenter the command.
41.245 – PARNOTGRANT
parent lock must be granted Explanation: A programming error occurred because an attempt was made to create a sublock under a parent lock that was not granted. User Action: This error message indicates a possible error in the locking protocols. Contact your Oracle support representative for assistance.
41.246 – PRESENT
entity value is present in command string Explanation: You do not have the appropriate privilege to perform this operation. User Action: See your database administrator, and request the appropriate privilege for the attempted operation.
41.247 – PROTERR
Error encountered during attempt to modify protection of a file Explanation: An error was encountered during an attempt to modify the protection of a file. User Action: Examine the secondary message to determine the reason for the failure. You may not have the necessary privileges to modify the protection for that file.
41.248 – PTHTOOLON
file path length too long - respecify Explanation: The user-supplied file specification is too long (greater than 255 characters). User Action: Reenter the file name with fewer characters.
41.249 – PWDEXPIRED
password has expired Explanation: The authentication of the user has failed because a password provided has expired and a new password is required to complete the request. User Action: The password for this user has expired and a new password is required. See your database administrator for help on changing your password.
41.250 – READERR
read error Explanation: An error occurred during a read from a mailbox or socket. User Action: Examine the secondary messages to determine the reason for the failure.
41.251 – READ_ERR
an error was returned by the read function Explanation: An error was returned by the 'read' function. User Action: Please refer to the reference pages for 'read' to interpret the meaning of the errno returned by this function.
41.252 – REMOTE
remote file specification is not allowed Explanation: A node name was found in the file specification. Node names cannot be used. User Action: Use a file name without a node specification.
41.253 – RESINUSE
requested resource already in use Explanation: Specified resource (event flag, message facility, etc.) is in use. User Action: Contact your Oracle support representative for assistance.
41.254 – RETRY
retry operation Explanation: This status is returned if the lock management services are performing some internal re-building of the lock tables when the caller requests a lock. User Action: This error message indicates a possible error in the locking protocols. Contact your Oracle support representative for assistance.
41.255 – RNF
record not found Explanation: A requested record could not be located. Either the record was never written or it has been deleted. User Action: Modify the program, if necessary, to detect and respond to the condition.
41.256 – RSLOVF
buffer overflow - specify fewer command elements Explanation: The command buffer has overflowed. User Action: Specify fewer command elements.
41.257 – RTNERROR
unexpected error status from user-written routine Explanation: A user-written comparison or equal-key routine returned an unexpected error status. User Action: Correct your comparison or equal-key routine.
41.258 – SETEUID_ERR
an error was returned by the seteuid function Explanation: An error was returned by the 'seteuid' function. User Action: Please refer to the reference pages for 'seteuid' to interpret the meaning of the errno returned by this function.
41.259 – SETREUID_ERR
an error was returned by the setreuid function Explanation: An error was returned by the 'setreuid' function. User Action: Please refer to the reference pages for 'setreuid' to interpret the meaning of the errno returned by this function.
41.260 – SETUID_ERR
an error was returned by the setuid function Explanation: An error was returned by the 'setuid' function. User Action: Please refer to the reference pages for 'setuid' to interpret the meaning of the errno returned by this function.
41.261 – SHMATERR
Error encountered during attach to shared memory Explanation: A system error was encountered during an attach to a shared memory segment that is used for concurrency and synchronization operations. User Action: Examine the secondary message to determine the reason for the failure.
41.262 – SHMCTLERR
Error encountered while controlling shared memory Explanation: A system error was encountered while controlling a shared memory region that was created for concurrency and synchronization operations. User Action: Examine the secondary message to determine the reason for the failure.
41.263 – SHMDTERR
Error encountered during detach from shared memory Explanation: A system error was encountered during a detach from a shared memory segment that is used for concurrency and synchronization operations. User Action: Examine the secondary message to determine the reason for the failure.
41.264 – SHMGETERR
Error encountered while creating shared memory Explanation: A system error was encountered during creation of a shared memory segment that is used for concurrency and synchronization operations. User Action: Examine the secondary message to determine the reason for the failure.
41.265 – SIGEXIT
process has died due to some signal Explanation: The process that is being looked at has died due to some signal. User Action: Check to see why the process died.
41.266 – SIGNAL
signal number <num>, code <num> Explanation: The unrecognized signal specified by the given number was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.267 – SIGNAL1
signal SIGHUP, code <num> Explanation: A 'hangup' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.268 – SIGNAL10
signal SIGBUS, code <num>, PC=!XA Explanation: A 'hardware fault' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.269 – SIGNAL11
signal SIGSEGV, code <num> Explanation: An 'invalid memory reference' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.270 – SIGNAL12
signal SIGSYS, code <num> Explanation: An 'invalid system call' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.271 – SIGNAL13
signal SIGPIPE, code <num> Explanation: A 'write to pipe with no readers' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.272 – SIGNAL14
signal SIGALRM, code <num> Explanation: A 'time out (alarm)' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.273 – SIGNAL15
signal SIGTERM, code <num> Explanation: A 'termination' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.274 – SIGNAL16
signal SIGURG, code <num> Explanation: An 'urgent condition' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. SIGIOINT (printer to backend error signal) is another name for this signal. User Action: Eliminate the cause of the error condition and retry the operation.
41.275 – SIGNAL17
signal SIGSTOP, code <num> Explanation: A 'stop' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. This exception is not expected to occur since the SIGSTOP signal cannot be caught. User Action: Eliminate the cause of the error condition and retry the operation.
41.276 – SIGNAL18
signal SIGTSTP, code <num> Explanation: A 'terminal stop character' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.277 – SIGNAL19
signal SIGCONT, code <num> Explanation: A 'continue stopped process' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.278 – SIGNAL2
signal SIGINT, code <num> Explanation: A 'terminal interrupt character' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.279 – SIGNAL20
signal SIGCHLD, code <num> Explanation: A 'change in status of child' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.280 – SIGNAL21
signal SIGTTIN, code <num> Explanation: A 'background read from control tty' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.281 – SIGNAL22
signal SIGTTOU, code <num> Explanation: A 'background write to control tty' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.282 – SIGNAL23
signal SIGIO, code <num> Explanation: An 'asynchronous I/O' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. SIGAIO (base lan i/o), SIGPTY (pty i/o), and SIGPOLL (STREAMS i/o) are other names for this signal. User Action: Eliminate the cause of the error condition and retry the operation.
41.283 – SIGNAL24
signal SIGXCPU, code <num> Explanation: A 'CPU time limit exceeded' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.284 – SIGNAL25
signal SIGXFSZ, code <num> Explanation: A 'file size limit exceeded' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.285 – SIGNAL26
signal SIGVTALRM, code <num> Explanation: A 'virtual time alarm' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.286 – SIGNAL27
signal SIGPROF, code <num> Explanation: A 'profiling time alarm' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.287 – SIGNAL28
signal SIGWINCH, code <num> Explanation: A 'terminal window size change' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.288 – SIGNAL29
signal SIGINFO, code <num> Explanation: A 'status request from keyboard' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. SIGPWR (Power Fail/Restart) is another name for this signal. User Action: Eliminate the cause of the error condition and retry the operation.
41.289 – SIGNAL3
signal SIGQUIT, code <num> Explanation: A 'terminal quit character' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.290 – SIGNAL30
signal SIGUSR1, code <num> Explanation: A 'user defined' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.291 – SIGNAL31
signal SIGUSR2, code <num> Explanation: A 'user defined' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.292 – SIGNAL4
signal SIGILL, code <num>, PC=!XA Explanation: An 'illegal hardware instruction' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.293 – SIGNAL5
signal SIGTRAP, code <num>, PC=!XA Explanation: A 'hardware fault' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. Decimal overflow, Decimal divide by 0, Decimal invalid operand, Assertion error, Null pointer error, Stack overflow, String length error, Substring error, Range error, and Subscript [n] range error are specific conditions producing this signal. User Action: Eliminate the cause of the error condition and retry the operation.
41.294 – SIGNAL6
signal SIGABRT, code <num> Explanation: An 'abnormal termination (abort)' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. SIGIOT (abort (terminate) process) and SIGLOST are other names for this signal. User Action: Eliminate the cause of the error condition and retry the operation.
41.295 – SIGNAL7
signal SIGEMT, code <num> Explanation: A 'hardware fault' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. User Action: Eliminate the cause of the error condition and retry the operation.
41.296 – SIGNAL8
signal SIGFPE, code <num>, PC=!XA Explanation: An 'arithmetic exception' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. Integer overflow, Integer divide by 0, Floating overflow, Floating divide by 0, Floating underflow, Floating invalid operation, Floating inexact result, and Reserved Operand are specific conditions producing this signal. User Action: Eliminate the cause of the error condition and retry the operation.
41.297 – SIGNAL9
signal SIGKILL, code <num> Explanation: A 'termination' signal was raised as an exception in response to the arithmetic or software condition specified by the given code. This exception is not expected to occur since the SIGKILL signal cannot be caught. User Action: Eliminate the cause of the error condition and retry the operation.
41.298 – SSCANF_ERR
an error was returned by the sscanf function Explanation: An error was returned by the 'sscanf' function. User Action: Please refer to the reference pages for 'sscanf' to interpret the meaning of the errno returned by this function.
41.299 – STABLEEXC
equal-key routine and stable option cannot both be specified Explanation: Both an equal-key routine and the SOR$M_STABLE option were specified when only one or the other is allowed. User Action: Specify either the equal-key routine or the stable option.
41.300 – STAT_ERR
an error was returned by the stat function Explanation: An error was returned by the 'stat' function. User Action: Please refer to the reference pages for 'stat' to interpret the meaning of the errno returned by this function.
41.301 – STDOUTERR
error writing to stdout Explanation: This general message indicates an error during a write to STDOUT. User Action: Examine the secondary messages to determine the reason for the failure.
41.302 – STRTOOLON
string argument is too long - shorten Explanation: A string did not fit into the specified receiving area, resulting in lost trailing characters. User Action: Correct your program to increase the area specified to receive the string.
41.303 – STRTRU
string truncated Explanation: A string did not fit into the specified receiving area, resulting in lost trailing characters. User Action: Correct your program to increase the area specified to receive the string.
41.304 – SUBLOCKS
cannot dequeue a lock with sublocks Explanation: A programming error occurred because an attempt was made to unlock a lock that had sublocks under it. User Action: This error message indicates a possible error in the locking protocols. Contact your Oracle support representative for assistance.
41.305 – SUPERSEDE
logical name superseded Explanation: The logical name has been created and a previously existing logical name with the same name has been deleted. User Action: No action is required.
41.306 – SYNCH
synchronous successful completion Explanation: This alternate success code indicates that the requested operation completed synchronously and as expected. User Action: No action is required.
41.307 – SYSTEM_ERR
an error was returned by the system function Explanation: An error was returned by the 'system' function. User Action: Please refer to the reference pages for 'system' to interpret the meaning of the errno returned by this function.
41.308 – TIMETRU
time hundredths of seconds truncated Explanation: A time was specified that had hundredths of seconds. This is not supported. User Action: Do not specify hundredths of seconds in time literals.
41.309 – TKNOVF
command element is too long - shorten Explanation: The command element buffer has overflowed. User Action: Shorten the command element and retry.
41.310 – TRU
truncation Explanation: An attempt was made to place more characters into a string than it could contain. The value is truncated on the right to fit. User Action: Do not exceed the maximum string length. Ignore this error if right truncation is acceptable.
41.311 – TRUNCERR
error truncating file Explanation: An error occurred during truncation of the size of the specified file. User Action: Examine the secondary messages to determine the reason for the failure.
41.312 – UNDOPTION
undefined option flag was set Explanation: Only those option flags used by SORT MERGE can be set. All other bits in the longword are reserved and must be zero. User Action: Correct your specification file.
41.313 – UNEXPERR
unexpected system error Explanation: Some unexpected error occurred during execution of the software. User Action: Contact your Oracle support representative for assistance.
41.314 – UNKNOWN_USER
unknown user Explanation: An authentication routine cannot identify the user. User Action: Use the USER and USING clauses to specify a valid user.
41.315 – UNLINK_ERR
an error was returned by the unlink function Explanation: An error was returned by the 'unlink' function. User Action: Please refer to the reference pages for 'unlink' to interpret the meaning of the errno returned by this function.
41.316 – UNMAPVIEWOFFILEERR
Error encountered while unmapping a view of file Explanation: A system error was encountered while unmapping a view of the file from the virtual address space. User Action: Examine the secondary message to determine the reason for the failure.
41.317 – UNRFORCOD
unrecognized format code Explanation: The format code is not recognized. User Action: Examine the format string for an invalid format code. The format string may be supplied in an environment variable or it can be hard-coded.
41.318 – UNSUPP_HW_CPUCNT
unsupported hardware CPU count Explanation: The system CPU count (number of processors in the computer) is not supported by this version. User Action: Contact your customer service center. There may be a version that supports your configuration.
41.319 – UNSUPP_HW_EV6
unsupported hardware DECchip 21264 or variant Explanation: The hardware DECchip 21264 or variant (EV6 microprocessor) is not supported by this version. User Action: Contact your customer service center. There may be a version that supports your hardware.
41.320 – UNSUPP_HW_EV7
unsupported hardware DECchip 21364 or variant Explanation: The hardware DECchip 21364 or variant (EV7 microprocessor) is not supported by this version. User Action: Contact your customer service center. There may be a future version that supports your hardware.
41.321 – UNSUPP_HW_EVX
unsupported hardware DECchip variant Explanation: The hardware DECchip variant microprocessor is not supported by this version. User Action: Contact your customer service center. There may be a version that supports your hardware.
41.322 – UNSUPP_HW_I64
unsupported hardware processor model Explanation: The Intel Itanium processor family or model is not supported by this version. User Action: Contact your customer service center. There may be a version that supports your hardware.
41.323 – VALNOTVALID
value block is not valid Explanation: This warning message is returned if the caller has specified the VALBLK flag in the flags argument to the service to request locks. Note that the lock has been successfully granted despite the return of this warning message. User Action: This error message indicates a possible error in the locking protocols. Contact your Oracle support representative for assistance.
41.324 – VALREQ
missing qualifier or keyword value - supply all required values Explanation: A keyword or qualifier that requires a value was specified without a value. User Action: Specify the required value, and retry the command.
41.325 – VASFULL
virtual address space full Explanation: An attempt to map a section of a file or a shared memory region failed because (1) there is not enough address space to map all the bytes, or (2) the specific address range specified is already allocated. User Action: Contact your Oracle support representative for assistance.
41.326 – VIRTUALALLOCERR
Error encountered while reserving/committing pages Explanation: A system error was encountered while committing or reserving a block of pages in the virtual address space. User Action: Examine the secondary message to determine the reason for the failure.
41.327 – VIRTUALFREEERR
Error encountered while releasing/de-committing pages Explanation: A system error was encountered while de-committing or releasing a block of pages in the virtual address space. User Action: Examine the secondary message to determine the reason for the failure.
41.328 – WAITPID_ERR
an error was returned by the waitpid function Explanation: An error was returned by the 'waitpid' function. User Action: Please refer to the reference pages for 'waitpid' to interpret the meaning of the errno returned by this function.
41.329 – WASCLR
normal successful completion Explanation: The specified event flag was previously 0. User Action: No action is required.
41.330 – WASSET
normal successful completion Explanation: The specified event flag was previously 1. User Action: No action is required.
41.331 – WORK_DEV
work file <str> must be on random access local device Explanation: Work files must be specified for random access devices that are local to the CPU on which the sort is being performed (that is, not on a node in a network). Random access devices are disk devices. User Action: Specify the correct device.
41.332 – WRITERR
write error Explanation: An error occurred during a write operation to a file, mailbox, or socket. User Action: Examine the secondary messages to determine the reason for the failure.
41.333 – WRITE_ERR
an error was returned by the write function Explanation: An error was returned by the 'write' function. User Action: Please refer to the reference pages for 'write' to interpret the meaning of the errno returned by this function.
41.334 – WRONGSTATE
invalid state for requested operation Explanation: A software protocol error has occurred. The error might be a value specified for a system function that is not valid at this time or a function that cannot be used at this time. For example, the error could be an attempt to read from an I/O channel that is closed. The identical read call would be valid after the channel was open. User Action: Determine the system call that returned the error. Verify that the service is being called correctly.
41.335 – WRONUMARG
wrong number of arguments, <num>, to <str> Explanation: A string facility entry is called with an incorrect number of arguments. User Action: A user who calls the string facility directly should check the argument list in the call.
41.336 – XVALNOTVALID
extended value block is not valid Explanation: This warning occurs as the result of a programming decision. The program read the Extended Lock Value Block, but the previous writer wrote a Short Lock Value Block. This warning message is returned if the caller has specified the XVALBLK flag in the flags argument to the service to request locks. Note that the lock has been successfully granted despite the return of this warning message. User Action: This error message indicates a possible error in the locking protocols. Contact your Oracle support representative for assistance.