Oracle DBMS_LOGMNR Built-In Package
Versions 8.1.5 - 19.3

Security Advisory
DBMS_LOGMNR is like a match: it can be used to light a stove and cook a delicious meal, or it can be used to burn the house down. Oracle attempts to protect this package from misuse by granting the EXECUTE privilege only to the EXECUTE_CATALOG_ROLE role, but that strategy is weak, as you can see below.

SQL> SELECT UNIQUE grantee
  2  FROM dba_role_privs
  3  WHERE granted_role IN ('EXECUTE_CATALOG_ROLE', 'EXP_FULL_DATABASE', 'IMP_FULL_DATABASE', 'DATAPUMP_EXP_FULL_DATABASE', 'DATAPUMP_IMP_FULL_DATABASE')
  4 * ORDER BY 1;

GRANTEE
------------------------------
DATAPUMP_EXP_FULL_DATABASE
DATAPUMP_IMP_FULL_DATABASE

DBA
EXP_FULL_DATABASE
GSMADMIN_INTERNAL
GSMUSER_ROLE
IMP_FULL_DATABASE
SYS


We have not yet unrolled GSMUSER_ROLE, and already EXECUTE_CATALOG_ROLE is granted to SYS and to three other roles; those roles are in turn granted to another user, GSMADMIN_INTERNAL, and to still more roles. The chain only gets longer as we try to figure out where those roles go and track it all back to the specific users that have rights to DBMS_LOGMNR. We can demand that a DBA never grant EXECUTE_CATALOG_ROLE to scott/tiger, but it isn't reasonable to expect the DBA to show the same level of attention when someone says they need to log in as GSMUSER to configure something in an application. And who, on the CISO's team, is fully aware of the need, if any, for someone to log in as GSMUSER?
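Unrolling the grant chain by hand is tedious; a hierarchical query against DBA_ROLE_PRIVS can walk the nesting in one pass. A sketch, assuming it is run with DBA privileges (grantees returned can be either users or roles):

```sql
-- Everyone who ultimately holds EXECUTE_CATALOG_ROLE, whether
-- directly or through any chain of nested role grants
SELECT DISTINCT grantee
FROM dba_role_privs
START WITH granted_role = 'EXECUTE_CATALOG_ROLE'
CONNECT BY granted_role = PRIOR grantee
ORDER BY 1;
```

Joining the result to DBA_USERS separates actual users from intermediate roles.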

So, why are we so concerned about security and this package? Review the "How Oracle Works" section below and, as you do, consider the fact that in every database product, not just Oracle, not just relational, all transactions are written to a decodable transaction log. What makes Oracle safer than most is that you can discover the vulnerability and make your database safer from misuse.
 
Recommended Security Rules

 NEVER
  • Do not grant execute on this package to any user or role except specific individuals who, as part of their regular job responsibilities, are required to mine redo logs for forensic information on transactions.
 WITH GREAT CARE
  • Create a vetting process, requiring management approval and regular reviews, and use that process as the only path for granting access to this package.
 CAUTIONS
  • This package can be used to circumvent protections implemented with Database Vault, encryption, and other tools, so its use needs to be carefully controlled and monitored.
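One concrete way to implement the monitoring caution, assuming Oracle 12c or later with unified auditing enabled (the policy name here is arbitrary):

```sql
-- Audit every execution of DBMS_LOGMNR (unified auditing, 12c+)
CREATE AUDIT POLICY logmnr_usage_pol
  ACTIONS EXECUTE ON sys.dbms_logmnr;

AUDIT POLICY logmnr_usage_pol;

-- Review captured executions
SELECT event_timestamp, dbusername, action_name, object_name
FROM unified_audit_trail
WHERE object_name = 'DBMS_LOGMNR'
ORDER BY event_timestamp;
```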
 
How Oracle Works
DBMS_LOGMNR enables online querying of archived redo log files through a SQL interface and provides the tools required to start, perform, and stop log mining. Before archived redo logs can be mined they must be created, which requires that the database be in ARCHIVELOG mode ... which is the way every test and production database should be run by default. The following steps assume that the database is not yet in ARCHIVELOG mode.

conn sys@pdbdev as sysdba

SQL> SELECT log_mode FROM v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL> SHUTDOWN IMMEDIATE;

SQL> STARTUP MOUNT;

ALTER DATABASE ARCHIVELOG;

ALTER DATABASE OPEN;

SQL> SELECT log_mode FROM v$database;

LOG_MODE
------------
ARCHIVELOG

This next sequence of steps demonstrates the ability to force a redo log switch, which generates a new archived redo log. This is not necessary for log mining but greatly simplifies the task for demonstration purposes.

desc gv$log

SQL> desc gv$log
Name            Null?  Type
--------------- ------ ------------
INST_ID                NUMBER
GROUP#                 NUMBER
THREAD#                NUMBER
SEQUENCE#              NUMBER
BYTES                  NUMBER
BLOCKSIZE              NUMBER
MEMBERS                NUMBER
ARCHIVED               VARCHAR2(3)
STATUS                 VARCHAR2(16)
FIRST_CHANGE#          NUMBER
FIRST_TIME             DATE
NEXT_CHANGE#           NUMBER
NEXT_TIME              DATE
CON_ID                 NUMBER

SELECT group#, thread#, sequence#, members, status
FROM v$log;

SQL> SELECT group#, thread#, sequence#, members, status
2 FROM v$log;

 GROUP#  THREAD#  SEQUENCE#  MEMBERS STATUS
------- -------- ---------- -------- ---------
      1        1         13        1 INACTIVE
      2        1         14        1 CURRENT
      3        1         12        1 INACTIVE

ALTER SYSTEM switch logfile;

System altered.

SELECT group#, thread#, sequence#, members, status
FROM gv$log;

SQL> SELECT group#, thread#, sequence#, members, status
2 FROM gv$log;

 GROUP#  THREAD#  SEQUENCE#  MEMBERS STATUS
------- -------- ---------- -------- ---------
      1        1         13        1 INACTIVE
      2        1         14        1 ACTIVE
      3        1         15        1 CURRENT

Log mining consists of using the DBMS_LOGMNR package to read redo into an in-memory structure and querying the contents using SQL statements. As shown in the next section ... v$logmnr_contents cannot be queried until DBMS_LOGMNR is started.
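The complete workflow, sketched end to end (the file name and the options choice here are placeholders; DICT_FROM_ONLINE_CATALOG is used so that table and column names are resolved rather than shown as OBJ# numbers, and it requires the database to be open):

```sql
-- Minimal LogMiner session skeleton
exec sys.dbms_logmnr.add_logfile('/path/to/archived_redo.arc');
exec sys.dbms_logmnr.start_logmnr(options => sys.dbms_logmnr.dict_from_online_catalog);

SELECT scn, operation, sql_redo
FROM v$logmnr_contents;

exec sys.dbms_logmnr.end_logmnr;
```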

SELECT COUNT(*)
FROM v$logmnr_contents;

SQL> SELECT COUNT(*)
  2  FROM v$logmnr_contents;
FROM v$logmnr_contents
*
ERROR at line 2:
ORA-01306: dbms_logmnr.start_logmnr() must be invoked before selecting from v$logmnr_contents


To demonstrate log mining we create a simple table containing PII and a Multi-Factor Authentication (MFA) response. What is mined is the transaction information in the redo logs: the same logs that are shipped to a DR site by Data Guard, the same logs that are backed up by RMAN. Encryption at rest has no effect on what is logged and available for mining.

CREATE TABLE uwclass.pii_info(
person_id   NUMBER,
f_name      VARCHAR2(20),
l_name      VARCHAR2(20),
location    VARCHAR2(20),
MFA_answer  VARCHAR2(20));

The steps we are going to take next are as follows:
1. switch log files so subsequent transactions are isolated to make the demo simpler
2. capture the current system change number (beginning SCN)
3. log in as the application user, insert one row into the table and commit the transaction
4. log back in as the DBA
5. capture the current system change number (ending SCN)
6. switch log files so that activities in steps 2-5 are isolated (along with a small amount of playing around)


ALTER SYSTEM switch logfile;

-- capture starting SCN
SELECT current_scn
FROM v$database;

CURRENT_SCN
-----------
    3061281

SQL> conn uwclass/uwclass

INSERT INTO uwclass.pii_info
(person_id, f_name, l_name, location, mfa_answer)
VALUES
(1, 'Dan', 'Morgan', 'Tahiti', 'Drusilla');

COMMIT;

SQL> conn sys@pdbdev as sysdba

-- capture ending SCN
SQL> SELECT current_scn
  2  FROM v$database;

CURRENT_SCN
-----------
    3061345

ALTER SYSTEM switch logfile;

Before we can begin mining we look in $ORACLE_BASE/fast_recovery_area/<database_name>/ARCHIVELOG/<YYYY_MM_DD> and use the name of the most recent archived redo log for the mining.
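Rather than browsing the file system, the same name can be pulled from the v$archived_log view. A sketch, using 12c+ row-limiting syntax and assuming the log has not yet been aged out:

```sql
-- Name of the most recently generated archived redo log
SELECT name, sequence#, first_change#, next_change#
FROM v$archived_log
ORDER BY sequence# DESC
FETCH FIRST 1 ROW ONLY;
```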

DBMS_LOGMNR.ADD_LOGFILE queues a log file for reading. To review transactions in multiple log files, each file requires a separate ADD_LOGFILE call.
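Several files can be queued in a single PL/SQL block using the package's NEW and ADDFILE constants (values 1 and 3, per the constants table below); the file names here are the hypothetical ones reused in the MINE_VALUE demo:

```sql
BEGIN
  -- First file starts a new list ...
  sys.dbms_logmnr.add_logfile('c:\temp\demo1.arc', sys.dbms_logmnr.NEW);
  -- ... each additional file is appended to it
  sys.dbms_logmnr.add_logfile('c:\temp\demo2.arc', sys.dbms_logmnr.ADDFILE);
  sys.dbms_logmnr.add_logfile('c:\temp\demo3.arc', sys.dbms_logmnr.ADDFILE);
END;
/
```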


SQL> exec sys.dbms_logmnr.add_logfile('c:\u01\orabase19\fast_recovery_area\ORABASEXIX\ARCHIVELOG\2019_07_07\O1_MF_1_19_GL49V0X1_.ARC');

PL/SQL procedure successfully completed.

Next begin redo log mining.

SQL> exec sys.dbms_logmnr.start_logmnr(3061281,3061345, options=>2);

PL/SQL procedure successfully completed.

At which point we can query v$logmnr_contents to identify the SCN of the transaction(s) of interest.

col object_name format a30

SQL> SELECT v.scn, v.commit_timestamp, v.table_name, o.object_name, v.operation
  2  FROM sys.v_$logmnr_contents v, dba_objects_ae o
  3  WHERE SUBSTR(v.table_name,6) = o.object_id;

     SCN COMMIT_TIMESTAMP     TABLE_NAME    OBJECT_NAME    OPERATION
-------- -------------------- ------------- -------------- ------------
 3061305 07-JUL-2019 12:08:40 OBJ# 22       USER$          UNSUPPORTED
 3061308 07-JUL-2019 12:08:44 OBJ# 22       USER$          UNSUPPORTED
 3061320 07-JUL-2019 12:09:15 OBJ# 14       SEG$           UNSUPPORTED
 3061321 07-JUL-2019 12:09:15 OBJ# 14       SEG$           UNSUPPORTED
 3061321 07-JUL-2019 12:09:15 OBJ# 4        TAB$           UNSUPPORTED
 3061320 07-JUL-2019 12:09:15 OBJ# 81       DEFERRED_STG$  DELETE
 3061301                      OBJ# 145      SYSAUTH$       INSERT
 3061285 07-JUL-2019 12:08:22 OBJ# 73978    AUD$UNIFIED    INSERT
 3061326 07-JUL-2019 12:09:19 OBJ# 74268    PII_INFO       INSERT

9 rows selected.

Describe v$logmnr_contents: a large amount of information is loaded that can be used in conjunction with other tools and objects to pull explain plans, bind variables, and other information that should be highly secure. For our purposes we will look at only 4 columns of the view.

col seg_owner format a10
col table_name format a11
col sql_redo format a34

SQL> SELECT seg_owner, table_name, row_id, sql_redo
  2  FROM sys.v_$logmnr_contents
  3* WHERE scn = 3061326;

SEG_OWNER  TABLE_NAME  ROW_ID             SQL_REDO
---------- ----------- ------------------ ---------------------------------------
                       AAASIcAAAAAAAAAAAA set transaction read write;
UNKNOWN    OBJ# 74268  AAASIcAAOAAAACdAAA insert into "UNKNOWN"."OBJ# 74268"
                                          ("COL 1","COL 2","COL 3","COL 4",
                                          "COL 5") values (HEXTORAW('c102'),H
                                          EXTORAW('44616e'),HEXTORAW('4d6f72
                                          67616e'),HEXTORAW('546168697469'),
                                          HEXTORAW('44727573696c6c61'));

The table has 5 columns and we can clearly read the 5 values inserted.

Finally, stop log mining.

exec sys.dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.
Decoding HEX to human readable form. The HEXTORAW function is used to convert the HEX values that are physically stored in an Oracle table into RAW and the UTL_RAW package's CAST_TO_VARCHAR2 used to transform the RAW values into a human readable form.

col f_name format a10
col l_name format a10
col location format a20
col mfa_answer format a20

SELECT utl_raw.cast_to_varchar2(HEXTORAW('44616e')) AS F_NAME,
       utl_raw.cast_to_varchar2(HEXTORAW('4d6f7267616e')) AS L_NAME,
       utl_raw.cast_to_varchar2(HEXTORAW('546168697469')) AS LOCATION,
       utl_raw.cast_to_varchar2(HEXTORAW('44727573696c6c61')) AS MFA_ANSWER
FROM dual;

F_NAME     L_NAME     LOCATION   MFA_ANSWER
---------- ---------- ---------- ----------
Dan        Morgan     Tahiti     Drusilla

Consider your current security measures. Are your redo logs readable by anyone who has a database on a laptop and knows how to perform log mining? 90+% of all Oracle customers are at risk; not as bad as Microsoft SQL Server, where up until SQL Server 2016 100% of all customers were at risk, and likely 99+% still are.
 
DBMS_LOGMNR Package Information
AUTHID DEFINER
Constants
Name                   Data Type       Value
Add archive log option flags
NEW                    BINARY_INTEGER      1
REMOVEFILE             BINARY_INTEGER      2
ADDFILE                BINARY_INTEGER      3
Status column option flags
VALID_SQL              BINARY_INTEGER      0
INVALID_SQL            BINARY_INTEGER      2
UNGUARANTEED_SQL       BINARY_INTEGER      3
CORRUPTED_BLK_IN_REDO  BINARY_INTEGER      4
ASSEMBLY_REQUIRED_SQL  BINARY_INTEGER      5
HOLE_IN_LOGSTREAM      BINARY_INTEGER   1291

Start LOGMNR option flags
Name Data Type Value Description
COMMITTED_DATA_ONLY BINARY_INTEGER 2 If set, DML statements corresponding to committed transactions are returned. DML statements corresponding to a committed transaction are grouped together. Transactions are returned in their commit order. Transactions that are rolled back or in-progress are filtered out, as are internal redo records (those related to index operations, management, and so on).

If this option is not set, all rows for all transactions (committed, rolled back, and in-progress) are returned in the order in which they are found in the redo logs (in order of SCN values).
CONTINUOUS_MINE BINARY_INTEGER 1024 Directs LogMiner to automatically add redo log files, as needed, to find the data of interest. You only need to specify the first log to start mining, or just the starting SCN or date to indicate to LogMiner where to begin mining logs. You are not required to specify any redo log files explicitly. LogMiner automatically adds and mines the (archived and online) redo log files for the data of interest. This option requires that LogMiner is connected to the same database instance that is generating the redo log files. It also requires that the database be mounted and that archiving be enabled.

Beginning with Oracle Database release 10.1, the CONTINUOUS_MINE option is supported for use in an Oracle Real Application Clusters environment.
DDL_DICT_TRACKING BINARY_INTEGER 8 If the LogMiner dictionary in use is a flat file or in the redo log files, LogMiner updates its internal dictionary if a DDL event occurs. This ensures that correct SQL_REDO and SQL_UNDO information is maintained for objects that are modified after the LogMiner internal dictionary is built. The database to which LogMiner is connected must be open.

This option cannot be used in conjunction with the DICT_FROM_ONLINE_CATALOG option.
DICT_FROM_ONLINE_CATALOG BINARY_INTEGER 16 Directs LogMiner to use the current online database dictionary rather than a LogMiner dictionary contained in a flat file or in the redo log files being analyzed.

This option cannot be used in conjunction with the DDL_DICT_TRACKING option. The database to which LogMiner is connected must be the same one that generated the redo log files.

Expect to see a value of 2 in the STATUS column of the GV$LOGMNR_CONTENTS view if the table definition in the database does not match the table definition in the redo log file.
DICT_FROM_REDO_LOGS BINARY_INTEGER 32 If set, LogMiner expects to find a LogMiner dictionary in the redo log files that were specified. The redo log files are specified with the DBMS_LOGMNR.ADD_LOGFILE procedure or with the DBMS_LOGMNR.START_LOGMNR procedure with the CONTINUOUS_MINE option.
NO_DICT_RESET_ONSELECT BINARY_INTEGER 1 Will be deprecated soon
NO_ROWID_IN_STMT BINARY_INTEGER 2048 If set, the ROWID clause is not included in the reconstructed SQL statements. The redo log file may already contain logically unique identifiers for modified rows if supplemental logging is enabled.

When using this option, you must be sure that supplemental logging was enabled in the source database at the appropriate level and that no duplicate rows exist in the tables of interest. LogMiner does not make any guarantee regarding the uniqueness of logical row identifiers.
NO_SQL_DELIMITER BINARY_INTEGER 64 If set, the SQL delimiter (a semicolon) is not placed at the end of reconstructed SQL statements. This is helpful for applications that open a cursor and then execute the reconstructed statements.
PRINT_PRETTY_SQL BINARY_INTEGER 512 If set, LogMiner formats the reconstructed SQL statements for ease of reading. These reconstructed SQL statements are not executable.
SKIP_CORRUPTION BINARY_INTEGER 4 Directs a select operation on the GV$LOGMNR_CONTENTS view to skip any corruptions in the redo log file being analyzed and continue processing. This option works only when a block in the redo log file (and not the header of the redo log file) is corrupt. You should check the INFO column in the GV$LOGMNR_CONTENTS view to determine the corrupt blocks skipped by LogMiner. When a corruption in the redo log file is skipped, the OPERATION column contains the value CORRUPTED_BLOCKS, and the STATUS column contains the value 1343.
STRING_LITERALS_IN_STMT BINARY_INTEGER 4096 Undocumented
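The options parameter is a bit mask, so flags are combined by addition; using the package constants is clearer than the raw value 2 passed in the demo above. A sketch, reusing that demo's SCN range:

```sql
-- COMMITTED_DATA_ONLY (2) + PRINT_PRETTY_SQL (512) = 514
BEGIN
  sys.dbms_logmnr.start_logmnr(
    startscn => 3061281,
    endscn   => 3061345,
    options  => sys.dbms_logmnr.committed_data_only +
                sys.dbms_logmnr.print_pretty_sql);
END;
/
```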
Data Types
SUBTYPE Length IS BINARY_INTEGER;

SUBTYPE ThreadId IS BINARY_INTEGER;

-- workarounds for the lack of constrained subtypes
LogFileNameTemplate VARCHAR2(256);
SUBTYPE LogFileName IS LogFileNameTemplate%TYPE;

LogFileDescTemplate VARCHAR2(256);
SUBTYPE LogFileDescription IS LogFileDescTemplate%TYPE;
Dependencies
DBA_LOGMNR_LOG GV_$LOGMNR_CALLBACK GV_$LOGMNR_PARAMETERS
DBA_LOGMNR_PURGED_LOG GV_$LOGMNR_CONTENTS GV_$LOGMNR_PROCESS
DBA_LOGMNR_SESSION GV_$LOGMNR_DICTIONARY GV_$LOGMNR_REGION
DBMS_APPLY_ADM_INTERNAL GV_$LOGMNR_DICTIONARY_LOAD GV_$LOGMNR_SESSION
DBMS_LOGMNR_INTERNAL GV_$LOGMNR_LATCH GV_$LOGMNR_STATS
DBMS_STREAMS_ADM_IVK GV_$LOGMNR_LOGFILE GV_$LOGMNR_TRANSACTION
DBMS_STREAMS_ADM_UTL GV_$LOGMNR_LOGS  
Documented Yes
Exceptions
Error Code Reason
ORA-00904 Value specified for the column_name parameter is not a fully qualified column name.
ORA-01281 startScn or endScn parameter specified is not a valid SCN, or startScn is greater than endScn
ORA-01282 startTime parameter not between years 1988 and 2110 or endTime parameter is greater than year 2110
ORA-01283 The value specified in the Options parameter is not a NUMBER or is not a known LogMiner Adhoc option
ORA-01284 Specified dictionary file in DictFileName parameter has a length greater than 256 or cannot be opened
ORA-01285 DictFileName parameter is not a valid VARCHAR2
ORA-01286 Options specified require start time or start SCN
ORA-01287 Specified file is from a different database incarnation
ORA-01289 Specified file has already been added to the list. Duplicate redo log files cannot be added.
ORA-01290 Specified file is not in the current list and therefore cannot be removed from the list.
ORA-01291 Redo files needed to satisfy the user's requested SCN/time range are missing.
The user can specify ALLOW_MISSING_LOGS option. Missing logs are not allowed when DDL tracking is in use.
ORA-01292 No log file has been registered with LogMiner
ORA-01293 Mounted database required for options specified (CONTINUOUS_MINE)
ORA-01294 Error while processing the data dictionary extract
ORA-01295 DB_ID of the data dictionary does not match that of the redo logs
ORA-01296 Character set specified in the data dictionary does not match (is incompatible with) that of the database
ORA-01297 Redo version mismatch between the dictionary and the registered redo logs
ORA-01298 More than one dictionary source was specified or DDL_DICT_TRACKING was requested with DICT_FROM_ONLINE_CATALOG
ORA-01299 Dictionary is from a different database incarnation
ORA-01300 Writable database required for options specified (DDL_DICT_TRACKING, DICT_FROM_REDO_LOGS,
DICT_FROM_ONLINE_CATALOG)
ORA-01323 A LogMiner dictionary is not associated with the LogMiner session
ORA-01324 Specified file cannot be added to the list because there is a DB_ID mismatch.
ORA-01371 A logfile containing the dictionary dump to redo logs is missing
First Available 8.1.5
Pragmas pragma TIMESTAMP('1998-05-05:11:25:00');
Security Model Owned by SYS with EXECUTE granted to the EXECUTE_CATALOG_ROLE role
Source {ORACLE_HOME}/rdbms/admin/dbmslm.sql
Subprograms
 
ADD_LOGFILE
Registers log files to be analyzed

dbms_logmnr.add_logfile(
logfilename IN VARCHAR2,
options     IN BINARY_INTEGER DEFAULT ADDFILE);
See MINE_VALUE Demo Below
 
COLUMN_PRESENT
Call this function for any row returned from the V$LOGMNR_CONTENTS view to determine if undo or redo column values exist for the column specified by the column_name input parameter to this function -- designed to be used in conjunction with the MINE_VALUE function
dbms_logmnr.column_present(
sql_redo_undo IN NUMBER   DEFAULT 0,
column_name   IN VARCHAR2 DEFAULT '')
RETURN BINARY_INTEGER;
See MINE_VALUE Demo Below
 
END_LOGMNR
Completes a log miner session

dbms_logmnr.end_logmnr;
See MINE_VALUE Demo Below
 
INIT_REPLICATION_METADATA (new 19c)
Undocumented

dbms_logmnr.init_replication_metadata;
exec dbms_logmnr.init_replication_metadata;
 
MINE_VALUE
Facilitates queries based on a column's data value

dbms_logmnr.mine_value(
sql_redo_undo IN NUMBER DEFAULT 0,
column_name IN VARCHAR2 DEFAULT '')
RETURN VARCHAR2;
conn / as sysdba

SHUTDOWN IMMEDIATE;

STARTUP MOUNT;

ALTER DATABASE ARCHIVELOG;

ALTER DATABASE OPEN;

GRANT select ON v$logmnr_contents TO uwclass;

desc gv$log

SELECT group#, thread#, sequence#, members, status
FROM v$log;

ALTER SYSTEM switch logfile;

SELECT group#, thread#, sequence#, members, status
FROM gv$log;

desc v$logmnr_contents

SELECT COUNT(*)
FROM v$logmnr_contents;

GRANT select ON v_$database TO uwclass;

conn uwclass/uwclass

-- capture starting SCN
SELECT current_scn
FROM v$database;
-- 7466113

UPDATE airplanes
SET customer_id = 'FIND'
WHERE line_number = 13397;

COMMIT;

-- capture ending SCN
SELECT current_scn
FROM v$database;
-- 7466134

ALTER SYSTEM switch logfile;

-- copy the archived redo logs to c:\temp as demo1.arc, demo2.arc, demo3.arc

exec sys.dbms_logmnr.add_logfile('c:\temp\demo1.arc');
exec sys.dbms_logmnr.add_logfile('c:\temp\demo2.arc');
exec sys.dbms_logmnr.add_logfile('c:\temp\demo3.arc');
exec sys.dbms_logmnr.start_logmnr(7466113,7466134);

exec sys.dbms_logmnr.start_logmnr(7466113, 7466134, options=>2);

col object_name format a30

desc v$logmnr_contents;
-- note abs_file#, rel_file#, data_blk#

SELECT v.scn, v.commit_timestamp, v.table_name, o.object_name, v.operation
FROM sys.v_$logmnr_contents v, dba_objects_ae o
WHERE SUBSTR(v.table_name,6) = o.object_id;

SELECT sql_redo
FROM sys.v_$logmnr_contents;

SELECT sql_undo
FROM sys.v_$logmnr_contents;

exec sys.dbms_logmnr.end_logmnr;

conn / as sysdba

SHUTDOWN IMMEDIATE;

STARTUP MOUNT EXCLUSIVE;

ALTER DATABASE NOARCHIVELOG;

ALTER DATABASE OPEN;

SELECT info
FROM gv$logmnr_contents;

SELECT sql_redo, sql_undo
FROM gv$logmnr_contents
WHERE username = 'UWCLASS';

SELECT utl_raw.cast_to_varchar2(HEXTORAW('53414c')) FROM dual;

-- customer_id values and their HEX encodings
NWO  HEXTORAW('4e574f')
USAF HEXTORAW('55534146')
DAL  HEXTORAW('44414c')
SAL  HEXTORAW('53414c')

SELECT sql_redo
FROM sys.v_$logmnr_contents
WHERE seg_name = 'AIRPLANES'
AND seg_owner = 'UWCLASS'
AND operation = 'UPDATE'
AND sys.dbms_logmnr.mine_value(REDO_VALUE, 'CUSTOMER_ID') <>  sys.dbms_logmnr.mine_value(UNDO_VALUE, 'CUSTOMER_ID');

exec sys.dbms_logmnr.end_logmnr;
 
PROFILE (new 19c)
Undocumented

dbms_logmnr.profile(
options          IN BINARY_INTEGER DEFAULT 0,
schema           IN VARCHAR2       DEFAULT '',
startSCN         IN NUMBER         DEFAULT 0,
endSCN           IN NUMBER         DEFAULT 0,
startTime        IN DATE           DEFAULT '',
endTime          IN DATE           DEFAULT '',
threads          IN VARCHAR2       DEFAULT '',
logLocation      IN VARCHAR2       DEFAULT '',
logNameSpecifier IN VARCHAR2       DEFAULT '');
exec dbms_logmnr.profile(endTime => SYSDATE +1);
 
REMOVE_LOGFILE
Removes a redo log file from an existing list of redo log files for LogMiner to process

dbms_logmnr.remove_logfile(LogFileName IN VARCHAR2);
exec dbms_logmnr.remove_logfile('c:\temp\demo1.arc');
 
START_LOGMNR
Begins a log miner session

dbms_logmnr.start_logmnr(
startscn      IN NUMBER         DEFAULT 0,
endscn        IN NUMBER         DEFAULT 0,
starttime     IN DATE           DEFAULT '',
endtime       IN DATE           DEFAULT '',
dictfilename  IN VARCHAR2       DEFAULT '',
options       IN BINARY_INTEGER DEFAULT 0);
See MINE_VALUE Demo Above

Related Topics
DBMS_LOGMNR_D
UTL_RAW