SELECTions

LogMiner: From Birth to Puberty
This Oracle utility may be improving, but it still has some growing up to do.
by Damir Bersinic
10/2/2002 -- One of the most interesting features introduced in Oracle 8i (and updated in every release since) is the ability to extract information from the redo log files and get a chronological sense of what's happening in the database. This became possible with the birth of the LogMiner utility. LogMiner promises to let you reconstruct the SQL statements recorded in a log file so that you can either re-apply the changes made or reverse them (SQLREDO and SQLUNDO, respectively).
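
To give a sense of the basic mechanics, here is a minimal sketch of a LogMiner session in SQL*Plus; the log file and dictionary file paths are illustrative assumptions, not values from any particular system:

   -- Register a redo log file to analyze and start LogMiner
   BEGIN
      DBMS_LOGMNR.ADD_LOGFILE(
         LogFileName => '/u01/oradata/orcl/redo01.log',
         Options     => DBMS_LOGMNR.NEW);
      DBMS_LOGMNR.START_LOGMNR(
         DictFileName => '/u01/logmnr/dictionary.ora');
   END;
   /

   -- The reconstructed statements show up in V$LOGMNR_CONTENTS
   SELECT scn, username, sql_redo, sql_undo
     FROM v$logmnr_contents;

   -- Close the LogMiner session when done
   EXECUTE DBMS_LOGMNR.END_LOGMNR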

Like any child at its birth, LogMiner delivered more potential than ability. Its initial release suffered from many restrictions, including:

  • One record for each row changed: If you issued a statement like UPDATE CUSTOMERS SET TAX_RATE=7 WHERE STATE='ON' and this affected 100 rows, you would get 100 redo records instead of a single SQL statement when you analyzed the log file. Basically, it made it very hard to figure out what actually occurred in the database without looking at other elements during LogMiner analysis.
  • V$LOGMNR_CONTENTS data was only visible to the analyzing session: If you needed to save the results of your analysis, you had to create a permanent table using the CREATE TABLE ... AS SELECT * FROM V$LOGMNR_CONTENTS syntax (see the sketch after this list); otherwise, once you ended the analysis, the analysis data was gone and you'd have to perform it all over again.
  • An external dictionary file was required: LogMiner needed you to create a dictionary file, which exported the data dictionary of the database being analyzed to an external file located in one of the directories pointed to by the UTL_FILE_DIR Oracle initialization parameter. While an external dictionary file can be beneficial (it allows you to analyze a set of log files from database A on database B), it also required you to set the parameter up properly, restart the instance (and we can all do that whenever we want), and have access to the database server computer (not always possible in larger organizations where different groups handle the sysadmin and DBA tasks).
  • Chained and migrated rows could not be reconstructed: If you had many VARCHAR2 columns, this restriction essentially meant that you couldn't get everything back.
  • No support for Index-Organized Tables, tables created on clusters, LONGs or LOBs: If you made use of these features, you could forget about using LogMiner to reconstruct the SQLREDO and SQLUNDO. It would have been nice to be able to do this, though, especially if you were logging LOB changes for small LOB data such as employee photos or signatures.
  • DDL reconstructed as DML on system tables: I would have thought that, knowing a change was being made to system tables, it would be easy to reconstruct the DDL; however, Oracle initially did not do this. This has, thankfully, been rectified in the current release, and a switch now allows DDL to be tracked as DDL instead of DML on system tables.
  • Direct loads and parallel DML could not be reconstructed: Because of the way parallel DML and direct loads are logged (very minimally), there was no way to extract information on the data affected by these operations from the redo logs. In other words, do what you have been doing to ensure recoverability of these operations -- perform a backup.
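
As promised above, here is a sketch of the Oracle 8i-era housekeeping the first few restrictions imposed; the directory and file names are assumptions, and UTL_FILE_DIR must already include the dictionary directory (which is what forced the parameter change and instance restart):

   -- Export the data dictionary to a flat file with the
   -- DBMS_LOGMNR_D package (names here are illustrative)
   BEGIN
      DBMS_LOGMNR_D.BUILD(
         dictionary_filename => 'dictionary.ora',
         dictionary_location => '/u01/logmnr');
   END;
   /

   -- Persist the analysis before ending the session, since
   -- V$LOGMNR_CONTENTS was visible only to the analyzing session
   CREATE TABLE logmnr_results AS
      SELECT * FROM v$logmnr_contents;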

So, what could you do with LogMiner in Oracle 8i? In my mind, LogMiner, even with its limitations, was very useful because it allowed a DBA to get information on transaction frequency, workload and so on. Even so, its real worth came in letting you perform better incomplete recovery. As any DBA knows, the worst-case scenario for recovery is backing out unwanted changes to the database, as when a table is mistakenly dropped or an UPDATE or DELETE statement is committed that should not have been. In this case, you need to determine the SCN of the transaction that performed the change and recover the entire database up to the transaction just before it. Figuring out the SCN of the offending transaction is easier said than done, and this is where LogMiner comes in.

You can use LogMiner to scan the redo logs for the time period close to the offending transaction and, by querying the SCN and SQL_REDO columns of the V$LOGMNR_CONTENTS view, decipher which was the offending transaction. With this knowledge you can issue a RECOVER DATABASE UNTIL CHANGE command specifying the SCN of the transaction before the offending one and be relatively certain that everything before the disastrous event has been recovered and the offending change has not been. In the old days, you might have had to perform incomplete recovery several times to reach the same level of recovery unless you incorporated your own auditing functionality that tracked the SCN of every change. With LogMiner, you no longer need to impose the overhead of auditing using triggers (or other means) on the database, yet you can still get recoverability in these nasty cases.
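
Here is a hedged sketch of that process; the table name, filter and SCN value are made up for illustration:

   -- Look for the offending statement and note its SCN
   SELECT scn, timestamp, sql_redo
     FROM v$logmnr_contents
    WHERE seg_name = 'CUSTOMERS'
      AND operation = 'DELETE';

   -- With the database mounted, perform incomplete recovery up to
   -- the SCN determined above (3847561 is a made-up value), then
   -- open with RESETLOGS
   RECOVER DATABASE UNTIL CHANGE 3847561;
   ALTER DATABASE OPEN RESETLOGS;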

LogMiner Enters Puberty
If the initial release of LogMiner can be considered the first baby steps toward a useful utility, then with Oracle 9i Release 2 (version 9.2 or later) LogMiner has entered puberty. It still is not as complete as it could be, but many of the restrictions from the initial release have been lifted. LogMiner now supports recovering changes to clustered tables, data in LOB or LONG columns, and chained and migrated rows.

Furthermore, you can also reconstruct DDL statements and get information on changes made using parallel DML and direct load operations. Although it still cannot reconstruct SQLREDO and SQLUNDO for Index-Organized Tables, the current release has made great strides in getting back all of the data affected by a change -- it can now generate correct SQLREDO and SQLUNDO without losing portions of a row that may have been chained or stored in a LOB column.

But that's not all! LogMiner in Oracle 9i Release 2 has also added some really handy capabilities, such as:

  • The ability to track and generate DDL statements during analysis.
  • Expanded information in SQLREDO and SQLUNDO, including primary key and unique index values, so that the SQLREDO can be applied to a different database if needed -- to test recovery, for example.
  • The ability to create the LogMiner dictionary in the database being analyzed (removing the need to set UTL_FILE_DIR properly) or in a flat file, as well as the ability to have the LogMiner dictionary automatically updated as DDL is performed on the database (see the sketch after this list).
  • LogMiner Viewer, a nice GUI front-end for viewing LogMiner data.
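
For example, here is a sketch of starting LogMiner against the database being analyzed with the dictionary read straight from the online catalog, so no flat file or UTL_FILE_DIR setting is needed (the log file name is an assumption):

   BEGIN
      DBMS_LOGMNR.ADD_LOGFILE(
         LogFileName => '/u01/oradata/orcl/redo02.log',
         Options     => DBMS_LOGMNR.NEW);
      -- DICT_FROM_ONLINE_CATALOG reads the dictionary from the live
      -- database; with a flat-file or redo-log dictionary, the
      -- DDL_DICT_TRACKING option instead keeps the dictionary
      -- current as DDL is performed
      DBMS_LOGMNR.START_LOGMNR(
         Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
   END;
   /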

It Hasn't Grown Up Yet
With all the enhancements, some things still need to be done. First, many of the new features, such as DDL tracking and the ability to recover chained rows, require that you enable supplemental logging on the database. Supplemental logging inserts additional information into the log files for LogMiner to use. Enabled at the database level, it incurs some overhead on every operation; it can instead be enabled for individual tables only, but that can cause inconsistent information to be presented when the logs are analyzed. Watch yourself here, because database supplemental logging is on by default in Oracle 9i but off in Oracle 9i Release 2 (because of the performance overhead).
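
A quick sketch of checking and enabling it, using 9i Release 2 syntax; the table and column names are assumptions:

   -- See whether minimal supplemental logging is already on
   SELECT supplemental_log_data_min FROM v$database;

   -- Database level: applies to every change, hence the overhead
   ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

   -- Table level only: less overhead, but logs for other tables may
   -- analyze inconsistently (table and columns are illustrative)
   ALTER TABLE customers
      ADD SUPPLEMENTAL LOG GROUP cust_log_grp (customer_id, state) ALWAYS;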

Another issue -- one that cannot be resolved -- is that the LogMiner features available depend on the version of the database being analyzed. For example, even though you can analyze Oracle 8i log files on an Oracle 9.2 database, because Oracle 8i does not support the same level of LogMiner functionality as Oracle 9.2, you are limited in what you can reconstruct during the analysis.

Perhaps the most vexing thing about LogMiner is that not all datatypes are supported yet. While support for LOBs, LONGs and clustered tables has been added, the datatypes used with Oracle's object-relational features have not. This means that if you make use of VARRAYs, nested tables or object REFs, or create your own datatype and use it in a table, you cannot recover these data elements with LogMiner.

LogMiner still has not fulfilled its promise of generating SQLREDO and SQLUNDO that can be used to reapply or remove changes to a database, but it is getting closer. Let's see if it makes it past these teenage years and into adulthood.

Questions? Comments? Post your thoughts below!


Damir Bersinic is an Infrastructure Consultant with Trecata Corporation, a systems integration consultancy in Toronto, Canada. He has more than 20 years of industry experience and has written a number of books on Oracle, Windows, SQL Server and Active Directory. His most recent book is "Portable DBA: SQL Server" from Osborne McGraw-Hill. He is also an Oracle Certified Professional (DBA), MCSE, MCDBA and MCT.

 


CertCities.com user comments for “LogMiner: From Birth to Puberty”
12/30/02: Arne Motland from Stavanger, Norway says: Very good information! I'm planning to use LogMiner to get an 'online copy' of our transaction database into a database made specially for reporting. In our system, reporting has a very bad influence on all the online applications, so we want to make a copy and have two almost identical databases. The one we use for reporting will be some minutes old, but that is good enough. We have studied the expensive Quest software SharePlex, which does almost the same thing, but I think we can do the same with LogMiner. What do you think? Do you know of anyone using LogMiner to keep a complete copy of a database like we want to do? Sorry for the long comment. Best regards, Arne! And happy new year to you!