Auditing. Nice, but how to get rid of those audit records…

Nov 27, 2007

I was very surprised today. I was working on a prototype system intended to enable auditing on several databases and to collect the results in a large table in a central database. The idea was to use database links from the central database to the monitored databases in order to read out the AUD$ table and delete the entries after they were copied to the central database. My test case, however, showed something really strange:

select * from aud$ where sessionid = … and entryid = … and scn = …;

1 row selected

delete from aud$ where sessionid = … and entryid = … and scn = …; (same values)

0 rows deleted

Strange, isn’t it? However, my initial search on Metalink gave me the answer immediately, in Note 388169.1: since 10.2 a security enhancement has been made which states that all DML on the AUD$ table should be audited as well, and that as a consequence only user SYS can successfully do DML on the AUD$ table.
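To illustrate the restriction, a minimal sketch (the where-clause values are the same ones as in the test case above):

connect system
delete from sys.aud$ where sessionid = … and entryid = … and scn = …;
-- 0 rows deleted, silently

connect / as sysdba
delete from sys.aud$ where sessionid = … and entryid = … and scn = …;
-- 1 row deleted

Note that, as far as I know, this also rules out doing the cleanup over a database link from the central database: a database link cannot connect AS SYSDBA, and a plain connection as SYS fails with ORA-28009.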

For security reasons I can really appreciate this, but for database management it is not so easy: it means that SYS access is required to clean up the audit trail. So if you had planned to stop using SYS and to do everything with a ‘normal’ DBA account like SYSTEM, you are stuck. Note that in a recent poll on DBA-Village, 40% of the respondents said that they use SYSTEM or a separate DBA account for DBA maintenance tasks.


The beauty of Oracle Streams

Nov 24, 2007

I’ve said it already many times and I keep repeating it: Oracle Streams is a wonderful feature. Consider the following case I encountered recently:

A large (250+ GB) and very critical 10.1.0.4 database on a single-node, non-RAC server needs to be migrated to a 10.2 RAC system on ASM with minimal downtime. How would you do this?

Option 1: export/import? Too slow. The database consists for about 80% of CLOBs, which would take ages to import.

Option 2: upgrade using dbua or catpatch? Will be faster, but:

  • Requires time to transfer the datafiles to the new system
  • Requires RMAN intervention to get the files into ASM, plus a lot of testing
  • It turns out that the database has nearly all options installed (OLAP, Data Mining, Spatial), while on the new server we don’t want them, because we don’t pay for them. This would mean dropping these options before starting the migration, with no chance to test that in advance. And when doing a highly critical migration, I want to have everything tested carefully.

Option 3: Data Pump? Over a database link? Will be faster than exp/imp, but might still take several hours, especially since the old and new servers are in different data centers with a not-so-fast connection in between (I don’t know the exact speed).

Option 4: a logical standby on 10.2 for the 10.1.0.4 primary? This would allow a quick switchover at the moment of migration, but it gives the same problem as the catpatch option: how do you get rid of the installed options in a safe way when you use 10.2 software in which these options are not installed?

Option 5: Streams? Of course… Look at the benefits:

  • This option allows replicating only the schema(s) containing application data, so the problem of the Spatial, Data Mining and OLAP options is solved
  • Streams can work between 10.1 and 10.2 databases
  • We start with an empty database created by dbca in ASM, so there is nothing to do with RMAN to copy files from the file system into ASM
  • It can be combined with Data Pump over a database link (see the sketch after this list), so no intermediate temp space is required during the migration. Note that this instantiation is done in advance, so the time needed to copy the data does not matter; Streams will easily catch up afterwards.
  • Just like a logical standby, the new database is open and readable before the migration, so you can continuously check whether all changes are received correctly before doing the final switch, and you have the guarantee in advance that the new database is valid and correct.
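As a sketch of that Data Pump instantiation (the link name OLD_PROD and the schema name APPDATA are made up for illustration; the exact parameters depend on your setup), on the new database something like

impdp system schemas=APPDATA network_link=OLD_PROD flashback_scn=<instantiation scn>

pulls the schema straight out of the old database without any intermediate dump file. The flashback_scn parameter makes the copy consistent as of a known SCN, which is then the point from which Streams starts applying changes.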

However, there was one concern. Adding a capture process to the old database is an important change on a highly critical database. What if this capture process caused too much overhead? Could it cause some kind of process or even instance failure on the old database? Is it safe to add this to a highly critical database?

Well, there is a great solution: downstream capture. The architecture is as follows: the current production database sends its archivelogs to the new 10.2 database. This requires only adding an extra, optional log_archive_dest_n, which is safe to do. The new database receives the archivelogs and runs the capture process, which enqueues its logical changes, i.e. all the SQL done on the application tables, on a queue in this 10.2 database (yes, a 10.2 capture can read 10.1 archivelogs). A local apply process reads these changes from the queue and applies them in the 10.2 database. In this way, the new database is kept in sync with the old one without any overhead on the old, i.e. a very safe operation in a critical environment. The final switch consists of stopping all activity on the old database, switching the logfile and verifying that the last one is applied; within seconds all is done.
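To give an idea of what this looks like, here is a rough sketch, not the full recipe (all names are made up for illustration: the service NEWDB, the source database OLDPROD, the schema APPDATA and the queue names). On the old 10.1 production, the only change is the extra, optional archive destination:

alter system set log_archive_dest_3='SERVICE=NEWDB ARCH OPTIONAL NOREGISTER TEMPLATE=/arch/oldprod/oldprod_%t_%s_%r.arc';
alter system set log_archive_dest_state_3=enable;

plus one run of dbms_capture_adm.build, to put the data dictionary in the redo and obtain the SCN from which capture can start:

set serveroutput on
declare
  v_scn number;
begin
  dbms_capture_adm.build(first_scn => v_scn);
  dbms_output.put_line('first_scn = ' || v_scn);
end;
/

On the new 10.2 database, the capture and apply side then looks roughly like this:

declare
  v_first_scn number := 1234567;  -- placeholder: the value printed by build above
begin
  -- queue that holds the captured logical change records
  dbms_streams_adm.set_up_queue(
    queue_table => 'streams_queue_table',
    queue_name  => 'streams_queue');

  -- downstream capture: source_database points to the old production
  dbms_capture_adm.create_capture(
    queue_name      => 'streams_queue',
    capture_name    => 'capture_oldprod',
    source_database => 'OLDPROD',
    first_scn       => v_first_scn);

  -- capture only the application schema
  dbms_streams_adm.add_schema_rules(
    schema_name     => 'APPDATA',
    streams_type    => 'capture',
    streams_name    => 'capture_oldprod',
    queue_name      => 'streams_queue',
    include_dml     => true,
    include_ddl     => false,
    source_database => 'OLDPROD');

  -- and a local apply process reading from the same queue
  dbms_streams_adm.add_schema_rules(
    schema_name     => 'APPDATA',
    streams_type    => 'apply',
    streams_name    => 'apply_oldprod',
    queue_name      => 'streams_queue',
    include_dml     => true,
    include_ddl     => false,
    source_database => 'OLDPROD');
end;
/

At switchover time, an alter system archive log current on the old database pushes out the last changes, and dba_apply_progress on the new one shows when they have all been applied.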

I have to admit that there is a minimal impact on the primary. For Streams to work best, it is recommended to have primary or unique keys on all tables (which we have; our developers are really good). Additionally, supplemental logging must be enabled. This causes the primary key columns to be written to the redolog for every update, even if only non-PK columns are updated. However, my experience tells me that this too is a very safe change with minimal overhead (at least if PK or unique keys exist; otherwise all columns will be written to the redolog).
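Enabling it database-wide is a one-liner (per-table supplemental log groups are the finer-grained alternative):

alter database add supplemental log data (primary key) columns;

This writes the PK columns of every updated row into the redo; for tables without a primary key Oracle falls back to logging all columns, which is exactly the extra redo mentioned above.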

So to conclude: Oracle Streams with downstream capture allows you to prepare the migration completely, guarantees that the new database is correct before the final migration starts, performs no risky changes on the old database, gives no issues getting files from the file system into ASM, requires no intermediate temp space and allows the final switch to be done in a few seconds. That’s what I call the beauty of Streams.


Ni hao

Nov 23, 2007

Welcome, world, to my blog. It took me some time, but I finally decided to start my own blog. There are so many things I do with Oracle that I think it is worth sharing them with you. So expect some interesting items here soon.