Why DB_BIG_TABLE_CACHE_PERCENT_TARGET was a good solution in my case

While preparing for my Oracle DBA 12c Certified Master exam, I learnt about the DB_BIG_TABLE_CACHE_PERCENT_TARGET parameter, introduced in 12c. This parameter allows you to reserve a certain percentage of the buffer cache for storing large tables. As you know, full table scans on large tables are normally NOT cached in the buffer cache (they also use direct path reads, bypassing the buffer cache), the idea being that you should not keep scanning large tables over and over again.
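For reference, the parameter defaults to 0, meaning no space is reserved for large tables at all. A minimal sketch to check the current setting in SQL*Plus:

    -- Show the current setting (default 0 = automatic big table caching disabled)
    SHOW PARAMETER db_big_table_cache_percent_target

    -- Or query it directly
    SELECT name, value, isdefault
      FROM v$parameter
     WHERE name = 'db_big_table_cache_percent_target';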

My customer had one of those queries that are hard to tune. It was a join of 4 large tables (8 GB, plus 3 x 2 GB) with varying where conditions (depending on the search criteria entered in the application). SQL Plan Management etc. was out of scope because each query was built dynamically and differed from the others (so no fixed sql_id). The problem was that each of these tables returned a lot of rows, and most rows were only eliminated after joining with the other table(s) and applying all the where conditions there as well.

This makes a HASH JOIN the best way to join these tables together, but it means full table scans on all 4 tables (nested loops are not good when your first table returns a lot of rows for which you then have to visit the other tables). Another solution could be to build a materialized view on the join of these tables; however, in my case the queries require real-time data and the base tables change heavily during the day. So a materialized view with refresh on commit was not an option: the permanent maintenance of the materialized view would slow down the changes too much.
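Just to sketch the rejected alternative: a fast-refresh-on-commit join materialized view needs ROWID materialized view logs on every base table and the rowids in its select list, and all of that maintenance runs inside every commit on the base tables. Something along these lines, with hypothetical table and column names and only two of the four tables shown:

    -- Hypothetical names; this is the alternative we rejected, not what was implemented
    CREATE MATERIALIZED VIEW LOG ON big_t1 WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON big_t2 WITH ROWID;

    CREATE MATERIALIZED VIEW mv_big_join
      REFRESH FAST ON COMMIT
    AS
    SELECT t1.ROWID AS t1_rid, t2.ROWID AS t2_rid,
           t1.col_a, t2.col_b
      FROM big_t1 t1, big_t2 t2
     WHERE t2.t1_id = t1.id;
    -- Every commit on big_t1 or big_t2 now also maintains mv_big_join,
    -- which is exactly the overhead we could not afford here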

Oh yes, the problem query I was talking about was a central piece of the application, executed very often, and the customer wanted instant response. Hard to achieve on that 8+2+2+2 GB dataset (bad idea, developers should think about this in advance…).

After discussion with the local DBAs it turned out that the buffer cache could be increased from 20 GB to 35 GB. So we did, and I configured DB_BIG_TABLE_CACHE_PERCENT_TARGET to 50 (%). Half of the buffer cache was now available for large tables, and the rest behaved as before. So the ‘other’ activity in the database should still have about 17.5 GB of buffer cache, as before, and should not see any negative impact.
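Since this is a dynamic parameter, no restart was needed; the change itself was nothing more than this (assuming an spfile, and the note about parallel queries is worth verifying in the docs for your version):

    -- Dynamic parameter, so no instance restart is needed (spfile assumed)
    ALTER SYSTEM SET db_big_table_cache_percent_target = 50 SCOPE = BOTH;

    -- Note: serial full scans can use the big table cache with just this parameter
    -- (single instance); parallel queries, and RAC in general, additionally require
    -- PARALLEL_DEGREE_POLICY = AUTO or ADAPTIVE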

My problem query, after running it once, indeed stored my 4 large tables completely in the reserved part of the buffer cache. The first execution (disk reads) took 58 seconds; the second execution (no disk reads, all data came from cache) took 18 seconds (yes, there were still millions of rows to join together, so some CPU time was expected).
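A simple way to verify that the second run really did no disk I/O is to compare the physical reads statistic in SQL*Plus, for example (a sketch; the query itself is the application's own):

    -- In SQL*Plus
    SET AUTOTRACE TRACEONLY STATISTICS
    -- run the problem query here: on the second execution the "physical reads"
    -- statistic should drop to (almost) 0, while "consistent gets" stays high
    SET AUTOTRACE OFF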

Considering that these tables are constantly queried by similar, but varying, queries, it was a good solution to keep all this data in memory. The view V$BT_SCAN_OBJ_TEMPS is useful to monitor this behaviour. Each table has a ‘temperature’, which is a measure of how often it is accessed by queries. This is a different algorithm from the one the traditional buffer cache uses for deciding which tables to keep in memory (relevant when you have more large tables than reserved space in the buffer cache). This is what it looks like:

[Screenshot: BigTableCache, showing the V$BT_SCAN_OBJ_TEMPS output]
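The screenshot comes from a query along these lines (a sketch from memory of the 12c view, so verify the column list against your version):

    -- Temperature and caching policy per tracked big table
    SELECT o.owner, o.object_name,
           t.size_in_blks, t.temperature, t.policy, t.cached_in_mem
      FROM v$bt_scan_obj_temps t
      JOIN dba_objects o ON o.data_object_id = t.dataobj#
     ORDER BY t.temperature DESC;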

Final thoughts: usually it is a bad idea to solve performance problems by just ‘buying more memory’. You should first analyze the data model, rewrite the query or try to use other features to attack the root cause of the bad performance. In my case, however, it is a third-party application with auto-generated queries, so I didn’t have many options. Also, the tables are queried very often, so it makes sense to cache them in memory.

4 Responses to Why DB_BIG_TABLE_CACHE_PERCENT_TARGET was a good solution in my case

  1. Freek says:

    Wonder if assigning the table to a keep pool would also have worked

  2. martinberx says:

    KEEP pool is defined on object level.
    DB_BIG_TABLE_CACHE_PERCENT_TARGET depends on “temperature” and cannot be defined per object.
    KEEP is more hand-crafted, whereas DB_BIG_TABLE_CACHE_PERCENT_TARGET is a kind of “auto”.

    • Freek D Hooge says:

      @martinberx
      Yes, but as I understand it here, the issue was with a select number (4) of tables.

      Using a keep pool would, I think, be a more specific solution.
      But then again, db_big_table_cache_percent_target could improve other queries as well (don’t know the specifics of this environment)

  3. Geert De Paep says:

    The keep cache might have worked as well in this case; however, I think the big table cache is much more advanced. The algorithm to decide which tables to store is more advanced (using the ‘temperature’ of objects), the caching policies are more advanced (Oracle may decide to cache a table fully, partially or not at all), and all the space can be used for many objects (with the keep cache, the space is only for the tables you mark as ‘keep’). So if you undersize the big table cache, it will be used in the most efficient way possible; if you oversize it, it can be used for other objects as well.

    The keep cache is definitely good for relatively static objects that do not grow a lot, like lookup tables etc.

    What I definitely liked is that you can configure the big table cache without a database restart (but this seems to be the case for the keep cache as well since 12c).
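    For comparison, the hand-crafted KEEP pool alternative discussed above would look roughly like this (pool size and table name are hypothetical, a sketch only):

        -- Reserve a dedicated KEEP pool (a manual sizing decision)
        ALTER SYSTEM SET db_keep_cache_size = 16G SCOPE = BOTH;

        -- Then pin the chosen tables into it, one by one
        ALTER TABLE app_owner.big_t1 STORAGE (BUFFER_POOL KEEP);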
