Creating a SQL Patch with Many Hints Requires a Hack
Striving for Optimal Performance Blog
by Christian Antognini
2y ago
In the past, when I created a SQL patch, I always specified a small number of hints. Last week, for the first time, I created one with more than 100 of them. Given their number, I didn’t want to specify them manually. Instead, my goal was to create a SQL patch that contained the outline associated with a cursor stored in the shared pool. For that reason, I executed the following PL/SQL block:
DECLARE
  l_sql_id VARCHAR2(13) := '2q7d290pp5vmp';
  l_name   dba_sql_patches.name%TYPE := 'TEST';
  l_hints  CLOB;
BEGIN
  dbms_lob.createtemporary(lob_loc => l_hints, cache => TRUE);
  FOR i IN (SELECT ..read more
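For reference, here is a minimal sketch of the general approach described above, not the post's actual code: it pulls the outline hints out of the cursor's OTHER_XML column in V$SQL_PLAN, concatenates them into a CLOB, and hands them to DBMS_SQLDIAG.CREATE_SQL_PATCH (assuming the 12.2-onward overload that accepts a SQL ID and a CLOB of hints). The SQL ID and patch name are taken from the excerpt; everything else is an assumption, and presumably the hack the title refers to works around a limitation of this straightforward route.
DECLARE
  l_sql_id  VARCHAR2(13) := '2q7d290pp5vmp';
  l_name    dba_sql_patches.name%TYPE := 'TEST';
  l_created dba_sql_patches.name%TYPE;
  l_hints   CLOB;
BEGIN
  dbms_lob.createtemporary(lob_loc => l_hints, cache => TRUE);
  -- extract every outline hint stored in the cursor's OTHER_XML
  FOR i IN (SELECT extractvalue(value(h), '.') AS hint
            FROM v$sql_plan p,
                 table(xmlsequence(extract(xmltype(p.other_xml), '/*/outline_data/hint'))) h
            WHERE p.sql_id = l_sql_id
            AND p.child_number = 0
            AND p.other_xml IS NOT NULL)
  LOOP
    -- append each hint, separated by a blank, to the CLOB
    dbms_lob.writeappend(l_hints, length(i.hint) + 1, i.hint || ' ');
  END LOOP;
  -- create the SQL patch from the concatenated hints
  l_created := dbms_sqldiag.create_sql_patch(sql_id    => l_sql_id,
                                             hint_text => l_hints,
                                             name      => l_name);
  dbms_lob.freetemporary(l_hints);
END;
/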
Observations About the Scalability of Data Loads in Snowflake
Striving for Optimal Performance Blog
by Christian Antognini
3y ago
In recent weeks, I was running a number of tests based on the TPC-DS benchmark against Snowflake. One of the first things I did was, of course, to create the TPC-DS schema and populate it. The aim of this blog post is to share some observations related to the population step. The data I loaded was the same data I used for this blog post (i.e. it is a 1 TB TPC-DS schema). The only difference was that I had to split the input files. This was necessary because Snowflake cannot parallelize the load of a single file. Therefore, for optimal performance, I had to split the large input files (the largest e ..read more
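To make the point about splitting concrete, here is a minimal, hypothetical sketch (stage name, file names and file format are assumptions, not the post's setup): once a large extract has been split into many chunks, Snowflake can load them in parallel through a single COPY INTO.
-- upload all chunks of a previously split extract to a named stage
PUT file:///tmp/store_sales_chunks/store_sales_* @tpcds_stage AUTO_COMPRESS = TRUE;

-- a single COPY INTO then loads all matching files in parallel
COPY INTO store_sales
FROM @tpcds_stage
PATTERN = '.*store_sales_.*'
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '|');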
AWR: Multitenant-Specific Initialization Parameters
Striving for Optimal Performance Blog
by Christian Antognini
3y ago
By default, the database engine automatically takes snapshots in the root container only. Such snapshots cover the root container as well as all open PDBs belonging to it. From version 12.2 onward, you can control whether the database engine automatically takes also PDB-level snapshots through the dynamic initialization parameter AWR_PDB_AUTOFLUSH_ENABLED. In case you want to enable that feature, you have to carry out two operations: Set the initialization parameter AWR_PDB_AUTOFLUSH_ENABLED to TRUE (the default value is FALSE) either in a specific PDB or, if you want to enable it for all PDB ..read more
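A minimal sketch of what enabling the feature can look like (the PDB name is hypothetical, and the second step, giving the PDB a snapshot interval, is an assumption since the excerpt is cut off):
-- step 1: enable automatic PDB-level snapshots, here for all containers at once
ALTER SYSTEM SET awr_pdb_autoflush_enabled = TRUE CONTAINER = ALL;

-- step 2 (assumed): set a snapshot interval in the PDB, e.g. 60 minutes
ALTER SESSION SET CONTAINER = pdb1;
BEGIN
  dbms_workload_repository.modify_snapshot_settings(interval => 60);
END;
/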
AWR Flush Levels
Striving for Optimal Performance Blog
by Christian Antognini
3y ago
From version 12.1.0.2 onward, when taking AWR snapshots, you can choose between four AWR flush levels: BESTFIT, LITE, TYPICAL and ALL. If you check the Oracle Database documentation, you won’t find much information about the difference between them. The best you will find, in the PL/SQL Packages and Types Reference, is the following: The flush level can be one of the following: BESTFIT: Uses the default value depending on the type of snapshot being taken. LITE: Lightweight snapshot. Only the most important statistics are collected. This is the default for a pluggable database (PDB) and appli ..read more
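For context, the flush level is the optional argument of DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT. A minimal example of taking a manual snapshot with an explicit level:
DECLARE
  l_snap_id NUMBER;
BEGIN
  -- take a manual snapshot; flush_level accepts BESTFIT, LITE, TYPICAL or ALL
  l_snap_id := dbms_workload_repository.create_snapshot(flush_level => 'ALL');
  dbms_output.put_line('snapshot id: ' || l_snap_id);
END;
/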
MIN/MAX Optimization and Asynchronous Global Index Maintenance
Striving for Optimal Performance Blog
by Christian Antognini
5y ago
In this short post I would like to point out a non-obvious issue that one of my customers recently hit. On the one hand, it’s a typical case where the query optimizer generates a different (suboptimal) execution plan even though nothing relevant was changed (at least at first sight). On the other hand, in this case the query optimizer automatically reverts to the original (optimal) execution plan after some time. Let’s have a look at the issue with the help of a test case… The test case is based on a range-partitioned table:
CREATE TABLE t
PARTITION BY RANGE (d) (
  PARTITION t_q ..read more
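To make the setup concrete, here is a minimal sketch of the kind of configuration involved, not the post's actual test case (table layout, index and partition names are assumptions): a range-partitioned table with a global index, the kind of query the MIN/MAX optimization targets, and a partition drop whose global index maintenance is deferred.
-- hypothetical range-partitioned table
CREATE TABLE t (
  id  NUMBER   NOT NULL,
  d   DATE     NOT NULL,
  pad VARCHAR2(100)
)
PARTITION BY RANGE (d) (
  PARTITION t_q1 VALUES LESS THAN (to_date('2018-04-01','YYYY-MM-DD')),
  PARTITION t_q2 VALUES LESS THAN (to_date('2018-07-01','YYYY-MM-DD'))
);

-- global (non-partitioned) index
CREATE INDEX t_id_i ON t (id);

-- normally satisfied by an INDEX FULL SCAN (MIN/MAX) on t_id_i
SELECT max(id) FROM t;

-- from 12.1 onward the global index maintenance triggered by this DDL runs
-- asynchronously, leaving orphaned entries in t_id_i until the scheduled
-- (or a manual) cleanup removes them
ALTER TABLE t DROP PARTITION t_q1 UPDATE INDEXES;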
V$SQL_CS_HISTOGRAM: What Are the Buckets’ Thresholds?
Striving for Optimal Performance Blog
by Christian Antognini
5y ago
The contents of the V$SQL_CS_HISTOGRAM view are used by the SQL engine to decide when a cursor is made bind aware and, therefore, when it should use adaptive cursor sharing. For each child cursor, the view shows three buckets. It is generally known that the first one (BUCKET_ID equal to 0) is associated with the executions that process up to and including 1,000 rows, the second one (BUCKET_ID equal to 1) with the executions that process between 1,001 and 1,000,000 rows, and the third one (BUCKET_ID equal to 2) with the executions that process more than 1,000,000 rows. The idea is that after an ..read more
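To observe those buckets for a given statement, one can simply query the view; a minimal example, with a hypothetical SQL ID bind:
-- one row per child cursor and bucket; the COUNT column shows how many
-- executions fell into each row-count range
SELECT *
FROM v$sql_cs_histogram
WHERE sql_id = :sql_id
ORDER BY child_number, bucket_id;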