When you have a busy transaction processing system, one of the things you try to avoid most is parsing, as it is a resource-intensive and time-consuming process. There are tons of tips on what you can do about it …
In my previous life as the Optimizer Lady, I wrote a blog on the importance of gathering fixed object statistics, since they were not originally gathered as part of the automatic statistics gathering task. Starting with Oracle Database 12c Release …

While at the HotSOS Symposium last month, I caused quite a stir when I recommended that folks should never gather system statistics.

Why such a stir?

It turns out this goes against what we recommend in the Oracle SQL Tuning Guide, which says “Oracle recommends that you gather system statistics when a physical change occurs in the environment”.

So, who’s right?

Well, in order to figure that out, upon my return to the office I spoke with Mohamed Zait, the head of the optimizer development team, and Nigel Bayliss, the product manager for the optimizer.

After our discussions, Nigel very kindly agreed to write a detailed blog post that explains exactly what system statistics are, how they influence the Optimizer, and provides clear guidance on when, if ever, you should gather system statistics!

What did I learn from all this?

Don’t gather system statistics unless you are in a pure data warehouse environment with a good IO subsystem (e.g. Exadata), where you want to encourage the Optimizer to pick more full table scans. And never say never!

I began my love affair with Docker a year ago when @GeraldVenzl got me started with my very first 12.2 Multitenant container database on Docker, and I have to say I absolutely love the convenience of having an Oracle Database directly on my Mac for demos and for building test cases to help answer AskTOM questions.

Then, about six months ago, I got an opportunity to beta test RAC on Docker when I needed a two-node RAC cluster for a blog post on controlling where data is populated into In-Memory on a RAC cluster.

Now you have an opportunity to try RAC on Docker for yourself, as Oracle has just released Docker build files to create an Oracle RAC Database Docker image on GitHub.

I’m not going to lie to you: setting up the RAC Database Docker image is more complex than the single-instance database version, but if you need a RAC environment it will be well worth it, especially because you can create a two-node RAC cluster on a single host. A multi-host environment is also supported.

As well as the build files from GitHub, you also need to download several additional items in order to make this work, including:

  1. The Oracle RAC Storage Server Docker image to provide shared storage if you do not have block storage or a NAS device to store the RAC OCR/Voting files and Datafiles. For more details, see OracleRACStorageServer/README.md.
  2. Oracle Database 12c Release 2 Grid Infrastructure (12.2.0.1.0) for Linux x86-64
  3. Oracle Database 12c Release 2 (12.2.0.1.0) Enterprise Edition for Linux x86-64
  4. Patch p27383741_122010_Linux-x86-64.zip, which you can download directly from Oracle Technology Network

The detailed steps to build and run Oracle RAC on Docker can be found in the OracleRealApplicationClusters/README.md.

Oftentimes DBAs or application architects create views to conceal complex joins or aggregations in order to help simplify the SQL queries developers need to write. However, as an application evolves and the number of views grows, it can often be difficult for a developer to know which view to use.

It also becomes easier for a developer to write an apparently simple query that results in an extremely complex SQL statement being sent to the database, one that may execute unnecessary joins or aggregations.

The DBMS_UTILITY.EXPAND_SQL_TEXT procedure, introduced in Oracle Database 12.1, allows developers to expand references to views by turning them into subqueries in the original statement, so you can see exactly what tables or views are being accessed and what aggregations are being used.

Let’s imagine we have been asked to determine how many “Flat Whites” we sold in our coffee shops this month. As a developer, I know I need to access the SALES table to retrieve the necessary sales data and the PRODUCTS table to limit it to just our “Flat Whites” sales, but I also know that the DBA has set up a ton of views to make developers’ lives easier. In order to determine what views I have access to, I’m going to query the dictionary view USER_VIEWS.

SELECT  view_name 
FROM    user_views
WHERE   view_name LIKE '%SALES%';
 
VIEW_NAME
-------------------------------
SALES_REPORTING2_V
SALES_REPORTING_V

Based on the list of views available to me, I would likely pick the view called SALES_REPORTING_V or SALES_REPORTING2_V but which would be better?

Let’s use the DBMS_UTILITY.EXPAND_SQL_TEXT procedure to find out. In order to see the underlying query for each view, we can use a simple “SELECT *” query from each view. First, we will try ‘SELECT * FROM sales_reporting_v‘.

SET serveroutput ON
DECLARE
    l_clob CLOB;
BEGIN
    DBMS_UTILITY.Expand_sql_text(
    input_sql_text => 'SELECT * FROM SALES_REPORTING_V',
    output_sql_text => l_clob);
 
    DBMS_OUTPUT.Put_line(l_clob);
END;
/

The output from the procedure was

SELECT "A1"."ORDER_ID" "ORDER_ID",
       "A1"."TIME_ID" "TIME_ID",
       "A1"."C_NAME" "C_NAME",
       "A1"."PROD_NAME" "PROD_NAME",
       "A1"."AMOUNT_SOLD" "AMOUNT_SOLD"
FROM   (SELECT "A3"."ORDER_ID" "ORDER_ID",
               "A3"."TIME_ID" "TIME_ID",
               "A4"."C_NAME" "C_NAME",
               "A2"."PROD_NAME" "PROD_NAME",
               "A3"."AMOUNT_SOLD" "AMOUNT_SOLD"
        FROM   "COFFEESHOP"."CUSTOMERS" "A4",
               "COFFEESHOP"."SALES" "A3",
               "COFFEESHOP"."PRODUCTS" "A2"
        WHERE  "A4"."C_CUSTID"="A3"."CUST_ID"
        AND    "A2"."PROD_ID"="A3"."PROD_ID"
       )"A1"

In this case, the view (A1) does contain the columns I need (PROD_NAME, TIME_ID and AMOUNT_SOLD). But if I used this view, I’d actually get a lot more data than I bargained for, since it also joins to the CUSTOMERS table, which is not needed for my query.

Let’s try ‘SELECT * FROM SALES_REPORTING2_V’.

SET serveroutput ON
DECLARE
    l_clob CLOB;
BEGIN
    DBMS_UTILITY.Expand_sql_text(
    input_sql_text => 'SELECT * FROM SALES_REPORTING2_V',
    output_sql_text => l_clob);
 
    DBMS_OUTPUT.Put_line(l_clob);
END;
/

The output from the procedure is

SELECT "A1"."ORDER_ID" "ORDER_ID",
        "A1"."TIME_ID" "TIME_ID",
        "A1"."PROD_NAME" "PROD_NAME",
        "A1"."AMOUNT_SOLD" "AMOUNT_SOLD" 
FROM  (SELECT "A3"."ORDER_ID" "ORDER_ID",
              "A3"."TIME_ID" "TIME_ID",
              "A2"."PROD_NAME" "PROD_NAME",
              "A3"."AMOUNT_SOLD" "AMOUNT_SOLD" 
       FROM   "COFFEESHOP"."SALES" "A3",
              "COFFEESHOP"."PRODUCTS" "A2" 
       WHERE  "A2"."PROD_ID"="A3"."PROD_ID"
      ) "A1"

From the output above, we see that this view contains all of the columns I need for my query but without any unnecessary tables. So, this is definitely the view I should use.

But what if your application uses synonyms to simplify view names for developers, because the views are actually defined in some other schema? Will the DBMS_UTILITY.EXPAND_SQL_TEXT procedure resolve the view definition if a synonym is used?

The answer is yes, but let’s take a look at an example to prove the point.

Let’s connect as a different user who sees the same views via synonyms and use the same set of steps as before.

CONNECT apps/******
 
Connected.
SELECT synonym_name, table_owner, table_name
FROM user_synonyms;
 
SYNONYM_NAME                    TABLE_OWNER   TABLE_NAME
------------------------------ ------------- ----------------------
SALES_CUSTOMERS_PRODUCTS_V      COFFEESHOP    SALES_REPORTING_V
SALES_PRODUCTS_V                COFFEESHOP    SALES_REPORTING2_V

Just as before, we have two views based on the original application schema views. Now let’s run the DBMS_UTILITY.EXPAND_SQL_TEXT procedure on each of the synonyms. Let’s start with the synonym SALES_CUSTOMERS_PRODUCTS_V.

SET serveroutput ON
DECLARE
    l_clob CLOB;
BEGIN
    DBMS_UTILITY.Expand_sql_text(
    input_sql_text => 'SELECT * FROM SALES_CUSTOMERS_PRODUCTS_V',
    output_sql_text => l_clob);
 
    DBMS_OUTPUT.Put_line(l_clob);
END;
/
 
SELECT "A1"."ORDER_ID" "ORDER_ID",
       "A1"."TIME_ID" "TIME_ID",
       "A1"."C_NAME" "C_NAME",
       "A1"."PROD_NAME" "PROD_NAME",
       "A1"."AMOUNT_SOLD" "AMOUNT_SOLD"
FROM   (SELECT "A3"."ORDER_ID" "ORDER_ID",
               "A3"."TIME_ID" "TIME_ID",
               "A4"."C_NAME" "C_NAME",
               "A2"."PROD_NAME" "PROD_NAME",
               "A3"."AMOUNT_SOLD" "AMOUNT_SOLD"
        FROM   "COFFEESHOP"."CUSTOMERS" "A4",
               "COFFEESHOP"."SALES" "A3",
               "COFFEESHOP"."PRODUCTS" "A2" 
        WHERE "A4"."C_CUSTID"="A3"."CUST_ID"
        AND   "A2"."PROD_ID"="A3"."PROD_ID"
       ) "A1"                                                                                             
 
PL/SQL PROCEDURE successfully completed.

Just as before, we see that this view includes an extra table, CUSTOMERS. Let’s now try the synonym SALES_PRODUCTS_V.

SET serveroutput ON
DECLARE
    l_clob CLOB;
BEGIN
    DBMS_UTILITY.Expand_sql_text(
    input_sql_text => 'SELECT * FROM SALES_PRODUCTS_V',
    output_sql_text => l_clob);
 
    DBMS_OUTPUT.Put_line(l_clob);
END;
/
 
SELECT "A1"."ORDER_ID" "ORDER_ID",
       "A1"."TIME_ID" "TIME_ID",
       "A1"."PROD_NAME" "PROD_NAME",
       "A1"."AMOUNT_SOLD" "AMOUNT_SOLD"
FROM  (SELECT "A3"."ORDER_ID" "ORDER_ID",
              "A3"."TIME_ID" "TIME_ID",
              "A2"."PROD_NAME" "PROD_NAME",
              "A3"."AMOUNT_SOLD" "AMOUNT_SOLD"
       FROM   "COFFEESHOP"."SALES" "A3",
              "COFFEESHOP"."PRODUCTS" "A2"
       WHERE  "A2"."PROD_ID"="A3"."PROD_ID"
       ) "A1"
 
PL/SQL PROCEDURE successfully completed.

As you can see from the above output, the DBMS_UTILITY.EXPAND_SQL_TEXT procedure had no issue resolving the original view definition from the COFFEESHOP schema when the synonym name is used.

But what if you are a developer without access to the DBMS_UTILITY package? After all, it’s not granted to PUBLIC by default.

Don’t panic.

If you are using SQL Developer version 4.1 against a 12c database, then you can automatically see the expanded definition of any view via a tooltip. Jeff Smith has already blogged about this, but here’s an example with our original view.

Recently there has been a lot of interest in, and hope for, the idea that graphics processing units (GPUs) could transparently accelerate database workloads. So, I thought it was worth investigating what Oracle is doing to get transparent performance gains from both CPUs and GPUs, as Oracle Database has a long history of adopting new technologies as they become available.

Let’s start with GPUs.

It is important to understand the basic architectural benefits and tradeoffs of GPUs in order to determine whether they will provide value for database workloads.

GPUs are dedicated, highly parallel hardware accelerators that sit on the PCI bus. The huge number of parallel computation engines provided by these devices accelerates tasks that require large numbers of computations on small amounts of data. For example, GPUs are extremely effective for blockchain applications, because these require billions of computations on a few megabytes of data. GPUs are also good for deep learning algorithms, since these perform repeated computational loops on megabytes to gigabytes of data. And of course GPUs are great for graphics, because three-dimensional imaging requires millions of computations on every image. The workload patterns here are all the same – lots of computation on modest amounts of data.

So, can GPUs improve database workloads?

Based on the description above, it’s possible that GPUs could be used to accelerate analytic workloads. However, GPUs will have little or no benefit for OLTP-style workloads.

GPUs offer the potential to accelerate analytic processing through two mechanisms:

  1. Adding a lot more parallel processing
  2. Using higher-bandwidth, but much smaller, specialized memory called High Bandwidth Memory (HBM).

However, database analytics don’t completely fit the GPU mold or sweet spot.

Analytics typically perform a small number of simple calculations on large amounts of data, often hundreds of gigabytes to petabytes of data.  For example, a typical analytic query will apply a simple predicate (e.g. filter sales region or date) and then perform a simple aggregation function (e.g. sum or average).

SELECT s.customer_name, SUM(s.amount_sold)
FROM    sales s
WHERE s.sales_region = 'CA'
GROUP BY s.customer_name;

It’s unlikely the volume of data processed by an analytic query will fit in the local GPU memory, so data will have to be moved back and forth across the PCI bus. This limits the total throughput to the PCI bus bandwidth, which is dramatically lower than the local memory bandwidth. This doesn’t mean that GPUs won’t provide any benefits for analytics, but users should not expect the dramatic benefits seen in other applications. It is just not architecturally possible.
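
To put rough, illustrative numbers on this (not a benchmark): a PCIe 3.0 x16 bus moves on the order of 16 GB/s, while GPU-local HBM delivers several hundred GB/s. Scanning 1 TB of data across the PCI bus would therefore take roughly 1000 GB ÷ 16 GB/s ≈ 60 seconds in transfer time alone, no matter how quickly the GPU can process the data once it arrives.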

All that said, Oracle, and other vendors, have found that some database analytics algorithms can in fact run faster on GPUs than using conventional processing methods.  However, care should be taken when reading performance comparisons showing huge advantages for GPUs. Typically, these comparisons contrast performance using traditional database algorithms vs new and highly optimized GPU algorithms.  Furthermore, these comparisons often use easily available but un-optimized and un-parallelized open-source databases that are orders of magnitude slower than commercial databases for analytics.

But it’s not all doom and gloom.  Changes in hardware are coming that will see PCI buses get faster, and future GPUs will reduce their PCI bus communication disadvantages by adding direct high bandwidth communication with the main CPUs.

So, what about today? Is there any hope of getting transparent improvements in database performance from CPUs?

The answer is yes!

Oracle Database 12c introduced a new columnar in-memory format to greatly accelerate analytics. The columnar in-memory algorithms make extensive use of SIMD vector instructions that are already present in standard CPUs today.
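
As a minimal sketch of how a table is opted into the column store (assuming the in-memory area has already been enabled by setting the INMEMORY_SIZE initialization parameter):

-- Mark the SALES table for population into the in-memory column store
-- (assumes INMEMORY_SIZE has been set so the in-memory area exists)
ALTER TABLE sales INMEMORY;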

SIMD Vector instructions accelerate analytics by processing many data elements in a single instruction.  SIMD Vector instructions benefit from having full access to the very large caches and memory bandwidth that exist in current CPU sockets.  An advantage of SIMD vector instructions is that they are present in all existing CPUs and add no further cost, complexity, or power usage to existing hardware.

Oracle continues to rapidly add new SIMD vector algorithms to the database to take further advantage of these specialized instructions.   Oracle is also enhancing the parallel algorithms that execute SQL to take further advantage of SIMD instructions.  What’s really great about this is all performance gains are completely transparent to applications and require no effort from the customer other than installing the software.

Oracle has also been actively working with Intel and other chip vendors for many years to add additional SIMD vector instructions to CPUs for the specific purpose of accelerating Oracle Database algorithms. Some of these instructions are now becoming available, and more instructions will become available as new CPU chips are released in the next few years.

In summary, Oracle is actively improving its analytic algorithms by further leveraging SIMD Vector instructions and improving parallelism.  Oracle is working with both conventional CPU vendors and GPU vendors to add new hardware capabilities that specifically optimize database processing.  Current GPUs can be shown to run some analytic algorithms faster but achieving these advantages in a non-benchmark environment is challenging because these algorithms only work for a subset of analytic functions, and data needs to be moved back and forth across the PCI bus.

Believe it or not, it’s time to start thinking about Oracle OpenWorld 2018!

The Oracle OpenWorld 2018 call for papers is now open! Oracle customers and partners are encouraged to submit proposals to present at this year’s Oracle OpenWorld conference, which will be held October 22-25, 2018 at the Moscone Center in San Francisco.

Details and submission guidelines are available on the Oracle OpenWorld Call for Papers web site. The deadline for submissions is Thursday, March 22, 11:59 p.m. PDT.

We look forward to checking out your sessions on the Oracle Database and how it has changed the way you do business!

Today Oracle officially released Oracle Database 18c on the Oracle Public Cloud and Oracle Engineered Systems. This is the first version of the database to follow the new yearly release model, and you can find more details on the release model change in Oracle Support Document 2285040.1.

Before you freak out about the fact that you haven’t even upgraded to 12.2 yet, and wonder how on earth you are ever going to get to 18c – Don’t Panic!

Oracle Database 18c is in fact “Oracle Database 12c Release 2 12.2.0.2”; the name has simply been changed to reflect the year in which the product is released.

So, what can you expect?

As you’d imagine, a patchset doesn’t contain any seismic changes in functionality, but there are lots of small but extremely useful incremental improvements, most of which focus on the three key marquee features of Oracle Database 12c Release 2.

More details on what has changed in each of these areas and other improvements can be found in the Oracle Database blog post published by Dominic Giles this morning or in the 18c documentation.

So, when will you be able to get your hands on 18c on-premises for non-engineered systems?

It will be some time later this calendar year, so stay tuned!

Over the years, Oracle has provided a number of techniques to help you control the execution plan for a SQL statement, such as Stored Outlines and SQL Profiles, but for me the only feature that truly gives you plan stability is SQL Plan Management (SPM). It’s this true plan stability that has made me a big fan of SPM ever since it was introduced in Oracle Database 11g.

With SPM, only known and accepted execution plans are used. That doesn’t mean Oracle won’t parse your SQL statements; it will. But before the execution plan generated at parse is used, we confirm it is an accepted plan by comparing its PLAN_HASH_VALUE to that of the accepted plan. If they match, we go ahead and use that plan.

If they don’t match, that is to say a new plan has been found, the new plan is tracked but not used. Instead, we use the information stored in SPM to reproduce the accepted plan. The new plan won’t be used until it has been proven to show a noticeable improvement in runtime.
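
That proving step is done by evolving the plan. As a minimal sketch (the SQL handle below is a placeholder for your statement’s handle), you can trigger verification manually with DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE; in 12.2 this also happens automatically via the SPM Evolve Advisor task:

SET serveroutput ON
DECLARE
    l_report CLOB;
BEGIN
    -- Test-execute any unaccepted plans for this baseline and accept them
    -- only if they perform noticeably better than the accepted plan.
    -- The sql_handle value is a placeholder.
    l_report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
                  sql_handle => 'SQL_6fe28d438dfc352f');
    DBMS_OUTPUT.PUT_LINE(l_report);
END;
/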

So, how do I seed SPM with these “known” plans?

There are actually six different ways to populate plans into SPM:

  1. Automatic capture
  2. From a SQL Tuning Set
  3. From the cursor cache (see the sketch after this list)
  4. Unpacked from a staging table
  5. From existing stored outlines
  6. From the AWR repository (new to Oracle Database 12c Release 2)
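
For example, here is a minimal sketch of option 3, loading the plan for a single statement from the cursor cache with DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (the SQL_ID shown is a placeholder):

DECLARE
    l_plans_loaded PLS_INTEGER;
BEGIN
    -- Load the cursor cache plan for the given statement into SPM as an
    -- accepted SQL plan baseline. Replace the SQL_ID with your statement's.
    l_plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                        sql_id => 'bjj643zga3mfu');
    DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans_loaded);
END;
/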

In the past, I would recommend you populate plans into SPM using options 2 through 5. I wouldn’t recommend automatic capture because it would result in a SQL plan baseline being created for every repeatable SQL statement executed on the system, including all monitoring and recursive SQL statements. On an extremely busy system this could potentially flood the SYSAUX tablespace with unnecessary SQL plan baselines.

But starting in Oracle Database 12c Release 2, it is now possible to limit which SQL statements are automatically captured using filters when you enable automatic plan capture. This enhancement now makes option 1 a very appealing approach especially if you have a system that is currently running well.

How does it work?

Before enabling automatic plan capture, you need to decide which SQL statements you want to capture SQL plan baselines for. Once you have an idea of what you want, you can use the DBMS_SPM.CONFIGURE procedure to set up filters that control which SQL statements’ plans will be captured. You can filter on the following four things:

  1. Parsing Schema
  2. Action
  3. Module
  4. SQL_Text

For example, if you only wanted to capture plans from the COFFEESHOP schema, you would use the following command:

BEGIN
  DBMS_SPM.CONFIGURE('AUTO_CAPTURE_PARSING_SCHEMA_NAME', 'COFFEESHOP');
END;
/

Alternatively, you can filter out a particular schema. For example, if you don’t want to capture any plans from the HR schema, you would use the following command:

BEGIN
  DBMS_SPM.CONFIGURE('AUTO_CAPTURE_PARSING_SCHEMA_NAME', 'HR', FALSE);
END;
/

Note: you can configure multiple automatic capture parameters of different types, but you cannot specify multiple values for the same parameter in a single call. Instead, the values specified for a particular parameter are combined across calls. So, if I wanted to capture plans for both the SH and HR schemas, I would use the following:

BEGIN
  DBMS_SPM.CONFIGURE('AUTO_CAPTURE_PARSING_SCHEMA_NAME', 'SH');
  DBMS_SPM.CONFIGURE('AUTO_CAPTURE_PARSING_SCHEMA_NAME', 'HR');
END;
/

Once your filters have been defined, you can enable automatic plan capture by setting the init.ora parameter OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES to TRUE (the default is FALSE). When enabled, a SQL plan baseline will be automatically created for any repeatable SQL statement that meets your criteria, provided it doesn’t already have one.

Repeatable statements are SQL statements that are executed more than once during the capture period. To identify repeatable SQL statements, the optimizer logs the SQL signature of each SQL statement the first time it is compiled, in the SQL statement log (sqllog$).

In case you are not familiar with it, a SQL signature is a unique SQL identifier generated from the normalized SQL text (case-insensitive and with whitespace removed). Although similar to a SQL_ID, it’s not the same; it is the same mechanism used by SQL profiles and SQL patches.
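
If you’re curious, you can compute a statement’s signature yourself. Here is a quick sketch using the DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE function (assuming you have execute privilege on DBMS_SQLTUNE):

-- Returns the signature generated from the normalized SQL text
SELECT DBMS_SQLTUNE.sqltext_to_signature('SELECT * FROM hr.regions') AS signature
FROM   dual;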

If the SQL statement is executed again, the presence of its signature in the statement log will signify it to be a repeatable statement. A SQL plan baseline is created for the repeatable statements that meet your filter criteria. A SQL plan baseline includes all of the information needed by the optimizer to reproduce the current cost-based execution plan for the statement, such as the SQL text, outline, bind variable values, and compilation environment. This initial plan will be automatically marked as accepted.

Let’s take a look at all of this in action to help clarify the steps.

-- Start by setting the desired filters. In this case we only want
-- to capture plans for queries executed in the SH and HR schemas
BEGIN
  DBMS_SPM.CONFIGURE('AUTO_CAPTURE_PARSING_SCHEMA_NAME', 'SH');
  DBMS_SPM.CONFIGURE('AUTO_CAPTURE_PARSING_SCHEMA_NAME', 'HR');
END;
/
PL/SQL PROCEDURE successfully completed.
 
-- Next we need to enable automatic plan capture
ALTER system SET optimizer_capture_sql_plan_baselines = TRUE;
 
System altered.
 
-- Now we can begin executing our workload
 
conn sh/sh
Connected.
 
SELECT /*LOAD_AUTO*/ *
FROM sh.sales
WHERE quantity_sold > 40
ORDER BY prod_id;
 
   PROD_ID    CUST_ID TIME_ID	C   PROMO_ID QUANTITY_SOLD AMOUNT_SOLD
---------- ---------- --------- - ---------- ------------- -----------
       185	29790 22-JUN-98 S	9999		44	  1716
       970	11320 11-DEC-99 P	9999		44	  1716
      1195     158960 11-SEP-00 S	  51		47	2918.7
      1240	43910 14-MAY-99 C	9999		46	  3634
 
conn hr/hr
Connected.
 
SELECT /*LOAD_AUTO*/ *
FROM   hr.regions;
 
 REGION_ID REGION_NAME
---------- -------------------------
	 1 Europe
	 2 Americas
	 3 Asia
	 4 Middle East AND Africa
 
conn oe/oe
Connected.
 
SELECT /*LOAD_AUTO*/ i.product_id, i.quantity
FROM  oe.orders o, oe.order_items i
WHERE o.order_id = i.order_id
AND   o.sales_rep_id = 160;
 
PRODUCT_ID   QUANTITY
---------- ----------
      2870	   10
      3106	  150
      3106	  110
 
-- As this is the first time we have seen these SQL statements, they are not yet
-- repeatable, so no SQL plan baseline have been created for them. In order to confirm this
-- we can check the view dba_sql_plan_baselines.
 
SQL> SELECT sql_handle, sql_text, plan_name,
  2  	    origin, enabled, accepted
  3  FROM dba_sql_plan_baselines
  4  WHERE sql_text LIKE 'select /*LOAD_AUTO*/%';
 
no rows selected
 
-- So, there are no baselines, but if we check the statement log we do see that some SQL signatures were recorded
SQL> SELECT * FROM sys.sqllog$;
 
 SIGNATURE     BATCH#
---------- ----------
3.1614E+18	    1
8.0622E+18	    1
8.6816E+18	    1
 
-- So let's re-execute the queries and check if the baselines were created after the second execution
 
conn sh/sh
Connected.
 
SELECT /*LOAD_AUTO*/ *
FROM sh.sales
WHERE quantity_sold > 40
ORDER BY prod_id;
 
   PROD_ID    CUST_ID TIME_ID	C   PROMO_ID QUANTITY_SOLD AMOUNT_SOLD
---------- ---------- --------- - ---------- ------------- -----------
       185	29790 22-JUN-98 S	9999		44	  1716
       970	11320 11-DEC-99 P	9999		44	  1716
      1195     158960 11-SEP-00 S	  51		47	2918.7
      1240	43910 14-MAY-99 C	9999		46	  3634
 
conn hr/hr
Connected.
 
SELECT /*LOAD_AUTO*/ *
FROM   hr.regions;
 
 REGION_ID REGION_NAME
---------- -------------------------
	 1 Europe
	 2 Americas
	 3 Asia
	 4 Middle East AND Africa
 
conn oe/oe
Connected.
 
SELECT /*LOAD_AUTO*/ i.product_id, i.quantity
FROM  oe.orders o, oe.order_items i
WHERE o.order_id = i.order_id
AND   o.sales_rep_id = 160;
 
PRODUCT_ID   QUANTITY
---------- ----------
      2870	   10
      3106	  150
      3106	  110
 
SQL> SELECT sql_handle, sql_text, plan_name,
  2  	    origin, enabled, accepted
  3  FROM dba_sql_plan_baselines
  4  WHERE sql_text LIKE 'select /*LOAD_AUTO*/%';
 
SQL_HANDLE		       SQL_TEXT 	    PLAN_NAME		   ORIGIN	 ENA ACC
------------------------------ -------------------- ---------------------- ------------- --- ---
SQL_6fe28d438dfc352f	       SELECT /*LOAD_AUTO*/ SQL_PLAN_6zsnd8f6zsd9g AUTO-CAPTURE  YES YES
				*		    54bc8843
			       FROM sh.sales
			       WHERE quantity_sold
			       > 40
			       ORDER BY prod_id
 
SQL_787b46133c9b0064	       SELECT /*LOAD_AUTO*/ SQL_PLAN_7hyu62cy9q034 AUTO-CAPTURE  YES YES
				*		    36cb9897
			       FROM   hr.regions

As you can see from the example above, even though we had three repeatable SQL statements, only two SQL plan baselines were created. The SQL statement executed in the OE schema did not have a SQL plan baseline automatically created for it because OE was not one of the schemas we told SPM we wanted to automatically capture SQL plan baselines for.

By selecting only the schemas that we are really interested in, we can keep the number of SQL plan baselines to a reasonable level, making it easier to manage them or move them between dev/test and production.
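
As a minimal sketch of that move, using the SPM staging table procedures (the staging table name here is illustrative), you pack the baselines into a staging table on the source system, move the table (e.g. with Data Pump), and unpack it on the target:

DECLARE
    l_plans_packed PLS_INTEGER;
BEGIN
    -- Create a staging table and pack the SQL plan baselines into it so
    -- the table can be exported and unpacked on another system.
    DBMS_SPM.CREATE_STGTAB_BASELINE(table_name => 'SPM_STGTAB');
    l_plans_packed := DBMS_SPM.PACK_STGTAB_BASELINE(table_name => 'SPM_STGTAB');
    DBMS_OUTPUT.PUT_LINE('Plans packed: ' || l_plans_packed);
END;
/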

Finally, if you want to remove any of the filters, you can simply set them to null.

BEGIN
  DBMS_SPM.CONFIGURE('AUTO_CAPTURE_PARSING_SCHEMA_NAME', '');
END;
/

When it comes to SQL tuning we often need to look at the execution plan for a SQL statement to determine where the majority of the time is spent. But how we generate that execution plan can have a big impact on whether or not the plan we are looking at is really the plan that is used.

The two most common methods used to generate the execution plan for a SQL statement are:

EXPLAIN PLAN command – This displays an execution plan for a SQL statement without actually executing the statement.

V$SQL_PLAN – A dynamic performance view introduced in Oracle 9i that shows the execution plan for a SQL statement that has been compiled into a cursor and stored in the cursor cache.

My preferred method is always to use V$SQL_PLAN (even though it requires the statement to at least begin executing) because under certain conditions the plan shown by the EXPLAIN PLAN command can be different from the plan that will actually be used when the query is executed.
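
The easiest way to display a plan from V$SQL_PLAN is the DBMS_XPLAN.DISPLAY_CURSOR function. A minimal sketch, where the SQL_ID is a placeholder for the statement you are interested in (omit the arguments to display the last statement executed in your session):

SELECT *
FROM   TABLE(DBMS_XPLAN.display_cursor(sql_id          => 'bjj643zga3mfu',
                                       cursor_child_no => 0));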

So, what can cause the plans to differ?

Bind Variables

When a SQL statement contains bind variables, the plan shown using EXPLAIN PLAN is not aware of bind variable values, while the plan shown in V$SQL_PLAN takes the bind variable values into account in the plan generation process. Let’s look at a simple example, using the customers table, which has 1,018 rows and an index on the C_ZIPCODE column.

SELECT COUNT(*) 
FROM   customers;
 
  COUNT(*)
----------
      1018
 
SELECT   c_zipcode, COUNT(*) 
FROM     customers 
GROUP BY c_zipcode;
 
 C_ZIPCODE   COUNT(*)
---------- ----------
     20001	  290
      2111	   81
     10018	  180
     90034	  225
     94102	  225
     94065	   17
 
 
var n NUMBER;
exec :n :=94065;
 
PL/SQL PROCEDURE successfully completed.
 
SELECT COUNT(c_email) 
FROM   customers 
WHERE  c_zipcode=:n;
 
COUNT(C_EMAIL)
--------------
	    17
 
SELECT * 
FROM TABLE(DBMS_XPLAN.display_cursor(format=>'typical +peeked_binds'));
 
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------
SQL_ID	bjj643zga3mfu, child NUMBER 0
-------------------------------------
SELECT COUNT(c_email) FROM customers WHERE c_zipcode=:n
 
Plan hash VALUE: 4213764942
 
----------------------------------------------------------------------
| Id  | Operation		      | Name	    | Rows  |  Bytes| 
----------------------------------------------------------------------
|   0 | SELECT STATEMENT	      |		    |	    |       |	  
|   1 |  SORT AGGREGATE               |		    |	  1 |   24  |	
|   2 |   TABLE ACCESS BY INDEX ROWID | CUSTOMERS   |	 17 |       | 
|*  3 |    INDEX RANGE SCAN	      | IND_CUST_ZIP|	 17 |	    |	  
----------------------------------------------------------------------
 
Peeked Binds (identified BY position):
--------------------------------------
   1 - :N (NUMBER): 94065 
 
Predicate Information (identified BY operation id):
---------------------------------------------------
   3 - access("C_ZIPCODE"=:N)
 
20 rows selected.
 
EXPLAIN PLAN FOR 
SELECT COUNT(c_email) 
FROM customers 
WHERE c_zipcode=:n;
 
Explained.
 
SQL> 
SQL> SELECT * FROM TABLE(DBMS_XPLAN.display());
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------
Plan hash VALUE: 296924608
----------------------------------------------------------------
| Id  | Operation	   | Name      | Rows  | Bytes | Cost |
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |	       |     1 |    24 |  7   | 
|   1 |  SORT AGGREGATE    |	       |     1 |    24 |      |	  
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |   170 |  4080 |  7   | 
-----------------------------------------------------------------
 
Predicate Information (identified BY operation id):
---------------------------------------------------
   2 - filter("C_ZIPCODE"=TO_NUMBER(:N))

When we query the actual plan used at execution from V$SQL_PLAN via the DBMS_XPLAN.DISPLAY_CURSOR command we get an index access plan and the cardinality estimate (estimated number of rows returned) is accurate at 17 rows.

However, when we use the EXPLAIN PLAN command for our statement, we get a full table scan plan and a cardinality estimate of 170 rows.

The first indication that the EXPLAIN PLAN command is not bind aware can be seen in the predicate information under the plan. There you will see the addition of a TO_NUMBER function to our bind variable :N, even though we declared the variable as a number.

Since no bind peeking occurs, the optimizer can’t use the histogram on the C_ZIPCODE column. Therefore it has to assume a uniform data distribution in the column and calculates the cardinality estimate as NUM_ROWS / NDV, or 1018/6 = 169.66, which rounded up is 170 rows.
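
As an aside, you can confirm the number of distinct values and the presence of the histogram on C_ZIPCODE with a quick query against USER_TAB_COL_STATISTICS:

SELECT column_name, num_distinct, histogram
FROM   user_tab_col_statistics
WHERE  table_name  = 'CUSTOMERS'
AND    column_name = 'C_ZIPCODE';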

Cursor_Sharing = FORCE

By setting the initialization parameter CURSOR_SHARING to FORCE, you are asking Oracle to replace the literal values in your SQL statements with system-generated bind variables (commonly known as literal replacement). The intent of literal replacement is to reduce the number of cursors generated in the shared pool. In the best-case scenario, only one cursor will be built for all statements that differ only in the literal values used.

Let’s take our original example and replace our bind variable :N with the literal value 94065 and see what happens when CURSOR_SHARING is set to FORCE and we use the EXPLAIN PLAN command.

ALTER SYSTEM SET cursor_sharing = force;
 
System altered.
 
SELECT COUNT(c_email) 
FROM   customers 
WHERE  c_zipcode=94065;
 
COUNT(C_EMAIL)
--------------
	    17
 
SELECT * 
FROM TABLE(DBMS_XPLAN.display_cursor(format=>'typical +peeked_binds'));
 
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------
SQL_ID	djn0jckqvy2gk, child NUMBER 0
-------------------------------------
SELECT COUNT(c_email) FROM customers WHERE c_zipcode=:"SYS_B_0"
 
Plan hash VALUE: 4213764942
 
---------------------------------------------------------------------
| Id  | Operation		     | Name	     | Rows  |Bytes | 
---------------------------------------------------------------------
|   0 | SELECT STATEMENT	     |		    |	    |	    |	  
|   1 |  SORT AGGREGATE 	     |		    |	  1 |	 24 |	
|   2 |   TABLE ACCESS BY INDEX ROWID| CUSTOMERS    |	 17 |	408 |	  
|*  3 |    INDEX RANGE SCAN	     | IND_CUST_ZIP |	 17 |	    |	  
---------------------------------------------------------------------
 
Peeked Binds (identified BY position):
--------------------------------------
   1 - :SYS_B_0 (NUMBER): 94065
 
Predicate Information (identified BY operation id):
---------------------------------------------------
   3 - access("C_ZIPCODE"=:SYS_B_0)
 
25 rows selected.
 
EXPLAIN PLAN FOR 
SELECT COUNT(c_email) 
FROM customers 
WHERE c_zipcode=94065;
 
Explained.
 
SELECT * FROM TABLE(DBMS_XPLAN.display());
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
Plan hash VALUE: 4213764942
--------------------------------------------------------------------
| Id  | Operation		     | Name	     | Rows  |Bytes| 
--------------------------------------------------------------------
|   0 | SELECT STATEMENT	     |		    |	    |	   |	  
|   1 |  SORT AGGREGATE              |		    |	  1 |	 24|	
|   2 |   TABLE ACCESS BY INDEX ROWID| CUSTOMERS    |	 17 |	408|	  
|*  3 |    INDEX RANGE SCAN	     | IND_CUST_ZIP |	 17 |	   |	  
--------------------------------------------------------------------
Predicate Information (identified BY operation id):
---------------------------------------------------
   3 - access("C_ZIPCODE"=94065)

This time the plan is the same in both cases, but if you look at the predicate information under both plans, you will notice that the EXPLAIN PLAN command did not do the literal replacement. It still shows the predicate as C_ZIPCODE=94065 instead of C_ZIPCODE=:SYS_B_0.

So, why didn’t the explain plan command do the literal replacement?

The cursor generated by an EXPLAIN PLAN command is not shareable by design. Since the cursor isn’t shared, there is no point in doing the literal replacement that would allow the cursor to be shared. Therefore the EXPLAIN PLAN command does not replace the literals.

To demonstrate that the EXPLAIN PLAN command cursors are not shared, I ran our example queries two more times and then queried V$SQL.

SELECT sql_id, sql_text, executions, child_number
FROM   v$sql
WHERE  sql_text LIKE '%SELECT count(c_email)%';
 
SQL_ID        SQL_TEXT                               EXECUTIONS CHILD_NUMBER
------------- -------------------------------------- ---------- ------------
djn0jckqvy2gk SELECT COUNT(c_email) FROM customers            3            0
              WHERE c_zipcode=:"SYS_B_0"
 
78h277aadmkku EXPLAIN PLAN FOR SELECT COUNT(c_email)          1            0
               FROM customers WHERE c_zipcode=94065
 
78h277aadmkku EXPLAIN PLAN FOR SELECT COUNT(c_email)          1            1
               FROM customers WHERE c_zipcode=94065
 
78h277aadmkku EXPLAIN PLAN FOR SELECT COUNT(c_email)          1            2
 
4 rows selected.

You will notice that the actual query had its literal value replaced by the system-generated bind :SYS_B_0, and only a single cursor (CHILD_NUMBER 0) was generated, which was executed three times.

For the EXPLAIN PLAN version of the statement, no literal replacement occurred, and each execution created a new child cursor (0, 1, 2), demonstrating that no cursor sharing occurs with the EXPLAIN PLAN command.

So what if I have a few extra cursors? What’s the big deal?

The big deal is that if you want to use any plan stability features, such as SQL plan baselines, you will not see their effect with EXPLAIN PLAN when CURSOR_SHARING is set to FORCE. Say you created the SQL plan baseline for the statement with the system-generated bind :SYS_B_0; if you then check which plan will be used via EXPLAIN PLAN, no literal replacement occurs, so no corresponding baseline will be found for the statement. You can see an example of this in a recent AskTOM question I answered.

Adaptive Plans

In Oracle Database 12c, adaptive plans enable the optimizer to defer the final plan decision for a statement until execution time.

The optimizer instruments its chosen plan (the default plan) with statistics collectors so that, at runtime, it can detect if its cardinality estimates differ greatly from the actual number of rows seen by the operations in the plan. If there is a significant difference, the plan, or a portion of it, can be automatically adapted to avoid suboptimal performance on the first execution of a SQL statement.

Currently, only join methods and parallel query distribution methods can adapt.

By default, the EXPLAIN PLAN command shows only the initial (default) plan chosen by the optimizer, whereas the DBMS_XPLAN.DISPLAY_CURSOR function displays the final plan used by the query, or the complete adaptive plan if you supply the additional format parameter ‘+adaptive’.
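
For example, to display the complete adaptive plan, including the operations that were considered but not ultimately used, pass the format parameter explicitly:

SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(format => '+adaptive'));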

Let’s look at a simple example of a two-table join that has an adaptive plan, to understand the difference between what you will see with EXPLAIN PLAN and with the DBMS_XPLAN.DISPLAY_CURSOR function.

EXPLAIN PLAN FOR
SELECT /*+ gather_plan_statistics*/ p.product_name
FROM   order_items2 o, product_information p
WHERE  o.unit_price = 15
AND    o.quantity > 1
AND    p.product_id = o.product_id;
 
Explained.
 
SELECT * FROM TABLE(DBMS_XPLAN.display());
 
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------
Plan hash VALUE: 983807676
-----------------------------------------------------------------------
| Id  | Operation		     | Name		      | Rows  | 
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT	     |			      |     4 |   
|   1 |  NESTED LOOPS		     |			      |     4 |   
|   2 |   NESTED LOOPS		     |			      |     4 |   
|*  3 |    TABLE ACCESS FULL	     | ORDER_ITEMS2	      |     4 |    
|*  4 |    INDEX UNIQUE SCAN	     | PRODUCT_INFORMATION_PK |     1 |       
|   5 |   TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION    |     1 |    
-----------------------------------------------------------------------
 
Predicate Information (identified BY operation id):
---------------------------------------------------
 
   3 - filter("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1)
   4 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")
 
Note
-----
   - this IS an adaptive plan
 
22 rows selected.
 
SELECT /*+ gather_plan_statistics*/ p.product_name
FROM   order_items2 o, product_information p
WHERE  o.unit_price = 15
AND    o.quantity > 1
AND    p.product_id = o.product_id;
 
SELECT * FROM TABLE(DBMS_XPLAN.display_cursor());
 
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------
SQL_ID	d3mzkmzxn264d, child NUMBER 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ p.product_name FROM order_items2
o, product_information p WHERE o.unit_price = 15   AND o.quantity > 1
AND p.product_id = o.product_id
 
Plan hash VALUE: 2886494722
 
------------------------------------------------------------------
| Id  | Operation	   | Name		 | Rows  | Bytes | 
------------------------------------------------------------------
|   0 | SELECT STATEMENT   |			 |	 |	 |     
|*  1 |  HASH JOIN	   |			 |     4 |   128 |     
|*  2 |   TABLE ACCESS FULL| ORDER_ITEMS2	 |     4 |    48 |
|   3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |     1 |    20 |     
------------------------------------------------------------------
 
Predicate Information (identified BY operation id):
---------------------------------------------------
 
   1 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")
   2 - filter(("O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1))
 
Note
-----
   - this IS an adaptive plan
 
 
27 rows selected.

As you can see, the initial plan the optimizer came up with was a NESTED LOOPS join, while the final plan was in fact a HASH JOIN. If you only used the EXPLAIN PLAN command, you would never know a completely different join method was used.

So, my advice is to use V$SQL_PLAN when reviewing the execution plan for a SQL statement, as it will show the plan actually used by the statement.
