I'm a huge fan of the QB_NAME hint.  It lets you name your query blocks, which is a big help with debugging.  Your queries won't run faster or slower with this hint, but it certainly helps when you have many subqueries and/or large UNION queries.  Also, I'm an old dog, so I still write my joins in the WHERE clause rather than the new hip way of writing them in the FROM clause.  Because of this I didn't know about a problem with QB_NAME until just this past week.
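As a quick refresher on what QB_NAME buys you: once a block is named, other hints can target it from anywhere in the statement.  A minimal sketch, using the classic EMP and DEPT demo tables (my assumption here, not tables from the test below):

select /*+ qb_name(outer_blk) full(@inner_blk d) */ e.ename
  from emp e
 where e.deptno in (select /*+ qb_name(inner_blk) */ d.deptno
                      from dept d
                     where d.loc = 'DALLAS');

The FULL hint at the top targets DEPT inside the subquery by its query block name, and the names show up in plans and traces, which is the debugging payoff.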

When using the QB_NAME hint with the new FROM clause join (often called ANSI join syntax), the QB_NAME is not used.  Below is an example that illustrates the issue.  The query is an outer join between two tables.  Oracle does a rewrite, making an inline view of the count on the big_tab table and then joining it to the allusers_tab table.

Of interest is that with the classic syntax, even though the QB_NAME is mauled by the rewrite, it does survive to be used in the table aliases; look below the plans in the Query Block Name / Object Alias section.  However, when the join is in the FROM clause (the newer ANSI syntax) the query is also rewritten, but this time the QB_NAME is completely gone.

I did a 10053 trace (a trace of a hard parse) on the ANSI join query.  At the top of the trace the query block signature gets changed.  The QB_NAME starts as:
signature (): qb_name=MAIN nbfros=1 flg=0

Then right after is this line:
signature (): qb_name=SEL$1 nbfros=2 flg=0

I believe nbfros is the number of FROM-clause objects in the query block: the MAIN block initially sees the ANSI join as a single object (nbfros=1), while the transformed SEL$1 block has both tables in its FROM list (nbfros=2), and it's the transformed block's signature that takes precedence over the original.

Also from the 10053 trace, I found it interesting that the join was actually converted back into the classic syntax.  I saw this in the trace:
Stmt: ******* UNPARSED QUERY IS *******
SELECT /*+ QB_NAME ("MAIN") */ "A"."USERNAME" "USERNAME",COUNT("B"."OWNER") "CNT" FROM "OP"."ALLUSERS_TAB" "A","OP"."BIG_TAB" "B" WHERE "A"."USERNAME"="B"."OWNER"(+) GROUP BY "A"."USERNAME" ORDER BY "A"."USERNAME"

However, this is not the query that was ultimately used; the one below is the final form.  Notice it still has the classic outer join syntax in it, and it still has the QB_NAME hint, but the hint is effectively ignored.

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT /*+ QB_NAME ("MAIN") */ "A"."USERNAME" "USERNAME",TO_NUMBER(TO_CHAR(NVL("VW_GBC_5"."ITEM_2",0))) "CNT" FROM  (SELECT "B"."OWNER" "ITEM_1",COUNT("B"."OWNER") "ITEM_2" FROM "OP"."BIG_TAB" "B" GROUP BY "B"."OWNER") "VW_GBC_5","OP"."ALLUSERS_TAB" "A" WHERE "A"."USERNAME"="VW_GBC_5"."ITEM_1"(+) ORDER BY "A"."USERNAME"

 

OK, now here is the test.  First, the code I used; the file name is ansi_join_test1.sql:

set serveroutput off

select /*+ qb_name(MAIN) */ a.username, count(owner) cnt
from allusers_tab a, big_tab b
where a.username = b.owner(+)
group by a.username
order by a.username
/

SELECT PLAN_TABLE_OUTPUT
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'TYPICAL IOSTATS LAST ALIAS'))
/

select /*+ qb_name(MAIN) */ a.username, count(owner) cnt
from allusers_tab a left outer join big_tab b on a.username = b.owner
group by a.username
order by a.username
/

SELECT PLAN_TABLE_OUTPUT
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'TYPICAL IOSTATS LAST ALIAS'))
/


The sample run -------------------------------------------------------------------------------------------

SQL> @ansi_join_test1

USERNAME                                             CNT
---------------------------------------- ---------------
ANONYMOUS                                              0
APEX_040200                                        96672

There are 48 total rows returned; I deleted most to conserve space.

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------

SQL_ID  0yy8murg5phnw, child number 0
-------------------------------------
select /*+ qb_name(MAIN) */ a.username, count(owner) cnt from
allusers_tab a, big_tab b where a.username = b.owner(+) group by
a.username order by a.username

Plan hash value: 2416054041

-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name        | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |             |      1 |        |       | 11755 (100)|          |     48 |00:00:00.34 |   42681 |
|   1 |  MERGE JOIN OUTER     |             |      1 |     48 |  1344 | 11755   (2)| 00:00:01 |     48 |00:00:00.34 |   42681 |
|   2 |   INDEX FULL SCAN     | USERNAME_PK |      1 |     48 |   432 |     1   (0)| 00:00:01 |     48 |00:00:00.01 |       5 |
|*  3 |   SORT JOIN           |             |     48 |     35 |   665 | 11754   (2)| 00:00:01 |     34 |00:00:00.34 |   42676 |
|   4 |    VIEW               | VW_GBC_5    |      1 |     35 |   665 | 11753   (2)| 00:00:01 |     35 |00:00:00.34 |   42676 |
|   5 |     HASH GROUP BY     |             |      1 |     35 |   210 | 11753   (2)| 00:00:01 |     35 |00:00:00.34 |   42676 |
|   6 |      TABLE ACCESS FULL| BIG_TAB     |      1 |   2422K|    13M| 11657   (1)| 00:00:01 |   2422K|00:00:00.16 |   42676 |
-------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$85E9EE9D
   2 - SEL$85E9EE9D / A@MAIN                         Notice MAIN is still here
   4 - SEL$E0E6E493 / VW_GBC_5@SEL$ED2A7381
   5 - SEL$E0E6E493
   6 - SEL$E0E6E493 / B@MAIN

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("A"."USERNAME"="ITEM_1")
       filter("A"."USERNAME"="ITEM_1")


USERNAME                                             CNT
---------------------------------------- ---------------
ANONYMOUS                                              0
APEX_040200                                        96672

There are 48 total rows returned; I deleted most to conserve space.

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------


SQL_ID  b9bvggwsyx6uy, child number 0
-------------------------------------
select /*+ qb_name(MAIN) */ a.username, count(owner) cnt from
allusers_tab a left outer join big_tab b on a.username = b.owner group
by a.username order by a.username

Plan hash value: 2416054041

-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name        | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |             |      1 |        |       | 11755 (100)|          |     48 |00:00:00.33 |   42681 |
|   1 |  MERGE JOIN OUTER     |             |      1 |     48 |  1344 | 11755   (2)| 00:00:01 |     48 |00:00:00.33 |   42681 |
|   2 |   INDEX FULL SCAN     | USERNAME_PK |      1 |     48 |   432 |     1   (0)| 00:00:01 |     48 |00:00:00.01 |       5 |
|*  3 |   SORT JOIN           |             |     48 |     35 |   665 | 11754   (2)| 00:00:01 |     34 |00:00:00.33 |   42676 |
|   4 |    VIEW               | VW_GBC_5    |      1 |     35 |   665 | 11753   (2)| 00:00:01 |     35 |00:00:00.33 |   42676 |
|   5 |     HASH GROUP BY     |             |      1 |     35 |   210 | 11753   (2)| 00:00:01 |     35 |00:00:00.33 |   42676 |
|   6 |      TABLE ACCESS FULL| BIG_TAB     |      1 |   2422K|    13M| 11657   (1)| 00:00:01 |   2422K|00:00:00.15 |   42676 |
-------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$6A6A6CC9
   2 - SEL$6A6A6CC9 / A@SEL$1                         Notice that MAIN is gone.
   4 - SEL$8D772734 / VW_GBC_5@SEL$920000A1
   5 - SEL$8D772734
   6 - SEL$8D772734 / B@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("A"."USERNAME"="ITEM_1")
       filter("A"."USERNAME"="ITEM_1")

SQL>

 To get the 10053 trace I used:
exec dbms_sqldiag.dump_trace(p_sql_id=>'b9bvggwsyx6uy',  p_child_number=>0, p_component=>'Optimizer');

To create the tables for the test, I used this:

drop table big_tab ;
drop table allusers_tab ;

create table big_tab as select * from all_objects;
insert /*+ append */ into big_tab select * from big_tab;
commit;
insert /*+ append */ into big_tab select * from big_tab;
commit;
insert /*+ append */ into big_tab select * from big_tab;
commit;
insert /*+ append */ into big_tab select * from big_tab;
commit;
insert /*+ append */ into big_tab select * from big_tab;

create index big_idx on big_tab(object_id);
create index big_objtype_idx on big_tab(object_type);

create table allusers_tab as select * from all_users ;

alter table allusers_tab
add constraint username_pk
primary key (username) ;

EXEC DBMS_STATS.SET_TABLE_PREFS('OP','BIG_TAB','METHOD_OPT','FOR ALL COLUMNS SIZE REPEAT');
EXEC DBMS_STATS.SET_TABLE_PREFS('OP','ALLUSERS_TAB','METHOD_OPT','FOR ALL COLUMNS SIZE REPEAT');

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'BIG_TAB',CASCADE=>TRUE,ESTIMATE_PERCENT=>100,METHOD_OPT=>'FOR ALL COLUMNS SIZE 1')
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER,'ALLUSERS_TAB',CASCADE=>TRUE,ESTIMATE_PERCENT=>100,METHOD_OPT=>'FOR ALL COLUMNS SIZE 1')

So you want to update that table in parallel, eh?  Up until 12.1.0.1 you had to first alter your session to enable parallel DML, then issue the statement, then turn it off if you didn't want other DML statements done in parallel.  Not a huge deal of course, but now there is an easier way.

(BTW, here DML, Data Manipulation Language, refers only to insert, update, and delete.  Technically a select statement is also DML, but it's not relevant in this particular case.)
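For reference, the old way looked roughly like this (a sketch, reusing the BIG_TAB table from my earlier posts):

alter session enable parallel dml;

update /*+ parallel */ big_tab set edition_name = 'BOB' where owner != 'SYS';
commit;

alter session disable parallel dml;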

So, a little example.  First I check to see if my session has parallel DML enabled.  Then I run an update with the ENABLE_PARALLEL_DML hint and check the plan.  The database version used in this example is 12.2.0.1.0.

SQL> select PDML_STATUS from v$session WHERE audsid = userenv('sessionid')
  2  /

PDML_STA
--------
DISABLED
SQL>
SQL> update /*+ ENABLE_PARALLEL_DML parallel */
  2  big_tab set EDITION_NAME = 'BOB'
  3  where owner != 'SYS';
SQL>
SQL> SELECT PLAN_TABLE_OUTPUT
  2  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR
  3  ('&psqlid','&PCHILDNO',FORMAT=>'typical allstats last alias'))
  4  /

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------

SQL_ID  1qrndwafn5vrc, child number 5
-------------------------------------
update /*+ ENABLE_PARALLEL_DML parallel */ big_tab set EDITION_NAME =
'BOB' where owner != 'SYS'

Plan hash value: 2335173333

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT         |          |      1 |        |       |  1622 (100)|          |        |      |            |
|   1 |  PX COORDINATOR          |          |      1 |        |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)    | :TQ10001 |      0 |   2353K|   161M|  1622   (1)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    INDEX MAINTENANCE     | BIG_TAB  |      0 |        |       |            |          |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE           |          |      0 |   2353K|   161M|  1622   (1)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND RANGE       | :TQ10000 |      0 |   2353K|   161M|  1622   (1)| 00:00:01 |  Q1,00 | P->P | RANGE      |
|   6 |       UPDATE             | BIG_TAB  |      0 |        |       |            |          |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR |          |      0 |   2353K|   161M|  1622   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|*  8 |         TABLE ACCESS FULL| BIG_TAB  |      0 |   2353K|   161M|  1622   (1)| 00:00:01 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - UPD$1
   8 - UPD$1 / BIG_TAB@UPD$1

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access(:Z>=:Z AND :Z<=:Z)
       filter("OWNER"<>'SYS')

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 8 because of degree limit

And boom!  It’s a parallel update!   Rock-n-Roll.  




If I didn't use the ENABLE_PARALLEL_DML hint but still had the parallel hint, the statement's plan looks like the one below.  Notice that the statement is running “in parallel” but only to find the rows; the update itself is applied serially.  Seriously.  Also there is now a note telling me that parallel DML is not enabled.

SQL_ID  8xyhtpc76rwfq, child number 1
-------------------------------------
update /*+ parallel */ big_tab set EDITION_NAME = 'BOB' where owner !=
'SYS'

Plan hash value: 3425284328

-------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Starts | E-Rows |E-Bytes| E-Time   |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |          |      3 |        |       |          |        |      |            |
|   1 |  UPDATE               | BIG_TAB  |      3 |        |       |          |        |      |            |
|   2 |   PX COORDINATOR      |          |      3 |        |       |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)| :TQ10000 |      0 |   2353K|   161M| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR |          |      0 |   2353K|   161M| 00:00:01 |  Q1,00 | PCWC |            |
|*  5 |      TABLE ACCESS FULL| BIG_TAB  |      0 |   2353K|   161M| 00:00:01 |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access(:Z>=:Z AND :Z<=:Z)
       filter("OWNER"<>'SYS')

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2
   - PDML is disabled in current session

Is a Cartesian a join method or a join type?  It turns out most of us have been thinking that MERGE JOIN CARTESIAN is a join method, but it's not.  Even the Oracle documentation has it wrong.  This is one of the small morsels of knowledge that came out of Maria Colgan's training day at the Hotsos Symposium (#hotsym18) this year.

Most of the time we see MERGE JOIN CARTESIAN in the plan, but you can have a NESTED LOOPS join produce a Cartesian as well.  The problem is that the Cartesian option doesn't show up for the NESTED LOOPS plan, so it's not as obvious that a Cartesian result is being produced.

A HASH join cannot be a Cartesian; it requires an equijoin and hence can’t be used to create a Cartesian result. 

So here are two simple plans showing the good old sort merge and nested loops producing a Cartesian result.
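(The plan screenshots didn't survive in this copy, but for context, queries shaped like these are what produced them; the table names are my assumption based on the row counts described below:)

-- no join predicate at all, so every employee row matches every department row
select e.ename, d.dname
  from employees e, departments d;

-- forcing nested loops gives the same Cartesian result, but the plan
-- just says NESTED LOOPS with a blank options column
select /*+ leading(d) use_nl(e) */ e.ename, d.dname
  from employees e, departments d;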




This plan is producing 739,584 rows from my 27,392-row employees table and 27-row departments table: 27,392 multiplied by 27.  It's unlikely this is really the result I want.



Here is the same Cartesian result being produced by a nested loop join method.  Notice a few things:

  • Notice the OPTIONS column is blank (it's the OPTI… column here)
  • There is no join predicate (same as sort merge plan)
  • LIOs go from 407 to 13,768
  • Elapsed time goes from 155,474ms to 200,157ms

Clearly the sort merge method did a lot less work than the nested loops method, which means the optimizer will likely pick sort merge for a Cartesian more often than nested loops.  And if you do get nested loops as the method on a Cartesian, it's not as obvious.  Which means you may have some nested loops in your plans producing Cartesian results and not know it.

Maria has logged a documentation bug on this and hopefully we’ll see the documentation correctly identifying this in the future.




At Hotsos we are all about performance.  And at the heart of most performance problems, more often than not, is a select statement.  In the past my go-to tool for performance optimization and training has been SQL*Plus.  Of late I'm moving more and more to SQL Developer, because it has changed from a tool that was so-so several years ago into a powerhouse of features and functionality.


In this short bit I just want to talk about the autotrace feature in particular and how to use it for performance optimization of a select.  Some key things to focus on when optimizing a query are logical IOs (LIOs), starts, and of course time, pretty much in that order.  Time is of course the goal: make it run fast.  How to make it faster?  Make sure it does less work.  How do you measure the work of a query?  LIOs.  Here I take a short look at how to use autotrace to do this.  I'm using the 17.4 drop of SQL Developer.

This is a query that returns some customer information and the last date they placed an order. The code looks like this:
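(The screenshot of the code is gone from this copy; based on the description, it was shaped something like this sketch.  The table and column names are my assumptions; the parts that matter are the scalar subquery with the MAX_BLOCK query block name and the DISTINCT:)

select /*+ qb_name(mainblk) gather_plan_statistics */ distinct
       c.cust_id, c.cust_name,
       (select /*+ qb_name(max_block) */ max(o2.order_date)
          from orders o2
         where o2.cust_id = c.cust_id) last_order_date
  from customers c, orders o
 where o.cust_id = c.cust_id;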



Just a quick tip: avoid DISTINCT when it's not needed.  It does invoke extra work, so make sure it's really necessary; here the code needs it because a customer could place more than one order per day.  Using autotrace in SQL Developer we can see this for the plan:



The column headings that are cut off are: LAST_STARTS, LAST_OUTPUT_ROWS, CARDINALITY, LAST_CR_BUFFER_GETS, LAST_ELAPSED_TIME and QBLOCK_NAME.  With these columns you have really good idea what is happening with your query. 


Looking at the top line you can see the query did 267,478 LIOs (LAST_CR_BUFFER_GETS) to get a result set of 1,000 rows (LAST_OUTPUT_ROWS).  That's an average of about 267 LIOs per row returned, which seems high.  Maybe it's OK, but one other thing gives me a strong hint that it isn't ideal: notice that the query block named MAX_BLOCK executes 1,310 times, and this repeated execution is driving the vast majority of the LIOs.  This part of the plan also takes the longest, which is often true of such repeated execution, but not always.

Rewriting the code to use an analytic to find the max date would look like this:
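(Again the screenshot is missing; under the same assumed names, the analytic form would be along these lines.  Each table is read once and the MAX is computed over a partition instead of in a repeatedly executed subquery:)

select /*+ qb_name(mainblk) gather_plan_statistics */ distinct
       c.cust_id, c.cust_name,
       max(o.order_date) over (partition by o.cust_id) last_order_date
  from customers c, orders o
 where o.cust_id = c.cust_id;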




Using autotrace on this code, this is the plan and statistics:




Wow!  That's great.  Notice that Starts is 1 all the way up and down the plan, and the total LIOs are down to an almost unbelievable 238 from 267,478.  And the time, of course, dropped to 0.22 seconds from 1.7 seconds.


A nice thing to do in SQL Developer is to put both code blocks in one editor window, then run autotrace on both so it's easy to compare the runs.  Here I've done that, and after each autotrace finished I right-clicked on the tab and renamed each.  A right click on either one then lets you compare it to the other (I right-clicked on SubQ to get what is shown below).


Below the plan is a boatload of stats collected by the autotrace, and doing the compare you can see how these have changed.  Ones that are different are in red, and nearly all of them are in red here.  (There are more below this screenshot and some are the same.)  The consistent gets (yet another name for LIOs) dropped way, way down.




SQL Developer doesn't really do anything that you can't do in something like SQL*Plus.  For example, at Hotsos we have a free tool called the Harness that collects all this same information in SQL*Plus.  But SQL Developer pulls all this information together for you in a very easy-to-navigate presentation.  This makes it a snap to read and analyze the problem, which then helps you make the right decisions.  If you're not using SQL Developer you really need to start.

Also, for great info on installing, configuring, and using SQL Developer, hop over to Jeff Smith's blog.  He's the main man for SQL Developer; he and his outstanding team have made it a powerful and indispensable part of SQL development.


Computers get faster, massively parallel processing is now standard, spinning disks are disappearing, and gigabit networks are common.  So with all this improvement in hardware speed, do we need to optimize our code anymore?

If you've been in the industry for a while you've seen this pattern before.  Hardware gets faster and cheaper, and what was a perceived problem before now seems solved by faster, better machines.  And that would be true if we stayed with the same software and data loads of times gone by.  But we don't.

A good example of this is just data volume.  Many years ago, a 20-megabyte hard drive was considered big.  (My first hard drive was as big as a bread box and held 5 megabytes, yes, five.)  Newer, faster, smaller drives came out.  Then it was easy to store hundreds of megabytes, then gigabytes, and now terabytes are commonplace.  What did this do?  In the past a company might only store a year or so of data online because they couldn't afford the storage for more.  Now they can easily store 10 or more years of data.

What did this do?  It demanded more of the computer to process all that data: faster CPUs, and more of them, were needed.  And code had to be more efficient.  In the Oracle world I work in, this drove different ways to store data, partitioning for example.  But the code also needed to access the data the right way.  The right index and predicate could mean the difference between getting a response in a reasonable amount of time and never having it complete.

Today we're seeing this all over again.  We have “big data” now, and we want to process it in the blink of an eye: slice and dice sales data to get that marketing edge on the competition, make that decision on what to invest in, or answer a thousand other questions.  We will continue to ask more of our data, and want it faster, as we accumulate more of it.

All the things I mentioned at the start give us the chance to make this happen.  What they don't mean is that we can write sloppy code and just hope the hardware will be fast enough to make up for it.

It's not good enough that code gets the right results; it still has to do that efficiently.  The right indexes still matter, the right predicates still matter, and the right statistics for the optimizer still matter.  In Oracle land the database engine can do a lot of amazing things, but it still isn't as good as we are.  We can write better code, and we all should strive for that in every piece of code we write.
When doing some work, the more efficiently we do it the better.  This is true in real life and with the queries we write in SQL.  Imagine you had a landscape supply company deliver a load of bricks you will use for a patio.  The problem is they are sitting in your driveway and the patio is in the back yard.  You could move them one at a time from the driveway to the back of the house, or you could get a wheelbarrow and move them several at a time.  Which way do you think you'll move the bricks faster?

The same can happen with our queries, and the STARTS column in the run time stats of the plan can help.  You will see this stat using the format option IOSTATS or ALLSTATS.  Starting in 12.2 you can also see it in the STAT lines of 10046 trace files.

You need information from a table within the query, and certainly a natural way to get it is to write a subquery.  For example, I want a report of customers, the number of orders, and the average order amount.  I could write it like this (the table names I'm using here don't match the story, but the query does the same thing):

select /*+ qb_name(mainblk) gather_plan_statistics */ a.username customer_name,
 (select /*+ qb_name(cntblk) */ nvl(count(*),0) from scalar_allobjects b where b.owner = a.username) cnt_orders,
 (select /*+ qb_name(avgblk) */ nvl(round(avg(object_id),2),0) from scalar_allobjects b where b.owner = a.username) avg_order_amt
from scalar_allusers a
order by a.username
/

When it runs, the run time stats look like this; notice the STARTS column in particular.

--------------------------------------------------------------------------------------------------
| Id  | Operation          | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                   |      1 |        |     48 |00:00:00.55 |     128K|
|   1 |  SORT AGGREGATE    |                   |     48 |      1 |     48 |00:00:00.27 |   64320 |
|*  2 |   TABLE ACCESS FULL| SCALAR_ALLOBJECTS |     48 |   2167 |  63383 |00:00:00.28 |   64320 |
|   3 |  SORT AGGREGATE    |                   |     48 |      1 |     48 |00:00:00.28 |   64320 |
|*  4 |   TABLE ACCESS FULL| SCALAR_ALLOBJECTS |     48 |   2167 |  63383 |00:00:00.30 |   64320 |
|   5 |  SORT ORDER BY     |                   |      1 |     48 |     48 |00:00:00.55 |     128K|
|   6 |   TABLE ACCESS FULL| SCALAR_ALLUSERS   |      1 |     48 |     48 |00:00:00.01 |       9 |
--------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - CNTBLK
   2 - CNTBLK  / B@CNTBLK
   3 - AVGBLK
   4 - AVGBLK  / B@AVGBLK
   5 - MAINBLK
   6 - MAINBLK / A@MAINBLK

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("B"."OWNER"=:B1)
   4 - filter("B"."OWNER"=:B1)

So it's doing 48 full table scans for each subquery, on the same table.  That's 96 full table scans on that table alone.  An index might help, but really the problem is that the subqueries are running way too many times.  How about a join of the two tables instead?  It has to be an outer join, because some customers have not placed any orders yet and wouldn't show up at all with an inner join.  Also we can't do a COUNT(*), because that counts the occurrence of a row and with an outer join we'd get a row even where there isn't a match; counting b.owner instead ignores the NULLs the outer join produces.  So the query would look like this:

select /*+ qb_name(mainblk) gather_plan_statistics */ a.username customer_name,
 nvl(count(b.owner),0) cnt_orders,
 nvl(round(avg(b.object_id),2),0) avg_order_amt
from scalar_allusers a, scalar_allobjects b
where b.owner(+) = a.username
group by a.username
order by a.username
/

When this one runs, its stats look like this:

---------------------------------------------------------------------------------------------------
| Id  | Operation           | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                   |      1 |        |     48 |00:00:00.02 |    1349 |
|   1 |  SORT GROUP BY      |                   |      1 |     35 |     48 |00:00:00.02 |    1349 |
|*  2 |   HASH JOIN OUTER   |                   |      1 |  75842 |  63397 |00:00:00.01 |    1349 |
|   3 |    TABLE ACCESS FULL| SCALAR_ALLUSERS   |      1 |     48 |     48 |00:00:00.01 |       9 |
|   4 |    TABLE ACCESS FULL| SCALAR_ALLOBJECTS |      1 |  75829 |  75829 |00:00:00.01 |    1340 |
---------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - MAINBLK
   3 - MAINBLK / A@MAINBLK
   4 - MAINBLK / B@MAINBLK

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("B"."OWNER"="A"."USERNAME")

Now there is only one full table scan on each table; notice the STARTS column this time.  It also does a significantly lower number of LIOs (the Buffers column): the first query did over 128,000 and this one 1,349.  And to boot, it's over ten times faster, no surprise when it's doing about 1% of the LIOs.  I ran this test many times and it was always over ten times faster.
 

The STARTS column is a great way to see if some part of the plan is running excessively.  If you see this, there are three basic attack patterns: turn it into a join (as done here), use the WITH clause to turn it into a subquery factor and join that into the main query, or put it into the FROM clause as an inline view joined to the other tables.
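For example, the WITH clause variation would run along these lines (same tables as above; a sketch, not a query from my test runs):

with order_stats as (
  select /*+ qb_name(aggblk) */ owner,
         count(*) cnt_orders,
         round(avg(object_id),2) avg_order_amt
    from scalar_allobjects
   group by owner
)
select /*+ qb_name(mainblk) */ a.username customer_name,
       nvl(o.cnt_orders,0) cnt_orders,
       nvl(o.avg_order_amt,0) avg_order_amt
  from scalar_allusers a, order_stats o
 where o.owner(+) = a.username
order by a.username
/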


All of these have the same net effect: join the table into the query rather than selecting from it in a subquery that runs repeatedly.  More often than not this is going to do less work and hence will be faster.


Here is the full script I used for this example:

set serveroutput off
column customer_name format a30
column cnt_orders    format 99,999,999
column avg_order_amt format 99,999,999.99

set termout off
select /*+ qb_name(mainblk) gather_plan_statistics */ a.username customer_name,
 (select /*+ qb_name(cntblk) */ nvl(count(*),0) from scalar_allobjects b where b.owner = a.username) cnt_orders,
 (select /*+ qb_name(avgblk) */ nvl(round(avg(object_id),2),0) from scalar_allobjects b where b.owner = a.username) avg_order_amt
from scalar_allusers a
order by a.username
/
set termout on

select * from table(dbms_xplan.display_cursor (format=>'iostats last alias'))
/

set termout off
select /*+ qb_name(mainblk) gather_plan_statistics */ a.username customer_name,
 nvl(count(b.owner),0) cnt_orders,
 nvl(round(avg(b.object_id),2),0) avg_order_amt
from scalar_allusers a, scalar_allobjects b
where b.owner(+) = a.username
group by a.username
order by a.username
/
set termout on

select * from table(dbms_xplan.display_cursor (format=>'iostats last alias'))
/

Creating the two tables:

create table scalar_allobjects as select * from all_objects;

alter table scalar_allobjects add (constraint allobjects_pkey primary key (object_id));

create table scalar_allusers as select * from all_users;

alter table scalar_allusers add (constraint allusers_pk primary key (user_id));

exec dbms_stats.gather_table_stats (user, 'scalar_allobjects', method_opt => 'for all columns size 1', -
     estimate_percent => 100, cascade => TRUE, no_invalidate => FALSE) ;
    
exec dbms_stats.gather_table_stats (user, 'scalar_allusers', method_opt => 'for all columns size 1', -
     estimate_percent => 100, cascade => TRUE, no_invalidate => FALSE) ;


With 12c, Oracle introduced SQL Plan Directives, which in my view really give the optimizer a chance to learn from its mistakes.  This is a step in the direction of a learning optimizer.  In a way, the optimizer takes notes on how something ran, and if it wasn't quite right, those notes help it not make the same mistake again.  In 12.1 directives are limited to just one type, DYNAMIC_SAMPLING, and in 12.2 another one appears, DYNAMIC_SAMPLING_RESULT.  Right now I'm going to focus on the first one.

So when it says the directive is to do dynamic sampling, is that like doing a table-level dynamic sample?  That seems like overkill, since the directives have column names associated with them.  I suspected they were column based, so I set out to prove this true or false.  The tool I used was the good old 10053 trace, a trace of a hard parse.

The test bears out that yes, the sampling is based on the columns of the directive.  The rest of this post is a summary of my test.  If anyone would like to run it on their own, let me know and I can send you the files to set up the test tables and the like.

I used the newer DBMS_SQLDIAG.DUMP_TRACE technique to get the 10053 trace.  This is very convenient, as I can run the query and then ask for a 10053 trace on a given SQL_ID and CHILD_NUMBER.
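For this test the call was along these lines, using the SQL_ID and child number you'll see in the plan below:

exec dbms_sqldiag.dump_trace(p_sql_id=>'gnshwskp49773', p_child_number=>2, p_component=>'Optimizer');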

Here is my test case SQL:
SQL> get jcomp_opt
  1  select /*+ qb_name(opt) */b.object_name, b.object_type, a.username
  2    from allusers_tab a, big_tab b
  3   where a.username = b.owner
  4     and b.object_type = 'PROCEDURE'
  5*    and a.username not in ('SYS','SYSTEM')

ALLUSERS_TAB is quite small with 48 rows, and BIG_TAB has 2,422,880 rows.  The “not in” predicate on the BIG_TAB table gives the optimizer some math issues and it overestimates the cardinality for BIG_TAB: at first it thinks it's getting about 65,000 rows when in reality it gets just under 3,000.  Because of this mismatch, after a couple of runs I see this for the plan:

SQL> select * from table(dbms_xplan.display_cursor('gnshwskp49773',2, 'allstats last'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------

SQL_ID  gnshwskp49773, child number 2
-------------------------------------
select /*+ qb_name(opt) */b.object_name, b.object_type, a.username
from allusers_tab a, big_tab b  where a.username = b.owner    and
b.object_type = 'PROCEDURE'    and a.username not in ('SYS','SYSTEM')

Plan hash value: 3435153054

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |      1 |        |   2784 |00:00:00.01 |    1773 |
|   1 |  NESTED LOOPS                        |                 |      1 |   1576 |   2784 |00:00:00.01 |    1773 |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| BIG_TAB         |      1 |   1610 |   2784 |00:00:00.01 |    1584 |
|*  3 |    INDEX RANGE SCAN                  | BIG_OBJTYPE_IDX |      1 |   3403 |   3424 |00:00:00.01 |     199 |
|*  4 |   INDEX UNIQUE SCAN                  | USERNAME_PK     |   2784 |      1 |   2784 |00:00:00.01 |     189 |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("B"."OWNER"<>'SYS' AND "B"."OWNER"<>'SYSTEM'))
   3 - access("B"."OBJECT_TYPE"='PROCEDURE')
   4 - access("A"."USERNAME"="B"."OWNER")
       filter(("A"."USERNAME"<>'SYS' AND "A"."USERNAME"<>'SYSTEM'))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 2 Sql Plan Directives used for this statement

Key for this test is the note about the use of the Plan Directives.    The directives on the table are:
OBJECT_NAME  DIR_ID               COL_NAME     OBJECT_TYPE  TYPE
------------ -------------------- ------------ ------------ -----------------------
BIG_TAB      11891983782874668880              TABLE        DYNAMIC_SAMPLING_RESULT
BIG_TAB      14819284582793040278 OBJECT_TYPE  COLUMN       DYNAMIC_SAMPLING
BIG_TAB      14819284582793040278              TABLE        DYNAMIC_SAMPLING
BIG_TAB      8668036953221628977  OBJECT_TYPE  COLUMN       DYNAMIC_SAMPLING
BIG_TAB      8668036953221628977  OWNER        COLUMN       DYNAMIC_SAMPLING
BIG_TAB      8668036953221628977               TABLE        DYNAMIC_SAMPLING
BIG_TAB      8700850869231480407               TABLE        DYNAMIC_SAMPLING_RESULT
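(For reference, a listing like the one above can be pulled by joining the directive views; a minimal sketch:)

select o.object_name, d.directive_id dir_id, o.subobject_name col_name,
       o.object_type, d.type
  from dba_sql_plan_directives d, dba_sql_plan_dir_objects o
 where d.directive_id = o.directive_id
   and o.object_name = 'BIG_TAB'
 order by o.object_name, d.directive_id;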

There are really four directives, but only the DYNAMIC_SAMPLING ones are of interest for this test.  So what are these dynamic sampling directives really doing?  To find out, I looked in the 10053 trace for this SQL_ID and child number, and I found three queries being run on the table: one for the first directive and two for the second:

Plan directive ID 8668036953221628977 has this one (I’ll refer to this one as 977 from now on, the last three digits of the directive ID):
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel  */ NVL(SUM(C1),0) FROM (SELECT /*+ qb_name("innerQuery") NO_INDEX_FFS( "B")  */ 1 AS C1 FROM "BIG_TAB" "B" WHERE ("B"."OBJECT_TYPE"='PROCEDURE') AND ("B"."OWNER"<>'SYS') AND ("B"."OWNER"<>'SYSTEM')) innerQuery (objid = 9445728533958271359)

Plan directive ID 14819284582793040278 has these two (I’ll refer to this one as 278):
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel  OPT_ESTIMATE(@"innerQuery", TABLE, "B", ROWS=1610) */ NVL(C1,0), NVL(C2,0), NVL(C3,0) FROM (SELECT /*+ qb_name("innerQuery") INDEX( "B" "BIG_OBJTYPE_IDX")  */ COUNT(*) AS C1, 4294967295 AS C2, COUNT(*) AS C3  FROM "BIG_TAB" "B" WHERE ("B"."OBJECT_TYPE"='PROCEDURE')) innerQuery (objid = 12743291823137504172)

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel  OPT_ESTIMATE(@"innerQuery", TABLE, "B", ROWS=1610) OPT_ESTIMATE(@"innerQuery", INDEX_SCAN, "B", "BIG_OBJTYPE_IDX", ROWS=3424) */ NVL(C1,0), NVL(C2,0), NVL(C3,0) FROM (SELECT /*+ qb_name("innerQuery") INDEX( "B" "BIG_OBJTYPE_IDX")  */ COUNT(*) AS C1, 4294967295 AS C2, COUNT(*) AS C3  FROM "BIG_TAB" "B" WHERE ("B"."OBJECT_TYPE"='PROCEDURE')) innerQuery (objid = 12743291823137504172)

When I ran the query for 977 (after taking out the last bit starting with innerQuery), it's really just counting up how many rows in BIG_TAB match the given predicates on the two columns.  In this case it's 2,784.  This is then used to recalculate the cardinality of BIG_TAB from 65,483 to 1,610 for the full table scan estimate.  This line is in the 10053 trace after the select doing the count:

Single Tab Card adjusted from 65483.243243 to 1610.000000 due to adaptive dynamic sampling

For 278 it's really about using the index on the OBJECT_TYPE column.  Notice that these queries (which run after 977) both use the 1,610 number as a corrected cardinality via an OPT_ESTIMATE hint.  And the second one also uses the value retrieved from the first one (3,424) as a corrected cardinality on the index, again with an OPT_ESTIMATE hint.  Clearly the selects associated with the SQL plan directives work together and build on each other.

Both of the queries for 278 do counts on the table for just the OBJECT_TYPE predicate.  I'm not sure why the count is done twice in both queries, as the C1 and C3 columns in the inner query, and the 4,294,967,295 literal for C2 is odd as well.  That number only appears in these queries; it's nowhere else in the trace.  It is the max value for an unsigned 32-bit integer, which is interesting, but I'm not sure what that has to do with anything.  For this test, the number both queries come back with is 3,424 for both C1 and C3.  This value is then used for the index cardinality.  This line is in the 10053 trace after the two selects doing the counts are done:

Index Card adjusted from 65893.938102 to 3424.000000 due to adaptive dynamic sampling

Maybe the two counts for C1 and C3 can be different in other versions of these queries, but here they are both the exact same thing, COUNT(*), so they can't differ in this version of the query.  I may investigate this more in the future, but my testing this time is done.

The conclusion of the test is yes, the dynamic sampling really is done on the columns listed in the directive, and it samples with exactly the query's predicates, hence it should get very good counts to base the plan on.  The bad news is that this sampling could take some time on really big tables with lots of directives, which is likely why this feature is turned off by default in 12.2.  Hopefully the great folks out in Redwood Shores will figure out a way to rein it in a bit and make it the great feature it appears to be.
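If you want to experiment with it in 12.2, the switch is the OPTIMIZER_ADAPTIVE_STATISTICS parameter, which is what now defaults to FALSE (12.2 split the old OPTIMIZER_ADAPTIVE_FEATURES parameter in two; this is the half that governs directive-driven dynamic sampling):

alter session set optimizer_adaptive_statistics = true;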


A while back I wrote a post about getting a better idea of index usage using a query on v$sql_plan_statistics_all.  You can see that post here.  New in 12.2 is a much better way to monitor index usage.  While I believe it still falls a little short of the mark, it's hugely better than the YES/NO flag of days before.

One shortcoming is that it still counts collecting stats as a use of the index.  OK, I get it, the index was scanned to collect the stats, but that is not a use the majority of us are interested in.  What we want to know is: when was it used to satisfy a query?  This means that realistically no index will ever have a zero for usage, since even unused indexes are going to have stats collected on them.

This tracking is on by default in 12.2, which is good, and by default it uses some sort of sampling technique.  You can set it to catch all uses of the index, but that may well have a negative impact on performance in a high-use system.  Thanks to Franck Pachot for his post showing the parameter to do this; it can be set at the system or session level:

ALTER SESSION SET "_iut_stat_collection_type"=ALL;
ALTER SESSION SET "_iut_stat_collection_type"=SAMPLED;

OK, so how about a little test.  One note: the flush of the collected information only happens every 15 minutes, so if you run this yourself you'll need to wait 15 minutes to see the result of the final query.

set echo on
set feedback on
ALTER SESSION SET "_iut_stat_collection_type"=ALL;
drop table emp99 purge;

create table emp99 as select * from emp;

create index emp99_ename on emp99 (ename);

exec dbms_stats.gather_table_stats(ownname=> 'OP', tabname => 'EMP99');


COLUMN OWNER FORMAT A6
COLUMN NAME FORMAT A11
COLUMN TOTAL_ACCESS_COUNT HEADING ACCESS_CNT FORMAT 999,999
COLUMN TOTAL_EXEC_COUNT HEADING EXEC_CNT FORMAT 999,999
COLUMN TOTAL_ROWS_RETURNED HEADING RETURNED_ROWS FORMAT 999,999

SELECT owner, name, total_access_count,
 total_exec_count, total_rows_returned, last_used
FROM   dba_index_usage
where name = 'EMP99_ENAME'
ORDER BY owner, name;

ALTER SESSION SET "_iut_stat_collection_type"=SAMPLED;

The output from the query (after waiting 15 minutes) was this:
SQL>
SQL> SELECT owner, name, total_access_count,
  2   total_exec_count, total_rows_returned, last_used
  3  FROM   dba_index_usage
  4  where name = 'EMP99_ENAME'
  5  ORDER BY owner, name;

OWNER  NAME        ACCESS_CNT EXEC_CNT RETURNED_ROWS LAST_USED
------ ----------- ---------- -------- ------------- ---------
OP     EMP99_ENAME          1        1            14 28-AUG-17

1 row selected.

So pretty clearly it's counting the stats collection as a usage.  There is also a set of columns in the view that gives you a histogram-like view of the usage of the index.

BUCKET_0_ACCESS_COUNT
BUCKET_1_ACCESS_COUNT
BUCKET_2_10_ACCESS_COUNT
BUCKET_2_10_ROWS_RETURNED
BUCKET_11_100_ACCESS_COUNT
BUCKET_11_100_ROWS_RETURNED
BUCKET_101_1000_ACCESS_COUNT
BUCKET_101_1000_ROWS_RETURNED
BUCKET_1000_PLUS_ACCESS_COUNT
BUCKET_1000_PLUS_ROWS_RETURNED

The access buckets appear to mean just that: how many accesses returned a number of rows in that range.  Interestingly, a stats collection counts as returning the number of rows it read.  For example, my EMP99 table has 14 rows in it, and that run showed up in the 11-to-100 bucket.  The rows-returned columns also mean what they say.  Notice there is no rows-returned bucket for the first two access buckets; that's because those accesses return either no rows or one row, so the row count is implied.  The other buckets cover a range of rows returned, so the view also tracks how many rows were really returned per access in those buckets.  Pretty cool really.

For example, here I've run a query that returned one row twice and a query that returned no rows once, plus the stats collection, and I see this output for the buckets (not all of them, just the first few; the rest were 0).  I used COLUMN commands to format the output.
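The queries themselves aren't shown here; they would have been something like this (the predicate values are my assumption, leaning on the classic EMP data):

select * from emp99 where ename = 'KING';    -- returns 1 row, run twice
select * from emp99 where ename = 'NOBODY';  -- returns 0 rows, run once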


SQL> select  BUCKET_0_ACCESS_COUNT,  BUCKET_1_ACCESS_COUNT, BUCKET_2_10_ACCESS_COUNT,
  2  BUCKET_2_10_ROWS_RETURNED, BUCKET_11_100_ACCESS_COUNT, BUCKET_11_100_ROWS_RETURNED
  3  FROM   dba_index_usage
  4  where name = 'EMP99_ENAME';

 A_0  A_1 A_2_10 R_2_10 A_11_100 R_11_100
---- ---- ------ ------ -------- --------
   1    2      0      0        1       14

With this information it's much easier to know which indexes are in use and which ones are not.  And that makes it much easier to determine which indexes you need to keep and which ones deserve a serious look to see if you really need them.


A few years back I posted a block of code that takes as input a SQL file with one SQL statement in it and returns the SQL_ID and HASH_VALUE using DBMS_SQL_TRANSLATOR.  The post is here.  It turns out there was a pretty big flaw in that code: it assumed the only slash (/) would be the one at the end of the statement.  Ack!  Of course if you use /*+ for a hint or /* for a comment, it would trim the file at the first slash it found, which is clearly wrong.
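For example, a file like this one would have been chopped off right after the hint's closing slash by the old version (EMP here is just for illustration):

select /*+ full(e) */ ename, sal
  from emp e
/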

So here is the new code:
set termout on heading off feedback off verify off
-- File name hgetsqlid.sql
-- Get the SQLID/HASH for a SQL statement in a file
-- The file must contain only ONE SQL statement
-- The Directory "SQL_FILE_LOCATION" must be defined
-- This is the location where this will read the file from
-- Example
-- CREATE OR REPLACE DIRECTORY SQL_FILE_LOCATION AS '/home/oracle/OP';
-- The file MUST end with a "/" on last line
-- Example:
-- Select * from emp
-- /
--
-- May 2015 RVD initial coding
-- Aug 2017 RVD fixed issue where finding the / in a comment or hint would trim the file too short
--              some formatting and other minor changes

set tab off
set serveroutput on
column directory_path format a100 wrap
prompt *********************************************
prompt Get SQL_ID and HASH VALUE for a SQL statement
prompt One statement in the file and must end with /
prompt Current setting of SQL_FILE_LOCATION:
prompt *********************************************
select directory_path from dba_directories where directory_name = 'SQL_FILE_LOCATION';
prompt *********************************************
accept hgetsqlid_file prompt 'Enter the full file name (with extension): '

DECLARE
    v_bfile BFILE;
    v_clob  CLOB;
    v_sqlid VARCHAR2(13);
    v_sqlhash number;
    v_slash integer := 0;
    e_noslash exception;
BEGIN
    v_bfile := BFILENAME ('SQL_FILE_LOCATION', '&hgetsqlid_file');
    IF DBMS_LOB.FILEEXISTS (v_bfile) = 1 THEN
        DBMS_LOB.OPEN (v_bfile);
        DBMS_LOB.CREATETEMPORARY (v_clob, TRUE, DBMS_LOB.SESSION);
        DBMS_LOB.LOADFROMFILE (v_clob, v_bfile, DBMS_LOB.GETLENGTH (v_bfile));
        DBMS_LOB.CLOSE (v_bfile);
        -- remove all carriage returns (ASCII 13) from the clob
        -- each line must end with only a line feed (ASCII 10)
        v_clob := replace(v_clob, CHR(13) , '');
        -- trim off trailing spaces at the end of the file
        v_clob := rtrim(v_clob);
        -- trim off anything else at the end back to the /
        while (dbms_lob.substr(v_clob,1,(DBMS_LOB.GETLENGTH (v_clob)))) <> '/'
          loop
           DBMS_LOB.TRIM (v_clob,(DBMS_LOB.GETLENGTH(v_clob))-1);
          end loop;
        -- remove any spaces or tabs (ASCII 9) at the ends of lines
        while DBMS_LOB.INSTR (v_clob, ' '||CHR(10)) > 0 or
              DBMS_LOB.INSTR (v_clob, CHR(9)||CHR(10)) > 0
             loop
             v_clob := replace(v_clob, ' '||CHR(10), CHR(10));
             v_clob := replace(v_clob, CHR(9)||CHR(10), CHR(10));
             end loop;
        -- Find the / at the end of the file
        v_slash := DBMS_LOB.INSTR (v_clob,'/',DBMS_LOB.GETLENGTH(v_clob));
        IF v_slash = 0 THEN RAISE e_noslash; END IF;
        -- remove the line with the slash and everything after it
        DBMS_LOB.TRIM (v_clob,v_slash-2);
        v_sqlid   :=  DBMS_SQL_TRANSLATOR.SQL_ID (v_clob);
        v_sqlhash :=  DBMS_SQL_TRANSLATOR.SQL_HASH (v_clob);
        dbms_output.put_line ('*************************');
        dbms_output.put_line ('The SQL ID is '||v_sqlid);
        dbms_output.put_line ('Hash value is '||v_sqlhash);
        dbms_output.put_line ('*************************');
    ELSE
        dbms_output.put_line ('** File not found **');
    END IF;
    EXCEPTION
        when e_noslash then
        dbms_output.put_line ('-+-+-+-+-+-+-+-+-');
        dbms_output.put_line ('Slash not found!');
        dbms_output.put_line ('-+-+-+-+-+-+-+-+-');
END;
/

set serveroutput off
set heading on

And an example of using the code:

SQL> @hgetsqlid
*********************************************
Get SQL_ID and HASH VALUE for a SQL statement
One statement in the file and must end with /
Current setting of SQL_FILE_LOCATION:
*********************************************

C:\OP
*********************************************
Enter the full file name (with extension): with.sql
*************************
The SQL ID is 2n63z3ab978kn
Hash value is 2526257748
*************************
SQL> @with

       COUNT(*)
---------------
          75875
          81280
SQL> select PREV_SQL_ID,PREV_CHILD_NUMBER  from v$session WHERE audsid = userenv('sessionid');

PREV_SQL_ID   PREV_CHILD_NUMBER
------------- -----------------
2n63z3ab978kn                 0
SQL>


SQL> get with
  1  select /*+ qb_name(allobjs) */ count(*)
  2    from withlab_allobjects a,
  3  (select /*+ qb_name(owners1) */distinct owner username
  4     from withlab_allobjects ) owners
  5   where a.owner = owners.username
  6  union all
  7  select /*+ qb_name(dbaobjs) */ count(*)
  8    from withlab_dbaobjects d,
  9  (select /*+ qb_name(owners2) */distinct owner username
 10     from withlab_allobjects ) owners
 11*  where d.owner = owners.username
SQL>
Enjoy!!
In case you missed this, here I am on video giving a 2 Minute Tech Tip on QB_NAME.  There is a rumor that I mention my favorite watering hole in it; you'll have to watch to see if I do or not.

https://www.youtube.com/watch?v=3t5NHtPKji0