This blog provides useful information and Tips on Oracle database, Oracle SQL, Oracle PL/SQL, DBMS job scheduler, oracle performance tuning, Oracle SQL Query Optimization, Oracle Interview Questions, Various Oracle Error Codes, Oracle Regular Expressions and String Aggregation for Oracle DBA and Database Developers.
In this blog post, I am referring to the benchmark results on Oracle Cloud Infrastructure published by Accenture in Oct-2017. I read this White Paper last week and decided to cover some key points:
Accenture conducted hundreds of tests across multiple applications, looking at performance, stability and costs. Accenture examined Oracle Database, and its associated applications and networking, in the cloud. These tests compared Oracle's cloud offerings with those of another leading cloud vendor.
The test results were very positive for Oracle. Using Oracle’s Cloud Infrastructure, Accenture was able to execute OLTP transactions up to 7.8 times faster, compared to the other cloud provider. On the other hand, when Accenture attempted to replicate these tests with the other cloud provider, they were unable to achieve the same kinds of results in terms of total transactions processed. By adding more processing power, they were eventually able to do so—but the additional hardware meant significant additional expense. In the end, the tests showed that the Oracle Cloud Infrastructure can provide up to a more than eight-fold reduction in costs for a given amount of capacity, compared to the other cloud provider. Figure 1:
Accenture’s analysis of the network capabilities of Oracle Cloud Infrastructure found high levels of consistency and performance. Accenture researchers found that Oracle Cloud Infrastructure provided much lower latency than the other cloud when connecting between zones or different data centers within a single region. In fact, Oracle’s peak latency levels were up to nearly five times less than those of the other cloud. Figure 2:
Researchers also found that Oracle Cloud Infrastructure provided better scalability and high availability. It is worth noting that a variety of features found in Oracle are not available in other public clouds. For example, Oracle Cloud offers Oracle Real Application Clusters (RAC).
Finally, researchers simulated a full data-center outage multiple times to test business continuity. In the test, which simulated an outage of the primary data center, the PeopleSoft application being run was down for less than 20 seconds.
My take on the above benchmark results and Oracle Cloud: In my opinion, Oracle has done great work by minimizing network latency on Bare Metal. In Oracle 18c, features like Exadata, RAC, ASM, Active Data Guard and Flashback Technology also give a huge performance boost. Oracle also guarantees, in the contract, to cut your Amazon bill in half. So Oracle Cloud provides everything anyone could ask for: performance, features and cost effectiveness. Oracle is going to give a tough time to other service providers like Amazon and Microsoft in the cloud space, and consumers will be the ones who get the benefits (features, service and low pricing).
ORA-30009: Not enough memory for CONNECT BY operation Cause: The memory size was not sufficient to process all the levels of the hierarchy specified by the CONNECT BY clause. Action: In WORKAREA_SIZE_POLICY=AUTO mode, set PGA_AGGREGATE_TARGET to a reasonably larger value. Or, in WORKAREA_SIZE_POLICY=MANUAL mode, set SORT_AREA_SIZE to a reasonably larger value.
With this post, I am sharing a very simple tip which I recently used to avoid ORA-30009. Last week one of my colleagues faced ORA-30009, and when he googled it, the following two solutions were given on most sites: 1. Use a pipelined function with a loop instead of CONNECT BY. 2. Increase PGA_AGGREGATE_TARGET, because this error comes from a shortage of PGA memory.
Both of the above solutions will work, but in many cases ORA-30009 does not require either of those steps and can be fixed by breaking the SQL into two or more smaller row sets and combining them with a Cartesian product.
Let's do a quick demo:
nimish@garg> select count(*) from dual connect by level <= 100000000;
select count(*) from dual connect by level <= 100000000
*
ERROR at line 1:
ORA-30009: Not enough memory for CONNECT BY operation
Great! We have reproduced ORA-30009; now let's fix it :)
nimish@garg> select count(*) from
  2  (select level from dual connect by level <= 10000),
  3  (select level from dual connect by level <= 10000);

  COUNT(*)
----------
 100000000
Done! I hope you have enjoyed this little trick on ORA-30009. Feel free to comment.
Ever since I wrote Top features of Oracle 12c R2 database, many of my friends have asked me which one is my favourite. I loved a lot of the features of the Oracle 12c database, but as a developer my favourites are the In-Memory feature, JSON support and the enhancements to table partitioning. I think Oracle has done a great job by making maintenance of partitions very easy and almost fully online. So with this post I am trying to cover the top 10 new partitioning features of the Oracle 12c database (both R1 and R2).
1. Automatic List Partitioning With Automatic List Partitioning, Oracle Database 12.2 will create a new partition for every distinct value of the list partitioning key. Example:
CREATE TABLE CUSTOMERS
(
    CUST_ID        NUMBER NOT NULL,
    CUST_NAME      VARCHAR2 (200 BYTE) NOT NULL,
    STREET_ADDRESS VARCHAR2 (40 BYTE) NOT NULL,
    CITY_ID        NUMBER NOT NULL,
    COUNTRY_CD     VARCHAR2 (5) NOT NULL,
    CUST_PHONE     VARCHAR2 (50 BYTE),
    JOINING_DATE   DATE
)
PARTITION BY LIST (COUNTRY_CD) AUTOMATIC
(
    PARTITION CUSTOMERS_IND VALUES ('IND'),
    PARTITION CUSTOMERS_USA VALUES ('USA')
);
You can even alter an existing list-partitioned table to Automatic List Partitioning by simply running
ALTER TABLE CUSTOMERS SET PARTITIONING AUTOMATIC;
2. Partition a Non-partitioned Table Online From Oracle 12c R2, we can easily convert a non-partitioned table to a partitioned table online (along with its indexes) without using DBMS_REDEFINITION. Let's suppose the CUSTOMERS table is non-partitioned:
ALTER TABLE CUSTOMERS MODIFY
PARTITION BY RANGE (JOINING_DATE)
(
    PARTITION CUSTOMERS_P2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','DD-MON-YYYY')),
    PARTITION CUSTOMERS_P2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','DD-MON-YYYY')),
    PARTITION CUSTOMERS_P2017 VALUES LESS THAN (TO_DATE('01-JAN-2018','DD-MON-YYYY'))
) ONLINE
UPDATE INDEXES
(
    CUSTOMERS_PK GLOBAL,
    CUST_JOININGDT_INDX LOCAL
);
3. Table Creation for Partition Exchange Before Oracle 12c R2, all of us must have faced "ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION" a lot of times. With the "FOR EXCHANGE WITH TABLE" clause, Oracle 12c R2 has made our life easier by creating the new table empty and with the correct structure:
CREATE TABLE CUSTOMERS_TEMP FOR EXCHANGE WITH TABLE CUSTOMERS;
and now we can use CUSTOMERS_TEMP in ALTER TABLE EXCHANGE PARTITION without any issue
ALTER TABLE CUSTOMERS EXCHANGE PARTITION CUSTOMERS_P2017 WITH TABLE CUSTOMERS_TEMP WITHOUT VALIDATION UPDATE GLOBAL INDEXES;
4. Multiple Partition Management From Oracle 12c R2, we can do most partition maintenance tasks on multiple partitions in a single command, for example: a) Add multiple partitions
alter table CUSTOMERS add
    PARTITION CUSTOMERS_P2018 VALUES LESS THAN (TO_DATE('01-JAN-2019','DD-MON-YYYY')),
    PARTITION CUSTOMERS_P2019 VALUES LESS THAN (TO_DATE('01-JAN-2020','DD-MON-YYYY')),
    PARTITION CUSTOMERS_P2020 VALUES LESS THAN (TO_DATE('01-JAN-2021','DD-MON-YYYY'));
b) Drop multiple partitions
alter table CUSTOMERS drop partitions CUSTOMERS_P2018, CUSTOMERS_P2019, CUSTOMERS_P2020;
c) Truncate multiple partitions
alter table CUSTOMERS truncate partitions CUSTOMERS_P2018, CUSTOMERS_P2019, CUSTOMERS_P2020;
d) Merge multiple partitions
alter table CUSTOMERS merge partitions CUSTOMERS_P2018, CUSTOMERS_P2019, CUSTOMERS_P2020 into partition CUSTOMERS_P2018_2020;
e) Split multiple partitions
alter table CUSTOMERS split partition CUSTOMERS_P2018 into
(
    partition CUSTOMERS_P2018Q1 values less than (TO_DATE('01-APR-2018','DD-MON-YYYY')),
    partition CUSTOMERS_P2018Q2 values less than (TO_DATE('01-JUL-2018','DD-MON-YYYY')),
    partition CUSTOMERS_P2018Q3 values less than (TO_DATE('01-OCT-2018','DD-MON-YYYY')),
    partition CUSTOMERS_P2018Q4
);
5. Partial Indexes With 12c, we can create an index on a subset of partitions. We need to set INDEXING OFF for the individual partitions while creating the table:
CREATE TABLE CUSTOMERS
(
    CUST_ID        NUMBER NOT NULL,
    CUST_NAME      VARCHAR2 (200 BYTE) NOT NULL,
    STREET_ADDRESS VARCHAR2 (40 BYTE) NOT NULL,
    CITY_ID        NUMBER NOT NULL,
    COUNTRY_CD     VARCHAR2 (5) NOT NULL,
    CUST_PHONE     VARCHAR2 (50 BYTE),
    JOINING_DATE   DATE
)
PARTITION BY RANGE (JOINING_DATE)
(
    PARTITION CUSTOMERS_P2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','DD-MON-YYYY')) INDEXING OFF,
    PARTITION CUSTOMERS_P2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','DD-MON-YYYY')),
    PARTITION CUSTOMERS_P2017 VALUES LESS THAN (TO_DATE('01-JAN-2018','DD-MON-YYYY'))
);
or we can alter them later
ALTER TABLE CUSTOMERS MODIFY PARTITION CUSTOMERS_P2015 INDEXING OFF;
Now to create Partial Indexes, we need to add INDEXING PARTIAL clause while creating index.
CREATE INDEX CUST_NAME_IDX ON CUSTOMERS(CUST_NAME) LOCAL INDEXING PARTIAL;
6. TRUNCATE Partition Cascade With 12c R1, we can use the CASCADE keyword with the TRUNCATE PARTITION command to also truncate the referencing child table partitions. The foreign keys must be defined with ON DELETE CASCADE for TRUNCATE PARTITION .. CASCADE to work.
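As an illustration, assuming a reference-partitioned child table of CUSTOMERS whose foreign key is defined with ON DELETE CASCADE, the command would look like this:

```sql
-- truncates CUSTOMERS_P2017 and the matching partition(s) of the
-- child table in one statement; requires the FK to be ON DELETE CASCADE
ALTER TABLE CUSTOMERS TRUNCATE PARTITION CUSTOMERS_P2017 CASCADE UPDATE INDEXES;
```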
7. Moving Partitions Online With 12c, we can move table partitions and sub-partitions online, i.e. without blocking DML operations. Parallel DML and direct path loads are not supported on a partition while it is being moved online.
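A minimal sketch, using the CUSTOMERS table from above (the target tablespace name USERS is just an illustrative value):

```sql
-- move one partition while concurrent DML continues;
-- the ONLINE clause also keeps the indexes usable
ALTER TABLE CUSTOMERS MOVE PARTITION CUSTOMERS_P2016
    TABLESPACE USERS ONLINE;
```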
8. Asynchronous Global Index Maintenance In pre-12c databases, global indexes used to become UNUSABLE after various partition maintenance activities and needed to be rebuilt manually. With Oracle 12c, if we DROP or TRUNCATE a partition, the global index is not marked unusable and index maintenance is done asynchronously behind the scenes.
9. Incremental Statistics for Partitioned Tables With incremental statistics, Oracle can gather statistics at the partition level and derive accurate global-level statistics from them, reducing time and system resource utilization. Before Oracle 12c, partition-level statistics were considered stale as soon as any DML had been executed against the partition, and stale partitions could not be used for incremental statistics. Now we can set an incremental staleness threshold so that even after some DML, partition statistics can still be used for incremental statistics.
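For example, the relevant DBMS_STATS preferences can be set per table (the 10 percent threshold below is just an illustrative value):

```sql
-- enable incremental statistics for the table
EXEC DBMS_STATS.SET_TABLE_PREFS(USER, 'CUSTOMERS', 'INCREMENTAL', 'TRUE');
-- treat a partition as non-stale until STALE_PERCENT of its rows have changed
EXEC DBMS_STATS.SET_TABLE_PREFS(USER, 'CUSTOMERS', 'INCREMENTAL_STALENESS', 'USE_STALE_PERCENT');
EXEC DBMS_STATS.SET_TABLE_PREFS(USER, 'CUSTOMERS', 'STALE_PERCENT', '10');
```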
10. Read-Only Partitions Before Oracle 12c we could set a whole table to read-only to disable DML operations on it. Now we can do the same for individual partitions and sub-partitions.
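A quick sketch against the CUSTOMERS table from the earlier examples:

```sql
-- freeze a historical partition; the rest of the table stays read-write
ALTER TABLE CUSTOMERS MODIFY PARTITION CUSTOMERS_P2015 READ ONLY;
```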
Larry Ellison introduced the first Autonomous Database Cloud with Oracle Database 18c at Oracle OpenWorld 2017, running on Exadata infrastructure. The Oracle Autonomous Database Cloud uses ground-breaking machine learning to eliminate the human labor associated with tuning, patching, updating and maintaining the database, and includes the following capabilities:
1. Self-Driving: Provides continuous adaptive performance tuning based on machine learning. Automatically upgrades and patches itself while running. Automatically applies security updates while running to protect against cyber-attacks.
2. Self-Scaling: Instantly resizes compute and storage without downtime. Cost savings are multiplied because Oracle Autonomous Database Cloud consumes less compute and storage than Amazon, with lower manual administration costs.
3. Self-Repairing: Provides automated protection from downtime. The SLA guarantees 99.995 percent reliability and availability, which reduces costly planned and unplanned downtime to less than 30 minutes per year.
Following are the 10 key points Larry Ellison mentioned at Oracle OpenWorld 2017:
1) This is the most important thing we've done in a long, long time.
2) The automation does everything. We can guarantee availability of 99.995 percent, less than 30 minutes of planned or unplanned downtime.
3) Amazon's best Oracle Database service, RDS, is not truly elastic and automated.
4) Amazon's guarantee doesn't count when you're down for any reason. It doesn't include downtime for maintenance or patching, and certainly doesn't cover downtime if there's a bug.
5) We guarantee your Amazon bill is cut in half – and that will be in your contract.
6) You're automating away the job of database professionals. So we'll see a migration of database skills to focus more on schema design, analytics and setting policies for what's mission-critical. Database pros will have a lot more time to focus on securing data.
7) Autonomous Database Cloud uses dynamic elastic hardware allocation.
8) Autonomous Database Cloud uses features like Exadata, RAC, ASM, Active Data Guard and Flashback Technology to handle system failure, patching, upgrading, regional outages and user error.
9) Database BYOL is introduced with PaaS, and the minimum configuration of 1 OCPU + 1 TB will cost $300 per month, with a 30% discount with a Monthly Dollar Volume Commitment.
10) Oracle Database 18c with the Autonomous cloud will be available for data warehousing from December 2017 and for OLTP from June 2018.
Finally, Larry Ellison compared different workloads on the Oracle and Amazon clouds.
My take on Oracle Database 18c with the Autonomous Database Cloud: It is good that Oracle is introducing features like these on the cloud, and I hope Oracle Cloud lives up to the expectations and the pricing. Oracle is going to give a tough fight to Amazon in the cloud space, and I hope consumers will be the ones who get the benefits (features, service and low pricing).
With this post I am simply sharing a SQL script which I developed to export all packages, procedures, functions and other objects from an Oracle database to individual files on the client machine.
To execute this, follow these simple steps:
1. Save this script in a directory with the name "GenerateExportScript.sql"
2. Open a command prompt or shell
3. Go to that directory
4. Connect to your Oracle schema from SQL*Plus
5. Call the script like @GenerateExportScript.sql
Wait for some time while your code is being exported, and individual files for each Package, Package Body, Function, Procedure, Trigger and other objects are being generated at your client machine.
Here is the script:
-- -----------------------------------------------------------------------------------
-- File Name    : GenerateExportScript.sql
-- Author       : Nimish Garg
-- Description  : Export all code in the schema and generate individual files
-- Call Syntax  : @GenerateExportScript
-- Last Modified: 15/09/2017
-- -----------------------------------------------------------------------------------
set serveroutput on
set termout off
set heading off
set feedback off
set linesize 50
spool ExportCode.sql
declare
    l_ext varchar2(100);
begin
    for c in (select distinct name, type from user_source order by 1)
    loop
        if c.type = 'PROCEDURE' then
            l_ext := '.prc';
        elsif c.type = 'PACKAGE' then
            l_ext := '.pks';
        elsif c.type = 'PACKAGE BODY' then
            l_ext := '.pkb';
        elsif c.type = 'TRIGGER' then
            l_ext := '.trg';
        elsif c.type = 'FUNCTION' then
            l_ext := '.fnc';
        else
            l_ext := '.sql';
        end if;

        -- spool each object's source into its own file
        dbms_output.put_line('spool ' || c.name || l_ext);
        dbms_output.put_line('prompt set define off ');
        dbms_output.put_line('select decode(line,1,''create or replace '', '''' ) || text text from user_source where name = ''' || c.name || ''' and type = ''' || c.type || ''' order by type, line;' );
        dbms_output.put_line('prompt /');
        dbms_output.put_line('prompt set define on');
        dbms_output.put_line('spool off');
    end loop;
end;
/

spool off
set termout on
set feedback on
set heading on
set linesize 100
@ExportCode.sql
There were mainly the following two reasons against foreign keys:
- Validation of foreign keys in the fact table is an overhead, because the ETL process has already verified the referential integrity of the fact tables.
- Foreign keys in fact tables require that dimension tables be loaded first, and only then can the fact tables be loaded. This restricts the ETL process from loading dimension and fact tables in parallel.
Both points above are valid in terms of ETL loads, but without foreign keys the Oracle optimizer misses vital information. Foreign keys enforce data consistency and guarantee that fact tables contain no data other than that in the dimension tables. Foreign keys help the Oracle optimizer generate the best execution plan: they enable the CBO to perform query transformations like "Join Elimination" and "Query Rewrite" on materialized views. Last but not least, foreign keys are a self-documentation tool, and having them enables developers to easily find table relations.
So now the million-dollar question is: does Oracle provide any solution where we can get all the benefits of foreign keys and avoid restrictions like slowing down the ETL load?
Any constraint can be in the following states:
- DEFERRABLE / NOT DEFERRABLE – checking of a deferrable constraint can be postponed until the end of the transaction (COMMIT).
- ENABLE / DISABLE – an enabled constraint ensures that all new data modifications on the table satisfy the condition of the constraint.
- VALIDATE / NOVALIDATE – a validated constraint ensures that all data currently residing in the table satisfies the constraint.
- RELY / NORELY – the RELY state provides a mechanism to tell Oracle that a given constraint is believed to be true, even if it is in the DISABLE and NOVALIDATE state.
RELY constraints in a data warehouse are not used for data validation, but they:
- enable more sophisticated query rewrites for materialized views;
- enable optimizer query transformations (Join Elimination) by the CBO;
- enable tools to retrieve information on table relations directly from Oracle.
Creating a RELY constraint is inexpensive and does not impose any overhead during the ETL load. Note that if a foreign key is in the RELY state, the primary key it refers to must also be in the RELY state.
Now let's come to the practical part. Let's assume we have the following basic star schema, where each dimension table has a primary key defined. Let's modify our tables as per the suggestion of the Oracle Data Warehousing Guide.
Step 1: Modify Primary Keys of all Dimension tables to enable RELY
nimish@garg> alter table d_date modify primary key rely;
Table altered.

nimish@garg> alter table d_product modify primary key rely;
Table altered.

nimish@garg> alter table d_supplier modify primary key rely;
Table altered.
Step 2: Create Foreign Keys on Fact table with RELY disable novalidate
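Assuming a fact table F_SALES keyed on the three dimensions above (the table, column and constraint names are illustrative), the foreign keys can be created like this:

```sql
-- RELY DISABLE NOVALIDATE: no validation overhead during the ETL load,
-- but the optimizer can still trust the relationship for transformations
ALTER TABLE f_sales ADD CONSTRAINT fk_sales_date
    FOREIGN KEY (date_id) REFERENCES d_date (date_id)
    RELY DISABLE NOVALIDATE;

ALTER TABLE f_sales ADD CONSTRAINT fk_sales_product
    FOREIGN KEY (product_id) REFERENCES d_product (product_id)
    RELY DISABLE NOVALIDATE;

ALTER TABLE f_sales ADD CONSTRAINT fk_sales_supplier
    FOREIGN KEY (supplier_id) REFERENCES d_supplier (supplier_id)
    RELY DISABLE NOVALIDATE;
```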
The last thing to note here is that, to enable Query Rewrite and Join Elimination, we need to set query_rewrite_enabled = TRUE and query_rewrite_integrity = TRUSTED at the session or system level. To conclude this article, I must say that RELY constraints are a very powerful and unique feature of the Oracle Database; they have been there since Oracle 8i (or before) but are still largely unknown. RELY constraints enable us to get all the benefits of foreign keys while avoiding the restrictions that slow down ETL processing.
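For reference, the session-level settings mentioned above are:

```sql
-- allow rewrites/transformations that trust RELY (unvalidated) constraints
ALTER SESSION SET query_rewrite_enabled = TRUE;
ALTER SESSION SET query_rewrite_integrity = TRUSTED;
```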
The long weekend is always fun, especially when some Oracle event is scheduled. Last Sunday I attended the awesome "OTNYatra", an event where we can gain knowledge from top experts in the world, and also connect with Oracle Database professionals to discuss problems, doubts and solutions.
One of the discussions I was pulled into was: "Why should we create primary and foreign keys (constraints) in the database when data consistency is guaranteed by the application?" My answer was simple: it allows the Oracle optimizer (CBO) to have better knowledge of the data and generate optimal execution plans; plus, never trust the application [developer] :)
Without going into the details of the discussions, let us run and test some SQL on my favourite tables EMP and DEPT, which have the following referential integrity constraints:
- Primary key on DEPT.DEPTNO
- Primary key on EMP.EMPNO
- Foreign key on EMP.DEPTNO referencing DEPT.DEPTNO
Example 1: Get data of all Employees whose department is listed in DEPT table
nimish@garg> select e.* from emp e, dept d where e.deptno = d.deptno;

13 rows selected.

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("E"."DEPTNO" IS NOT NULL)
As you can see here, Oracle simply removed the DEPT table from execution and added the predicate "e.deptno is not null", because Oracle knows that the only allowed values in E.DEPTNO are NULL or values from DEPT.DEPTNO.
Example 2: Get data of all employees whose department is listed in the DEPT table; here I tried my SQL with an IN clause.
nimish@garg> select e.* from emp e where e.deptno in (select d.deptno from dept d);

13 rows selected.

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("E"."DEPTNO" IS NOT NULL)
Same result as Example 1: the Oracle optimizer removed the DEPT table from the execution plan and added "e.deptno is not null".
For my last example I went one step further and added a NOT NULL constraint on EMP.DEPTNO to give the Oracle optimizer even more information. Example 3: Get data of all employees whose department is not in the DEPT table.
nimish@garg> alter table emp modify deptno not null;
Table altered.

nimish@garg> select e.* from emp e where e.deptno not in (select d.deptno from dept d);
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 3896240783

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    38 |     0   (0)|          |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| EMP  |    13 |   494 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(NULL IS NOT NULL)
In the above example, the Oracle optimizer knows that there cannot be any row in the EMP table whose DEPTNO is not present in the DEPT table, and also that EMP.DEPTNO can't be null. So it removed the DEPT table completely, and the always-false filter NULL IS NOT NULL means the full scan of EMP is never even executed.
The three examples we saw are actually examples of Oracle's "Join Elimination" transformation, a technique of removing one or more tables from execution without changing the output. There must be a primary key and foreign key relationship defined between the tables for Oracle to perform the "Join Elimination" transformation.
With this post I am trying to explain the Oracle query transformation feature called Complex View Merging: how this feature broke one of our SQLs in Oracle Database 12c that was running fine in Oracle Database 11g R2, and what we did to fix the issue. Before going directly to the issue, let us read some definitions (from the Oracle documentation) of a few heavy terms which some readers might not know:
Query Transformation: For some statements, the Oracle Database optimizer (CBO) determines whether it is advantageous to rewrite the original SQL statement into a semantically equivalent SQL statement with a lower cost. When a viable alternative exists, the optimizer calculates the cost of the alternatives separately and chooses the lowest-cost one. This process is called query transformation.
Complex View Merging: In complex view merging, the optimizer merges views containing GROUP BY and DISTINCT views. Like simple view merging, complex merging enables the optimizer to consider additional join orders and access paths. The optimizer can delay evaluation of GROUP BY or DISTINCT operations until after it has evaluated the joins. Delaying these operations can improve or worsen performance depending on the data characteristics. If the joins use filters, then delaying the operation until after joins can reduce the data set on which the operation is to be performed. Evaluating the operation early can reduce the amount of data to be processed by subsequent joins, or the joins could increase the amount of data to be processed by the operation. The optimizer uses cost to evaluate view merging and merges the view only when it is the lower cost option.
Now that we know what Query Transformation and Complex View Merging are, let's look at the issue we faced on Oracle 12c.
nimish@garg> SELECT DISTINCT
  2         Employee_Code Integration_Id,
  3         Employee_Code Leaf_Node_Integration_Id,
  4         Fst_Name,
  5         Last_Name,
  6         Wid Emp_Wid,
  7         Wid_Skey Emp_Wid_Skey,
  8         Job_Title,
  9         Department_Name,
 10         Level_Depth,
 11         Role_Nm,
 12         Level1_Depth,
 13         CASE
 14            WHEN Level1_Depth = 1
 15            THEN
 16               (SELECT Res_Static.Integration_Id
 17                  FROM Wc_Res_Static_D Res_Static
 18                 WHERE Res_Static.R_Type = 'LEVEL_DEPTH'
 19                   AND Res_Static.Level_Depth = Level1_Depth)
 20         END Level1_Role_Nm,
 21         Emp_Hire_Dt
 22    FROM ( SELECT Employee_Code,
 23                  MAX (Fst_Name) Fst_Name,
 24                  MAX (Last_Name) Last_Name,
 25                  MAX (Wid) Wid,
 26                  MAX (Wid_Skey) Wid_Skey,
 27                  MAX (Job_Title) Job_Title,
 28                  MAX (Department_Name) Department_Name,
 29                  MAX (Level_Depth) Level_Depth,
 30                  MAX (Job_Role_Nm) Role_Nm,
 31                  MAX (Emp_Hire_Dt) Emp_Hire_Dt,
 32                  1 Level1_Depth
 33             FROM ( SELECT Employee_Code,
 34                           Wid,
 35                           Wid_Skey,
 36                           Job_Title,
 37                           Department_Name,
 38                           Fst_Name,
 39                           Last_Name,
 40                           Level_Depth,
 41                           Job_Role_Nm,
 42                           Emp_Hire_Dt
 43                      FROM ( SELECT Employee_Code,
 44                                    Level_Depth,
 45                                    Emp_Wid Wid,
 46                                    Emp_Wid_Skey Wid_Skey,
 47                                    Job_Title Job_Title,
 48                                    Department_Name Department_Name,
 49                                    Fst_Name Fst_Name,
 50                                    Last_Name Last_Name,
 51                                    Job_Role_Nm Job_Role_Nm,
 52                                    Emp_Hire_Dt,
 53                                    LTRIM (SYS_CONNECT_BY_PATH (Employee_Code,'*'),'*') Emp_Hierarchy
 54                               FROM Res_Emp_Hier ResearchTemp
 55                              WHERE Employee_Code = 64539
 56                              START WITH Manager_Code IS NULL
 57                              CONNECT BY NOCYCLE PRIOR Employee_Code = Manager_Code
 58                           )
 59                    CONNECT BY LEVEL <= REGEXP_COUNT (Emp_Hierarchy,'[^*]+',1)
 60                    ORDER BY LEVEL
 61                  )
 62            GROUP BY Employee_Code
 63         );
           AND Res_Static.Level_Depth = Level1_Depth)
                                        *
ERROR at line 19:
ORA-00979: not a GROUP BY expression
As we can see in the above example, SQL*Plus is throwing the ORA-00979 exception at line 19, which is at the outer level of our GROUP BY SQL. This means Oracle 12c R1 is performing the Complex View Merging query transformation and thereby making the SQL invalid. I also tested the same SQL on an Oracle 11g R2 server, where it was working fine.
So to fix the issue I added the NO_QUERY_TRANSFORMATION hint to the SQL. The NO_QUERY_TRANSFORMATION hint instructs the optimizer to skip all query transformations, including but not limited to OR-expansion, view merging, subquery unnesting, star transformation and materialized view rewrite. After the fix I executed the SQL and it started working fine.
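As a minimal sketch of the placement (using the EMP table from earlier for illustration, not the original production SQL), the hint goes in the outermost query block:

```sql
-- disables all cost-based and heuristic query transformations
-- for this statement, so the inline GROUP BY view is not merged
SELECT /*+ NO_QUERY_TRANSFORMATION */ deptno, cnt
  FROM (SELECT deptno, COUNT(*) cnt
          FROM emp
         GROUP BY deptno);
```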
I hope you have enjoyed reading this article. You might have learnt something new, or you may have some suggestions. Please do post your feedback and suggestions in the comment box.
Oracle SQL Developer is a free integrated development environment that simplifies the development and management of Oracle Database in both traditional and cloud deployments. Oracle SQL Developer is my favourite IDE, even over Toad and PL/SQL Developer, for the following main reasons:
1. Oracle SQL Developer provides all the features any developer requires
2. Oracle SQL Developer has a DBA console for managing and monitoring the database
3. And yes, Oracle SQL Developer is free :)
Now coming to the topic -
AUTOTRACE in Oracle SQL Developer comes with awesome features like hotspots and comparing different plans and statistics. We need the PLUSTRACE role to generate AUTOTRACE output in SQL*Plus, but what if we want to generate AUTOTRACE in Oracle SQL Developer? Even if your schema has the PLUSTRACE role, Oracle SQL Developer does not execute AUTOTRACE, and the command fails with "Insufficient Privileges" as below.
As we can see in the above dialog box, Oracle SQL Developer asks for the following grants for AUTOTRACE: - SELECT_CATALOG_ROLE - SELECT ANY DICTIONARY
If you request these privileges, your DBA may deny them, as they are very powerful.
Surprisingly, Oracle SQL Developer does not actually need the above privileges to generate AUTOTRACE; it only needs SELECT on a handful of V$ views. You can request your DBA to grant the following privileges for generating AUTOTRACE in Oracle SQL Developer:
create role osd_plustrace;
grant select on v_$sesstat to osd_plustrace;
grant select on v_$statname to osd_plustrace;
grant select on v_$mystat to osd_plustrace;
grant select on v_$session to osd_plustrace;
grant select on v_$sql_plan to osd_plustrace;
grant select on v_$sql_plan_statistics to osd_plustrace;
grant select on v_$sql to osd_plustrace;
grant osd_plustrace to dba with admin option;
grant osd_plustrace to nimish;
Here I have created the "osd_plustrace" role (where "osd" stands for Oracle SQL Developer) and granted all the required privileges to it. The DBA can later grant the "osd_plustrace" role to any schema which needs to generate AUTOTRACE reports from Oracle SQL Developer.
Last week we were discussing the statistics-gathering strategy for our data warehouse system, and one interesting question was raised: what happens to table statistics when we truncate a table in the database? Do the statistics get deleted with the TRUNCATE command, or are they maintained?
Personally, I was hoping that the statistics of the table would be deleted with the TRUNCATE command; after all, TRUNCATE is a DDL command. But I had my doubts too, and I had never tested it before. In my view, it is always better to run and test the scenario than to live on assumptions. So here is what I did and found:
analytic_lat@bidatadv> select NUM_ROWS, SAMPLE_SIZE, to_char(LAST_ANALYZED,'dd-mon-yyyy hh24:mi:ss'), USER_STATS
  2  from user_tables where table_name = 'EMP';

  NUM_ROWS SAMPLE_SIZE TO_CHAR(LAST_ANALYZED,'DD- USE
---------- ----------- -------------------------- ---
        14          14 31-mar-2017 02:17:08       NO
5. The statistics look good in USER_TABLES. Now let's TRUNCATE the table and see what happens to the statistics.
nimish@garg> TRUNCATE TABLE EMP;
Table truncated.

nimish@garg> select NUM_ROWS, SAMPLE_SIZE, to_char(LAST_ANALYZED,'dd-mon-yyyy hh24:mi:ss'), USER_STATS
  2  from user_tables where table_name = 'EMP';

  NUM_ROWS SAMPLE_SIZE TO_CHAR(LAST_ANALYZED,'DD- USE
---------- ----------- -------------------------- ---
        14          14 31-mar-2017 02:17:08       NO
6. Surprisingly, the table statistics were not updated or removed even after the TRUNCATE command. I was expecting this behaviour with DELETE (DML) but not with TRUNCATE (DDL). Let's gather the statistics of the EMP table one final time.
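A typical call for this (with parameters left at their defaults) is:

```sql
-- re-gather table statistics for EMP in the current schema
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'EMP');
```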
Elapsed: 00:00:00.96

nimish@garg> select NUM_ROWS, SAMPLE_SIZE, to_char(LAST_ANALYZED,'dd-mon-yyyy hh24:mi:ss'), USER_STATS
  2  from user_tables where table_name = 'EMP';

  NUM_ROWS SAMPLE_SIZE TO_CHAR(LAST_ANALYZED,'DD- USE
---------- ----------- -------------------------- ---
         0           0 31-mar-2017 02:20:40       NO
Key takeaways:
1. Oracle does not remove table statistics after a TRUNCATE command.
2. At the start, before gathering statistics, the table statistics were NULL; after truncating and re-gathering they are zero (0).
3. Most important one: always run and test your doubt, no matter how small. Don't live on assumptions :)