
Some time ago we had an internal discussion at Pythian about how LGWR writes are handled on an ASM disk group configured with NORMAL or HIGH redundancy. Does LGWR wait for write acknowledgment from the writes on both the primary and secondary (mirror) extents? Opinions were divided.

If you think about it, it’s natural to wait for the write operations to complete on both primary and secondary extents; otherwise, what’s the point of having redundancy in the first place?

Imagine what would happen if the disk where the first write landed failed after the system had already received confirmation of the write, without the data having been written to all fail groups.

Test case

I have two partitioned disks available, /dev/sdd1 and /dev/sde1.

ll /dev/sd*
...
brw-rw---- 1 root disk 8, 48 May 10 08:47 /dev/sdd
brw-rw---- 1 root disk 8, 49 May 10 08:47 /dev/sdd1
brw-rw---- 1 root disk 8, 64 May 10 08:47 /dev/sde
brw-rw---- 1 root disk 8, 65 May 10 08:47 /dev/sde1

Let’s create a device on top of /dev/sdd1 that uses dm-delay to delay writes by 5 seconds.

./delay_writes.sh /dev/sdd1   # create delay device with dm-delay

For the I/O delay I am using the delay target from dmsetup:

delay: Delays reads and/or writes to different devices. Useful for testing.

More information can be found at https://www.kernel.org/doc/Documentation/device-mapper/delay.txt
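The delay_writes.sh wrapper itself is not shown here, so below is a minimal sketch of what such a script might do, based on the table format documented in the kernel's delay.txt. The function name, device name and sector count are illustrative, not taken from the actual script.

```shell
#!/bin/sh
# Build a device-mapper "delay" table: reads pass through with 0 ms delay,
# writes are delayed by the given number of milliseconds.
# Table format (from Documentation/device-mapper/delay.txt):
#   <start> <length> delay <read_dev> <offset> <read_delay_ms> \
#                          [<write_dev> <offset> <write_delay_ms>]
build_delay_table() {
    dev=$1
    sectors=$2
    write_delay_ms=$3
    echo "0 ${sectors} delay ${dev} 0 0 ${dev} 0 ${write_delay_ms}"
}

# Real usage (requires root and an existing block device):
#   sectors=$(blockdev --getsz /dev/sdd1)
#   build_delay_table /dev/sdd1 "$sectors" 5000 | dmsetup create d1-delay
build_delay_table /dev/sdd1 2097152 5000
```

With a table like this loaded via dmsetup create, the delayed device appears under /dev/mapper, matching the listing below.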

The new device will be created in /dev/mapper directory.

ll /dev/mapper/d1-delay
lrwxrwxrwx 1 root root 7 May 10 08:48 /dev/mapper/d1-delay -> ../dm-8

Next, create the ASM disks. I am using ASMLib: one ASM disk on the newly created device with delayed I/O, and the other on the /dev/sde1 device.

/usr/sbin/oracleasm createdisk TEST_D1_DELAYED /dev/mapper/d1-delay
Writing disk header: done
Instantiating disk: done

/usr/sbin/oracleasm createdisk TEST_D2_NORMAL /dev/sde1
Writing disk header: done
Instantiating disk: done

Create the ASM diskgroup.

SQL> create diskgroup DELAYED_IO_DG normal redundancy disk 'ORCL:TEST_D1_DELAYED', 'ORCL:TEST_D2_NORMAL';

Diskgroup created.

First, let’s confirm that a log switch is fast.

SQL> set lines 200
col member for a60
select vl.group#, vl.status, vlf.member
from v$log vl, v$logfile vlf
where vl.group# = vlf.group#
order by vl.group#;

GROUP# STATUS       MEMBER
---------- ---------------- ------------------------------------------------------------
1 CURRENT       /oracle/app/oracle/oradata/oraee/redo01.log
2 INACTIVE      /oracle/app/oracle/oradata/oraee/redo02.log
3 INACTIVE      /oracle/app/oracle/oradata/oraee/redo03.log

Elapsed: 00:00:00.01
SQL>
SQL> alter system archive log current;

System altered.

Elapsed: 00:00:00.08
SQL>
SQL> r
1* alter system archive log current

System altered.

Elapsed: 00:00:00.05
SQL> r
1* alter system archive log current

System altered.

Elapsed: 00:00:00.46
SQL>

Add a logfile group to the new disk group.

SQL> ALTER DATABASE ADD LOGFILE ('+DELAYED_IO_DG') SIZE 50M;

Database altered.

Elapsed: 00:02:23.26 --> TOOK LONG DUE TO DELAYED IO
SQL>

SQL> set lines 200
col member for a60
select vl.group#, vl.status, vlf.member
from v$log vl, v$logfile vlf
where vl.group# = vlf.group#
order by vl.group#;

GROUP# STATUS       MEMBER
---------- ---------------- ------------------------------------------------------------
1 CURRENT       /oracle/app/oracle/oradata/oraee/redo01.log
2 INACTIVE      /oracle/app/oracle/oradata/oraee/redo02.log
3 INACTIVE      /oracle/app/oracle/oradata/oraee/redo03.log
8 UNUSED        +DELAYED_IO_DG/oraee/onlinelog/group_8.256.975747739

Elapsed: 00:00:00.01
SQL>

Switch logfiles so the new logfile group becomes current.

SQL> alter system archive log current;

System altered.

Elapsed: 00:00:05.15 --> WRITES TO REDO LOGS LOCATED ON DISKGROUP WITH DELAYED IO

SQL> set lines 200
col member for a60
select vl.group#, vl.status, vlf.member
from v$log vl, v$logfile vlf
where vl.group# = vlf.group#
order by vl.group#;

GROUP# STATUS       MEMBER
---------- ---------------- ------------------------------------------------------------
1 ACTIVE        /oracle/app/oracle/oradata/oraee/redo01.log
2 INACTIVE      /oracle/app/oracle/oradata/oraee/redo02.log
3 INACTIVE      /oracle/app/oracle/oradata/oraee/redo03.log
8 CURRENT       +DELAYED_IO_DG/oraee/onlinelog/group_8.256.975747739

Elapsed: 00:00:00.00
SQL>

Next, let’s test COMMIT.

SQL>
SQL> create table t1(id number);

Table created.

SQL>
SQL> set timi on
SQL>

--CONFIRM CURRENT REDO LOG GROUP IS ON DISKGROUP WITH DELAYED IO
SQL> r
1 select vl.group#, vl.status, vlf.member
2 from v$log vl, v$logfile vlf
3 where vl.group# = vlf.group#
4* order by vl.group#

GROUP# STATUS       MEMBER
---------- ---------------- ------------------------------------------------------------
1 INACTIVE      /oracle/app/oracle/oradata/oraee/redo01.log
2 INACTIVE      /oracle/app/oracle/oradata/oraee/redo02.log
3 INACTIVE      /oracle/app/oracle/oradata/oraee/redo03.log
8 CURRENT       +DELAYED_IO_DG/oraee/onlinelog/group_8.256.975747739

SQL> insert into t1 values (10);

1 row created.

Elapsed: 00:00:00.00
SQL> commit;

Commit complete.

Elapsed: 00:00:05.98

As you can see, COMMIT took almost 6 seconds to complete: LGWR waited for the write on the delayed mirror extent as well. In other words, redo writes are acknowledged only after they complete on all fail groups, which answers the original question.

The post Oracle Log Writer (LGWR) Writes Behavior On ASM – NORMAL And HIGH Redundancy Disk Groups appeared first on Ivica Arsov: Database Performance Tuning | System Performance Troubleshooting | Database Management.


Recently I wrote a blog post about an Ansible module I created for Oracle patching. The post is published on Pythian’s blog.

You can read more at https://blog.pythian.com/automating-oracle-patching-with-an-ansible-module

The idea of the playbook and the module is to automate the whole process of patching Oracle binaries with PSUs, BPs, RUs, etc.

You can find the playbook and module on my GitHub page at https://github.com/iarsov/ansible-orapatch

README is available at https://github.com/iarsov/ansible-orapatch/blob/master/README.md

Expected actions performed by the module:

  • The module will identify which database instances, listeners and ASM instances are running.
  • The module will shut down all listeners and database instances, but only if the home from which the services are running is being patched.
  • The module will start up all previously stopped services after it completes patching.*
  • The module will skip databases which are not in READ WRITE state.**
  • The module will identify whether a given database is in STANDBY or PRIMARY role.***
  • The module always patches GI homes with opatchauto.
  • The module always patches DB homes with opatch.
  • The module will restart the databases and listeners multiple times during the process.

* Assuming no error occurred and the module did not fail during the patching process.
** Even if those databases are specified for patching.
*** Databases in STANDBY role are not patched.

The post Automating Oracle Patching With Ansible appeared first on The Ivica Arsov Blog.


Next week the UKOUG Tech 2018 annual conference will take place in Liverpool, UK.

I will deliver two presentations:

Migrating your DB to the Oracle Cloud – Exploring the Options, together with Simon Pane
Details: Systems – 7, 05/12/2018, 14:25 – 15:10

Tracing Parallel Execution
Details: Database 1 – 1A, 05/12/2018, 11:40 – 12:25

Regarding the rest of the week, my session choices are:

Monday, 3rd December

Time 09:00
The Optimizer & the Road to the Latest Generation of the Oracle Database, Nigel Bayliss – Oracle

Time 11:20
Raw Analysis of a Block, Gabriel Alonso – Accenture

Time 11:50
Test Drive Automatic Index Creation in Oracle Autonomous Database, Maria Colgan – Oracle

Time 13:35
Struggling with Statistics, Jonathan Lewis – JL Computer Consultancy

Time 14:30
What’s New in the Oracle Database and What’s Coming…, Dominic Giles & Maria Colgan – Oracle

Time 16:15
How Autonomous is the Oracle Autonomous Data Warehouse?, Christian Antognini – Trivadis

Time 17:10
JavaScript for Database Developers, Andrew Clarke – Result Set

Tuesday, 4th December

Time 09:00
The Basics of Understanding Execution Plans, Jonathan Lewis – JL Computer Consultancy

Time 09:55
Using Machine Learning to Debug Oracle RAC Issues, Anil Nair – Oracle

Time 12:35
Identifying Performance Problems in a Multitenant Environment, Christian Antognini – Trivadis

Time 14:25
Single row vs the Array Interface vs Parallelism, Mark Ashdown – Oracle

Time 15:20
Data Guard Roundtable, Joel Goodman – Oracle

Time 16:35
Using JSON in Oracle, Neil Chandler – Chandler Systems

Wednesday, 5th December

Time 09:00
Successful Star Schemas, Michael Hallas – Oracle

Time 09:55
RAC & Grid Infrastructure Roundtable, Joel Goodman – Oracle

Time 11:40
Tracing Parallel Execution, Ivica Arsov – Pythian (presenting)

Time 12:35
Advanced RAC Programming Features, Martin Bach – Accenture Enkitec Group

Time 14:25
Migrating your DB to the Oracle Cloud – Exploring the Options, Simon Pane – Pythian (presenting)

Time 15:20
Big Data SQL as Exadata for Hadoop. Oracle Power on top of Your Big Data Solutions, Gleb Otochkin – Pythian

The post UKOUG Tech 2018 – Liverpool, UK appeared first on The Ivica Arsov Blog.


This is a short blog post to give you an idea of how you can define your own custom date and time format for the “ls -l” command in ASMCMD. If you use ASMCMD often, you might have been (at least once) in a situation where you wanted to format the date and time of the “ls -l” output, but could not find a way to do it.

In the ASMCMD utility, the date and time format for the “ls -l” command is hardcoded. The possible formats are “MON DD YYYY” and “MON DD HH24:MI:SS”, chosen based on file age. With RMAN we can use the NLS_DATE_FORMAT environment variable to define our desired format, but that is not the case with ASMCMD.
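For contrast, this is how that works with RMAN; the format string here is just an example value:

```shell
# RMAN honors NLS_DATE_FORMAT from the client environment
# (e.g. for dates shown by LIST BACKUP); ASMCMD ignores it.
export NLS_DATE_FORMAT="dd.mm.yyyy hh24:mi:ss"

# rman target /
# RMAN> list backup summary;   # completion times now use the format above
```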

ASMCMD is a Perl utility that provides navigation of directories and files within ASM disk groups. Its modules are stored in the $CRS_HOME/lib directory; the base module is asmcmdbase.pm.

The subroutine for listing files, located in the base module, is asmcmdbase_ls_process_file. It contains the logic for how file dates are printed: if the file is older than six months, the format used is “MON DD YYYY”; if it is newer, the format is “MON DD HH24:MI:SS”. This subroutine is called (executed) for each file that ASMCMD prints.
The hash entries $entry_info_ref->{'mod_date'} and $entry_info_ref->{'mod_time'} hold the dates in the two hardcoded formats: MOD_DATE is used if the file is older than six months, MOD_TIME if it is newer.

...

sub asmcmdbase_ls_process_file
{
  my ($dbh, $entry_info_ref, $args_ref, $cur_date, $file_info) = @_;

  my ($related_alias); # The user or system alias for its corresponding
                       # system or user alias, respectively.

  # Calculate dates only if we have file info from v$asm_file.
  if ($file_info)
  {
    # Use separate date format if date older than half a year.
    if (($cur_date - $entry_info_ref->{'julian_date'}) >= $ASMCMDBASE_SIXMONTH)
    { # If older than six months, use 'MON DD YYYY' format. #
      $entry_info_ref->{'date_print'} = $entry_info_ref->{'mod_date'};
    }
    else
    { # Otherwise, use 'MON DD HH24:MI:SS' format. #
      $entry_info_ref->{'date_print'} = $entry_info_ref->{'mod_time'};
    }
  }

...

To understand how $entry_info_ref->{'mod_date'} and $entry_info_ref->{'mod_time'} are populated, we need to look at the asmcmdshare.pm module, which is the ASM command line interface shared-functionality module.

Within this module, in the subroutine asmcmdshare_get_file, we can find the actual query used to get file information from V$ASM_FILE. We can see that the date formats are hardcoded within the select statement.

Controlling the ASMCMD date/time format with an environment variable

By now, you have probably figured out what needs to be done to get a custom format in place. For example, to control the ASMCMD date format with an OS environment variable, you need only a few lines of code.

Let’s assume we want to control the format with an NLS_DATE_FORMAT_XX environment variable.

  1. Add a condition with a default format to be used when the environment variable is not set; in our case that is ‘DD.MM.YYYY HH24:MI:SS’.
  2. Add an additional column to the query to get the date formatted according to the environment variable.
...

if (!$ENV{'NLS_DATE_FORMAT_XX'}) {
  $ENV{'NLS_DATE_FORMAT_XX'} = "dd.mm.yyyy hh24:mi:ss";
}

$qry = 'select group_number, file_number, incarnation, block_size, ' .
       'blocks, bytes, space, type, redundancy, striped, creation_date, ' .
       'user_number, usergroup_number, permissions, ' .
       'to_char(modification_date, \'MON DD HH24:MI:SS\') "MOD_TIME", ' .
       'to_char(modification_date, \''.$ENV{'NLS_DATE_FORMAT_XX'}.'\') "MOD_TIME_XX", ' .
       'to_char(modification_date, \'MON DD YYYY\') "MOD_DATE", ' .
       'to_char(modification_date, \'J HH24 MI SS\') "JULIAN_TIME", ' .
       'to_char(modification_date, \'J\') "JULIAN_DATE" ' .
       'from v$asm_file where compound_index = ? ORDER BY ' .
       'GROUP_NUMBER, FILE_NUMBER, USER_NUMBER, USERGROUP_NUMBER '
       ;

...

Below the select statement, you need to fetch the new column into $file_info_ref under a key of your own.

...

$file_info_ref->{'bytes'} = $row->{'BYTES'};
$file_info_ref->{'space'} = $row->{'SPACE'};
$file_info_ref->{'type'} = $row->{'TYPE'};
$file_info_ref->{'redundancy'} = $row->{'REDUNDANCY'};
$file_info_ref->{'striped'} = $row->{'STRIPED'};
$file_info_ref->{'creation_date'} = $row->{'CREATION_DATE'};
$file_info_ref->{'mod_time'} = $row->{'MOD_TIME'};
$file_info_ref->{'mod_time_xx'} = $row->{'MOD_TIME_XX'};
$file_info_ref->{'julian_time'} = $row->{'JULIAN_TIME'};
$file_info_ref->{'julian_date'} = $row->{'JULIAN_DATE'};
$file_info_ref->{'user_number'} = $row->{'USER_NUMBER'};

...

As a final step, in the main ASMCMD module’s subroutine asmcmdbase_ls_process_file, you need to overwrite $entry_info_ref->{'date_print'} so it is populated from the previously defined custom key, which holds the date formatted as per NLS_DATE_FORMAT_XX.

...

# Calculate dates only if we have file info from v$asm_file.
if ($file_info)
{
  # Use separate date format if date older than half a year.
  if (($cur_date - $entry_info_ref->{'julian_date'}) >= $ASMCMDBASE_SIXMONTH)
  { # If older than six months, use 'MON DD YYYY' format. #
    $entry_info_ref->{'date_print'} = $entry_info_ref->{'mod_date'};
  }
  else
  { # Otherwise, use 'MON DD HH24:MI:SS' format. #
    $entry_info_ref->{'date_print'} = $entry_info_ref->{'mod_time'};
  }

  # Overwrite date_print with the custom format.
  $entry_info_ref->{'date_print'} = $entry_info_ref->{'mod_time_xx'};
}

...

Now you can just do the following.

[oracle@host01 lib]$ export NLS_DATE_FORMAT_XX="yyyy.mm.dd hh24:mi:ss"
[oracle@host01 lib]$
[oracle@host01 lib]$ asmcmd
ASMCMD> ls -l data/orcl/spfileorcl.ora
Type Redund Striped Time Sys Name
PARAMETERFILE UNPROT COARSE 2018.03.09 02:00:00 N spfileorcl.ora => +DATA/ORCL/PARAMETERFILE/spfile.267.965812191
ASMCMD>
ASMCMD>
ASMCMD> exit
[oracle@host01 lib]$
[oracle@host01 lib]$ export NLS_DATE_FORMAT_XX="dd.mm.yyyy hh24:mi:ss"
[oracle@host01 lib]$
[oracle@host01 lib]$ asmcmd
ASMCMD>
ASMCMD> ls -l data/orcl/spfileorcl.ora
Type Redund Striped Time Sys Name
PARAMETERFILE UNPROT COARSE 09.03.2018 02:00:00 N spfileorcl.ora => +DATA/ORCL/PARAMETERFILE/spfile.267.965812191

Use it at your own risk.

The post Date and Time Format in ASMCMD for File Listing appeared first on The Ivica Arsov Blog.


Like last year, this year is expected to be busy.

Below is the list of conferences where you can (potentially) meet me, drink beer and have fun. I plan to attend all of the conferences listed. The green (bold) color indicates that my abstracts have been selected.

If you’re interested in more details about the talks and conferences, check the Professional Activities page.

May – UKOUG Northern Technology Summit, Leeds – UK
June – AOUG, Vienna – Austria
June – NLOUG, Amersfoort – Netherlands
September – POUG, Sopot – Poland
October – SIOUG, Portoroz – Slovenia
October – HROUG, Rovinj – Croatia
October – Oracle Open World, San Francisco – USA
November – BGOUG, Bulgaria
December – UKOUG, Liverpool – UK

The post Public Appearances 2018 appeared first on The Ivica Arsov Blog.

The Ivica Arsov Blog by Ivica Arsov - 10M ago

Next week I will attend the UKOUG conference in Birmingham, UK. It will be my first time attending this conference, and my first time in the UK. I have a 115-minute masterclass presentation on “Parallel Execution With Oracle Database 12c”, starting at 11:35 on Monday, December 4. I will touch on various areas where parallel execution can be used. I expect interesting discussions and look forward to seeing you at the conference.

This is my planned agenda for the talk:

  • Parallel execution basics
  • Parallel execution administration
  • Distribution methods (including bloom filters)
  • Auto DoP
  • In-memory parallel execution
  • Blocking and non-blocking operations
  • Tracing parallel execution

You can find more about the presentation abstract here.

Also, quite a few people from Pythian will be attending the conference. You can find details of the other Pythian sessions at https://blog.pythian.com/join-pythian-ukoug-2017

See you next week.

The post UKOUG Tech17 appeared first on The Ivica Arsov Blog.


Today I encountered an interesting bug which I believe is related to the TEMP TABLE TRANSFORMATION operation.

I was able to reproduce it on 11.2.0.3, 11.2.0.4 and 12.1.0.2, but not on 12.2.0.1 (it looks like it has been fixed). I did not find any related MOS note or bug for this issue; if you are aware of one, please let me know.

Let’s observe the following statement.

create table t as select * from all_objects;

with subq as
(select /*+ materialize */ t1.* from t t1, t t2, t t3)
select subq.* from subq;

If you press CTRL-C during statement execution to stop (break) it, the session reports “ORA-01013: user requested cancel of current operation”, which is fine. However, the bug is that this error is also written to the alert.log and a trace file is generated.

*** 2017-04-13 15:45:20.208
ORA-01013: user requested cancel of current operation
Dump of memory from 0x0000000152981E18 to 0x0000000152981E74
152981E10                   68746977 62757320          [with sub]
152981E20 73612071 6573280A 7463656C 2B2A2F20  [q as.(select /*+]
152981E30 74616D20 61697265 657A696C 202F2A20  [ materialize */ ]
152981E40 2A2E3174 6F726620 2074206D 202C3174  [t1.* from t t1, ]
152981E50 32742074 2074202C 0A293374 656C6573  [t t2, t t3).sele]
152981E60 73207463 2E716275 7266202A 73206D6F  [ct subq.* from s]
152981E70 00716275                             [ubq.]

This happens for any error that occurs during execution of the query.

The problem appears to be in the TEMP TABLE TRANSFORMATION operation, probably a code defect where an exception handler is missing, so every error gets traced. Note that the error has to occur during query execution, not during the fetch phase.

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name                      | Rows  | Bytes | Cost (%CPU)| Time      |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                           |  1016T|   142P|  9320G  (1)| 999:59:59 |
|   1 |  TEMP TABLE TRANSFORMATION      |                           |       |       |            |           |
|   2 |   LOAD AS SELECT                | SYS_TEMP_0FD9D6615_4AAABB |       |       |            |           |
|   3 |    MERGE JOIN CARTESIAN         |                           |  1016T|   142P|  3402G  (1)| 999:59:59 |
|   4 |     MERGE JOIN CARTESIAN        |                           |    10G|  1487G|    33M  (1)| 112:47:16 |
|   5 |      TABLE ACCESS FULL          | T                         |   100K|    15M|    338  (1)|  00:00:05 |
|   6 |      BUFFER SORT                |                           |   100K|       |    33M  (1)| 112:47:12 |
|   7 |       TABLE ACCESS FULL         | T                         |   100K|       |    336  (0)|  00:00:05 |
|   8 |     BUFFER SORT                 |                           |   100K|       |  3402G  (1)| 999:59:59 |
|   9 |      TABLE ACCESS FULL          | T                         |   100K|       |    336  (0)|  00:00:05 |
|  10 |   VIEW                          |                           |  1016T|   142P|  5917G  (1)| 999:59:59 |
|  11 |    TABLE ACCESS FULL            | SYS_TEMP_0FD9D6615_4AAABB |  1016T|   142P|  5917G  (1)| 999:59:59 |
---------------------------------------------------------------------------------------------------------------

In my case, the monitoring system raised an alert for “ORA-01722: invalid number” in the alert.log file.

The post TEMP Table Transformation Errors appeared first on The Ivica Arsov Blog.


The Oracle Call Interface (OCI) is a set of APIs that provides interaction with an Oracle database. It supports all phases of SQL statement execution.

If you ever wondered how to trace OCI function calls, you can do it by setting the EVENT_10842 environment variable.

EVENT_10842="server=<>;user=<>;stmt=<>;level=<>"

Additionally, you can have the trace files generated in a specific directory by setting ORA_CLIENTTRACE_DIR=directory_path.
For ORA_CLIENTTRACE_DIR to take effect, the Automatic Diagnostic Repository (ADR) has to be disabled. You can disable it by setting DIAG_ADR_ENABLED=off in your sqlnet.ora.

A trace file is generated for each connection, named ora_skgu_<pid>.trc, where pid is the process ID of the connection on the client system.

I will demonstrate example traces of a DML statement, a SELECT statement, and a statement that throws an error. The OCI function calls vary depending on the statement executed; for example, for a SELECT statement we should see OCI fetch calls.
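As a concrete sketch, the client environment can be prepared along these lines before starting the application. The specific keyword values for EVENT_10842 are illustrative placeholders, not taken from the post:

```shell
# Placeholder values: adjust user/stmt/level to what you want traced.
export EVENT_10842="server=on;user=arsov;stmt=all;level=15"

# Optional: collect client trace files in a dedicated directory.
export ORA_CLIENTTRACE_DIR=/tmp/oci_trace
mkdir -p "$ORA_CLIENTTRACE_DIR"

# Remember: ORA_CLIENTTRACE_DIR only takes effect when ADR is disabled,
# i.e. sqlnet.ora contains:  DIAG_ADR_ENABLED=off
```

Any OCI-based client started from this shell should then write its ora_skgu_<pid>.trc files into the chosen directory.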

# 04/09/17 23:27:59:615 # Thread ID 130486272 # Entry - OCIHandleAlloc(parenth = 0x7fde708b6e00, hndlp = 0x7fde708c2160, htype = service hndl, xtramem_sz =0, usermemp = 0x0, flag = TRUE);
# 04/09/17 23:27:59:615 # Thread ID 130486272 # Exit - OCIHandleAlloc
# 04/09/17 23:27:59:695 # Thread ID 130486272 # Exit - OCIAttrSet
# 04/09/17 23:27:59:695 # Thread ID 130486272 # Entry - OCIAttrSet(trgthndlp = 0x7fde7119f380, trghndltyp = 9, attributep = 0x0, size = 5, errhp = 0x7fde702f5a10);
# 04/09/17 23:27:59:696 # Thread ID 130486272 # Exit - OCIAttrSet
# 04/09/17 23:27:59:696 # Thread ID 130486272 # Entry - OCIAttrSet(trgthndlp = 0x7fde7119f380, trghndltyp = 9, attributep = 0x7fde6f553f70, size = 7, errhp = 0x7fde702f5a10);
# 04/09/17 23:27:59:696 # Thread ID 130486272 # Exit - OCIAttrSet
# 04/09/17 23:27:59:697 # Thread ID 130486272 # Entry - OCISessionBegin(svchp = 0x7fde702f5928, errhp = 0x7fde702f5a10, usrhp = 0x7fde7119f380,  : user = ARSOV, scredt = OCI_CRED_RDBMS, mode = OCI_DEFAULT(000000000));
# 04/09/17 23:27:59:723 # Thread ID 130486272 # Exit - OCISessionBegin

At the beginning of the trace file we have OCIHandleAlloc, the function that returns a pointer to the allocated handle. The handle type is htype = service hndl, which stands for the service context handle. Under the service context handle, three additional handles are allocated:

– Server handle: establishes the physical database connection.
– User session handle: defines the user’s domain (privileges, roles, etc.).
– Transaction handle: the context in which statements are executed, PL/SQL packages are initialized, fetch state is kept, etc.

The DML statement I’m using in this example inserts a row into a two-column table:

INSERT INTO t23 (N1, V1) VALUES (?, ?)

For the insert (DML) statement, another handle of htype = statement hndl is allocated.
You can also see the statement text in the OCIStmtPrepare and OCIStmtExecute entries.

# 04/09/17 23:27:59:901 # Thread ID 130486272 # Entry - OCIHandleAlloc(parenth = 0x7fde708b6e00, hndlp = 0x700007c704b0, htype = statement hndl, xtramem_sz =1000, usermemp = 0x700007c70538, flag = TRUE);
# 04/09/17 23:27:59:901 # Thread ID 130486272 # Exit - OCIHandleAlloc
# 04/09/17 23:27:59:903 # Thread ID 130486272 # Entry - OCIStmtPrepare(stmtp = 0x7fde709e43d8, errhp = 0x7fde702f5a10, stmt = INSERT INTO t23(N1, V1) VALUES(:1 ,:2 ), stmt_len = 39, language = OCI_NTV_SYNTAX,  mode = OCI_DEFAULT(000000000));
# 04/09/17 23:27:59:904 # Thread ID 130486272 # Exit - OCIStmtPrepare
# 04/09/17 23:27:59:904 # Thread ID 130486272 # Entry - OCIAttrGet(trgthndlp = 0x7fde709e43d8, trghndltyp = 4, attributep = 0x7fde709e4ac0, sizep = 0x0, attrtype = OCI_ATTR_STMT_TYPE, errhp = 0x7fde702f5a10);
# 04/09/17 23:27:59:904 # Thread ID 130486272 # Exit - OCIAttrGet
# 04/09/17 23:27:59:904 # Thread ID 130486272 # Exit - OCIAttrSet
# 04/09/17 23:27:59:904 # Thread ID 130486272 # Entry - OCIStmtExecute(svchp = 0x7fde702f5928, stmhp = 0x7fde709e43d8, errhp = 0x7fde702f5a10, iters = 1, rowoff = 0, snap_in = 0x0, snap_out = 0x0, mode = OCI_DEFAULT(000000000)for sql :
INSERT INTO t23(N1, V1) VALUES(:1 ,:2 )
# 04/09/17 23:27:59:907 # Thread ID 130486272 # Exit - OCIStmtExecute
# 04/09/17 23:27:59:907 # Thread ID 130486272 # Entry - OCIAttrGet(trgthndlp = 0x7fde709e43d8, trghndltyp = 4, attributep = 0x700007c70184, sizep = 0x0, attrtype = OCI_ATTR_ROW_COUNT, errhp = 0x7fde702f5a10);
# 04/09/17 23:27:59:907 # Thread ID 130486272 # Exit - OCIAttrGet
# 04/09/17 23:27:59:908 # Thread ID 130486272 # Entry - OCITransCommit(svchp = 0x7fde702f5928, errhp = 0x7fde702f5a10, flags = OCI_DEFAULT);
# 04/09/17 23:27:59:910 # Thread ID 130486272 # Exit - OCITransCommit
# 04/09/17 23:27:59:911 # Thread ID 130486272 # Entry - OCIHandleFree(hndlp = 0x7fde709e43d8, type = statement hndl);
# 04/09/17 23:27:59:911 # Thread ID 130486272 # Exit - OCIHandleFree
# 04/09/17 23:27:59:911 # Thread ID 130486272 # Entry - OCISessionEnd(svchp = 0x7fde702f5928, errhp = 0x7fde702f5a10, usrhp = 0x7fde7119f380,  : user=******,mode = OCI_DEFAULT(000000000)# 04/09/17 23:27:59:914 # Thread ID 130486272 # Exit - OCISessionEnd
# 04/09/17 23:27:59:915 # Thread ID 130486272 # Entry - OCIServerDetach(srvhp = 0x7fde701bfc60, errhp = 0x7fde702f5a10, mode = OCI_DEFAULT);
(000000000)# 04/09/17 23:27:59:916 # Thread ID 130486272 # Exit - OCIServerDetach
# 04/09/17 23:27:59:916 # Thread ID 130486272 # Entry - OCIHandleFree(hndlp = 0x7fde701bfc60, type = server hndl);
# 04/09/17 23:27:59:917 # Thread ID 130486272 # Exit - OCIHandleFree
# 04/09/17 23:27:59:917 # Thread ID 130486272 # Entry - OCIHandleFree(hndlp = 0x7fde7119f380, type = authentication hndl);
# 04/09/17 23:27:59:917 # Thread ID 130486272 # Exit - OCIHandleFree
# 04/09/17 23:27:59:917 # Thread ID 130486272 # Entry - OCIHandleFree(hndlp = 0x7fde702f5928, type = service hndl);
# 04/09/17 23:27:59:917 # Thread ID 130486272 # Exit - OCIHandleFree
# 04/09/17 23:27:59:917 # Thread ID 130486272 # Entry - OCIHandleFree(hndlp = 0x7fde702f5a10, type = error hndl);
# 04/09/17 23:27:59:917 # Thread ID 130486272 # Exit - OCIHandleFree
# 04/09/17 23:27:59:917 # Thread ID 130486272 # Entry - OCIHandleFree(hndlp = 0x7fde708b6e00, type = environment hndl);

First, the statement is prepared for execution; this step happens in the OCIStmtPrepare function call:
stmt = INSERT INTO t23(N1, V1) VALUES(:1 ,:2 ) – the actual statement
stmt_len = 39 – the length of the statement
language = OCI_NTV_SYNTAX – the language type for syntax checking

Next, OCIAttrGet is called to identify the statement type via the OCI_ATTR_STMT_TYPE attribute; in this case it is OCI_STMT_INSERT.

The statement is then executed via OCIStmtExecute.
After execution, the OCI_ATTR_ROW_COUNT attribute is retrieved, which holds the number of rows processed by the last call; it is the same value you see with SQL%ROWCOUNT.

The transaction is committed with the OCITransCommit function call. In this case the default flags = OCI_DEFAULT is used, which performs a one-phase commit since the transaction is non-distributed.

After the statement is executed, several OCI functions are called (OCISessionEnd and OCIServerDetach) to end the session and detach from the server, followed by several OCIHandleFree calls to free the previously allocated handles.

Also, for each connection an error handle is allocated, which records information when an error occurs. Let’s simulate this with an insert statement that fails with ORA-01722: invalid number.

# 04/10/17 18:34:51:187 # Thread ID 3292144576 # Entry - OCIErrorGet(hndlp = 0x7fe202010c10, recordno = 1, sqlstate = NULL, errcodep = 0x7fff5ac20c08 = 0, bufsiz = 2048, type = OCI_HTYPE_ERROR);
# 04/10/17 18:34:51:187 # Thread ID 3292144576 # Exit - OCIErrorGet
# 04/10/17 18:34:51:188 # Thread ID 3292144576 # Entry - OCIErrorGet(hndlp = 0x7fe202010c10, recordno = 2, sqlstate = NULL, errcodep = 0x7fff5ac20c08 = 1722, bufsiz = 2048, type = OCI_HTYPE_ERROR);
# 04/10/17 18:34:51:188 # Thread ID 3292144576 # Exit - OCIErrorGet

The error code is reported via the OCIErrorGet function call; in this case errcodep = 1722.
The statement that was executed:

SQL> insert into t  values   ('iarsov','test');
insert into t  values   ('iarsov','test')
                           *
ERROR at line 1:
ORA-01722: invalid number
For a SELECT statement there are multiple OCIStmtFetch calls, depending on the array size. I used the default array size of 15, which is shown in the nrows parameter.

# 04/10/17 18:51:46:868 # Thread ID 3292144576 # Entry - OCIStmtPrepare(stmtp = 0x7f98e38058f8, errhp = 0x7f98e2853210, stmt = select * from t, stmt_len = 17, language = OCI_NTV_SYNTAX,  mode = 32(0x0000020));
# 04/10/17 18:51:46:868 # Thread ID 3292144576 # Exit - OCIStmtPrepare
# 04/10/17 18:51:46:868 # Thread ID 3292144576 # Exit - OCIStmtPrepare2(stmhp = 0x7f98e38058f8);
# 04/10/17 18:51:46:868 # Thread ID 3292144576 # Exit - OCIStmtGetBindInfo
# 04/10/17 18:51:46:868 # Thread ID 3292144576 # Entry - OCIStmtExecute(svchp = 0x7f98e2854d28, stmhp = 0x7f98e38058f8, errhp = 0x7f98e2853210, iters = 0, rowoff = 0, snap_in = 0x0, snap_out = 0x0, mode = OCI_DEFAULT(000000000)for sql :
select * from t23
# 04/10/17 18:51:46:883 # Thread ID 3292144576 # Exit - OCIStmtExecute
# 04/10/17 18:51:46:884 # Thread ID 3292144576 # Entry - OCIDescriptorAlloc(parenth = 0x7f98e38058f8, descpp = 0x7fff52a1d4f8, trc_type = 53 = parameter descriptor, xtramem_sz = 0, usrmempp = 0x0);
# 04/10/17 18:51:46:884 # Thread ID 3292144576 # Exit - OCIDescriptorAlloc
# 04/10/17 18:51:46:885 # Thread ID 3292144576 # Entry - OCIDescriptorAlloc(parenth = 0x7f98e38058f8, descpp = 0x7fff52a1d4f8, trc_type = 53 = parameter descriptor, xtramem_sz = 0, usrmempp = 0x0);
# 04/10/17 18:51:46:885 # Thread ID 3292144576 # Exit - OCIDescriptorAlloc
# 04/10/17 18:51:46:886 # Thread ID 3292144576 # Entry - OCIStmtFetch(stmhp = 0x7f98e38058f8, errhp = 0x7f98e2853210, nrows = 15, orientation = OCI_FETCH_NEXT, mode = OCI_DEFAULT(000000000)for sql :
select * from t
));
# 04/10/17 18:51:46:887 # Thread ID 3292144576 # Exit - OCIStmtFetch
# 04/10/17 18:51:46:889 # Thread ID 3292144576 # Entry - OCIStmtFetch(stmhp = 0x7f98e38058f8, errhp = 0x7f98e2853210, nrows = 15, orientation = OCI_FETCH_NEXT, mode = OCI_DEFAULT(000000000)for sql :
select * from t
));
# 04/10/17 18:51:46:891 # Thread ID 3292144576 # Exit - OCIStmtFetch
# 04/10/17 18:51:46:891 # Thread ID 3292144576 # Entry - OCIStmtFetch(stmhp = 0x7f98e38058f8, errhp = 0x7f98e2853210, nrows = 15, orientation = OCI_FETCH_NEXT, mode = OCI_DEFAULT(000000000)for sql :
select * from t
));
# 04/10/17 18:51:46:891 # Thread ID 3292144576 # Exit - OCIStmtFetch
# 04/10/17 18:51:46:891 # Thread ID 3292144576 # Entry - OCIStmtRelease(stmtp = 0x7f98e38058f8, errhp = 0x7f98e2853210, key = , key_len = 0, mode = OCI_DEFAULT);
# 04/10/17 18:51:46:891 # Thread ID 3292144576 # Exit - OCIStmtRelease
# 04/10/17 18:51:46:891 # Thread ID 3292144576 # Entry - OCIHandleFree(hndlp = 0x7f98e38058f8, type = statement hndl);

 
You’re unlikely to need OCI call tracing on a daily basis.
However, it can be useful when troubleshooting issues at the OCI call level.

The post Oracle Call Interface (OCI) Tracing appeared first on The Ivica Arsov Blog.


A few weeks ago I published a post on Pythian’s blog about a scenario where dropping a potentially ‘unused’ index can have a negative influence on the optimizer’s cardinality estimation.

I will post only the introduction here; you can find a link to the full post at the bottom.

THE SIDE EFFECTS OF DROP ‘UNUSED’ INDEX

In this blog post I’ll demonstrate a scenario where dropping a potentially ‘unused’ index can have a negative influence on the optimizer’s cardinality estimation. Columns with logical correlation and skewed data can make cardinality difficult for the optimizer to calculate. This post will also address what we can do to help the optimizer produce better cardinality estimates.

The inspiration for this post came from a recent index usage review. One of the requirements was to determine how to confirm which unused indexes qualify to be dropped. I decided to run some tests on extended statistics and the effect of dropping a potentially ‘unused’ index. You will see what kind of result the drop of an index which has not been used can produce. It’s important to remember that this does not apply in all cases; occasionally, even if an index is used, it doesn’t mean that it’s needed.

This mostly applies to columns with skewed data that might have a logical relationship.
Hopefully, this post can help you answer some of the following questions:

Is the optimizer using the indexes behind the scenes?
While there are methods to determine whether an index has been used in an execution plan, can an index be dropped on that basis alone?
If we drop a composite index (built from correlated columns), can we do anything to avoid performance degradation?
Before we start with the use case, let’s briefly review some concepts.

The basic formula for selectivity is 1/NDV. The cardinality (CDN) is calculated as selectivity * total number of rows.
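As a quick numeric illustration of the basic formula (a sketch with made-up numbers, not Oracle internals):

```python
# Selectivity = 1/NDV; cardinality (CDN) = selectivity * total number of rows.
# The inputs below are hypothetical, purely for illustration.
num_rows = 100_000   # total rows in the table
ndv = 250            # number of distinct values in the predicate column

selectivity = 1 / ndv                  # 0.004
cardinality = selectivity * num_rows   # 400.0

print(f"selectivity = {selectivity}, estimated cardinality = {cardinality}")
```

With these numbers the optimizer would estimate 400 rows for an equality predicate on that column.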

The selectivity of a join is defined as the selectivity of the most selective join column adjusted by the proportion of not null values in each join column.

Join Selectivity:
Sel = 1/max[NDV(t1.c1),NDV(t2.c2)] *
( (Card t1 - # t1.c1 NULLs) / Card t1) *
( (Card t2 - # t2.c2 NULLs) / Card t2)

Join Cardinality:
Card(Pj) = Card(T1) * Card(T2) * Sel(Pj)
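The two join formulas above can be sketched in a few lines of Python (the table sizes, NDVs, and NULL counts are hypothetical, chosen only to show the arithmetic):

```python
# Join selectivity, per the formula above:
# Sel = 1/max(NDV(t1.c1), NDV(t2.c2))
#       * ((Card(t1) - nulls(t1.c1)) / Card(t1))
#       * ((Card(t2) - nulls(t2.c2)) / Card(t2))
def join_selectivity(ndv1, ndv2, card1, nulls1, card2, nulls2):
    return (1 / max(ndv1, ndv2)) \
        * ((card1 - nulls1) / card1) \
        * ((card2 - nulls2) / card2)

# Join cardinality: Card(Pj) = Card(T1) * Card(T2) * Sel(Pj)
def join_cardinality(card1, card2, sel):
    return card1 * card2 * sel

# Hypothetical example: t1 has 10,000 rows (NDV 100, no NULLs in c1);
# t2 has 1,000 rows (NDV 50, 200 NULLs in c2).
sel = join_selectivity(100, 50, 10_000, 0, 1_000, 200)
print(join_cardinality(10_000, 1_000, sel))  # roughly 80,000
```

Note how the NULL adjustment matters: 20% of t2’s join column is NULL, so the estimate drops by the same proportion.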

In Oracle’s Doc ID 68992.1 you can find a more detailed explanation of the selectivity calculations for different predicates. For simplicity, I will use an equality predicate.

This blog post is divided into three sections:

A use case demonstrating how dropping an “unused” index can skew the optimizer’s cardinality calculation.
How to help the optimizer produce better cardinality estimates with extended statistics.
More explanation of column correlation (CorStregth).

You can read the full post here.

The post The Side Effects Of Drop ‘UNUSED’ Index appeared first on The Ivica Arsov Blog.


ROWNUM is a pseudocolumn that assigns unique numbers (1, 2, 3, 4, 5, …) to result rows as they are retrieved. In this blog post I will demonstrate a situation where using a ROWNUM filter on a table that has a lot of empty blocks can negatively impact execution time.
The root cause is not ROWNUM itself; however, a lot of people are surprised when using a filter as simple as ROWNUM
