You may want to move an Oracle database from one location to another on a Windows Server, for example from one drive to another.  It's important to keep all the file permissions preserved, so Oracle can still access the files.

Doing this is simple using xcopy with certain switches, as explained in this Microsoft article:

https://support.microsoft.com/en-gb/help/323007/how-to-copy-a-folder-to-another-folder-and-retain-its-permissions
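
In summary: /O copies file ownership and ACL information, /X also copies file audit settings (and implies /O), /E copies all subdirectories including empty ones, /H copies hidden and system files, and /K preserves file attributes.  A minimal sketch of the pattern (the drive letters are placeholders; the database should be cleanly shut down first so none of the files are locked in use):

C:\Windows\system32>xcopy <source>:\oradata <target>:\oradata /O /X /E /H /K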

Below is where I needed to move oradata from u: drive to g: drive:

Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.

C:\Windows\system32>xcopy u:\oradata g:\oradata /O /X /E /H /K
Does G:\oradata specify a file name
or directory name on the target
(F = file, D = directory)? d
U:\oradata\PEPM\CONTROLFILE\O1_MF_FD9ZRZBP_.CTL
U:\oradata\PEPM\DATAFILE\O1_MF_PEPM_ST_FDW1GL8P_.DBF
U:\oradata\PEPM\DATAFILE\O1_MF_PEPM_TS_FDW1GX6J_.DBF
U:\oradata\PEPM\DATAFILE\O1_MF_PERFSTAT_FDWHDK7L_.DBF
U:\oradata\PEPM\DATAFILE\O1_MF_SYSAUX_FD9ZHRHO_.DBF
U:\oradata\PEPM\DATAFILE\O1_MF_SYSTEM_FD9ZL3SK_.DBF
U:\oradata\PEPM\DATAFILE\O1_MF_TEMP_FD9ZST99_.TMP
U:\oradata\PEPM\DATAFILE\O1_MF_UNDOTBS1_FD9ZO4DD_.DBF
U:\oradata\PEPM\DATAFILE\O1_MF_USERS_FD9ZO361_.DBF
U:\oradata\PEPM\ONLINELOG\O1_MF_1_FD9ZS2RH_.LOG
U:\oradata\PEPM\ONLINELOG\O1_MF_2_FD9ZS9P3_.LOG
U:\oradata\PEPM\ONLINELOG\O1_MF_3_FD9ZSJDT_.LOG
12 File(s) copied

C:\Windows\system32>

And fast_recovery_area from v: drive to i: drive:

Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.

C:\Windows\system32>xcopy v:\fast_recovery_area i:\fast_recovery_area /O /X /E /H /K
Does I:\fast_recovery_area specify a file name
or directory name on the target
(F = file, D = directory)? d
V:\fast_recovery_area\PEPM\CONTROLFILE\O1_MF_FD9ZRZGM_.CTL
V:\fast_recovery_area\PEPM\ONLINELOG\O1_MF_1_FD9ZS67G_.LOG
V:\fast_recovery_area\PEPM\ONLINELOG\O1_MF_2_FD9ZSDYD_.LOG
V:\fast_recovery_area\PEPM\ONLINELOG\O1_MF_3_FD9ZSMS6_.LOG
4 File(s) copied

C:\Windows\system32>

Without using this method, you would have to set the permissions manually, which is time-consuming and error prone.
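
Alternatively, robocopy (shipped with Windows Server) can achieve the same result; a minimal sketch, where /E copies subdirectories including empty ones and /COPYALL copies all file information including ACLs, ownership and auditing information (copying auditing information may require an elevated prompt):

C:\Windows\system32>robocopy u:\oradata g:\oradata /E /COPYALL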

If you found this blog post useful, please like it, follow me through the various social media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)


When a database starts to churn more redo than normal, it is handy to be able to extract the amount of redo generated over time, so that it can be plotted in a graph.  The query below allows you to extract this information.

Query for redo generation

Query to obtain the amount of redo generated over time, by hour, in MB.  Note that V$ARCHIVED_LOG contains one row per archive destination, so on systems with multiple destinations restrict the query to a single DEST_ID to avoid double counting:

set pages 999 lines 400
select to_char(trunc(first_time, 'HH24'), 'DD/MM/YYYY HH24:MI:SS') date_by_hour,
       sum(round(blocks*block_size/1024/1024)) CHURN_IN_MB
from   v$archived_log
group  by trunc(first_time, 'HH24')
order  by 1
/

Output

This is the output you will get from the query:

SQL> set pages 999 lines 400
SQL> select to_char(trunc(first_time, 'HH24'), 'DD/MM/YYYY HH24:MI:SS') date_by_hour, sum(round(blocks*block_size/1024/1024)) CHURN_IN_MB
2 from v$archived_log
3 group by trunc(first_time, 'HH24')
4 order by 1
5 /

DATE_BY_HOUR CHURN_IN_MB
------------------- -----------
07/12/2018 10:00:00 2
07/12/2018 11:00:00 51
07/12/2018 12:00:00 3731
07/12/2018 13:00:00 10857
07/12/2018 14:00:00 12505
07/12/2018 15:00:00 17493
07/12/2018 16:00:00 187
07/12/2018 17:00:00 173
07/12/2018 18:00:00 185
07/12/2018 19:00:00 137
07/12/2018 20:00:00 159
07/12/2018 21:00:00 155
07/12/2018 22:00:00 157
07/12/2018 23:00:00 183
08/12/2018 00:00:00 154
08/12/2018 01:00:00 184
08/12/2018 02:00:00 179
08/12/2018 03:00:00 179
08/12/2018 04:00:00 172
08/12/2018 05:00:00 177
08/12/2018 06:00:00 174
08/12/2018 07:00:00 172
08/12/2018 08:00:00 177
08/12/2018 09:00:00 175
08/12/2018 10:00:00 175
08/12/2018 11:00:00 220
08/12/2018 12:00:00 221
08/12/2018 13:00:00 218
08/12/2018 14:00:00 216
08/12/2018 15:00:00 214
08/12/2018 16:00:00 212
08/12/2018 17:00:00 208
08/12/2018 18:00:00 213
08/12/2018 19:00:00 207
08/12/2018 20:00:00 205
08/12/2018 21:00:00 205
08/12/2018 22:00:00 202
08/12/2018 23:00:00 228
09/12/2018 00:00:00 202
09/12/2018 01:00:00 238
09/12/2018 02:00:00 212
09/12/2018 03:00:00 227
09/12/2018 04:00:00 213
09/12/2018 05:00:00 206
09/12/2018 06:00:00 221
09/12/2018 07:00:00 222
09/12/2018 08:00:00 216
09/12/2018 09:00:00 220
09/12/2018 10:00:00 216
09/12/2018 11:00:00 217
09/12/2018 12:00:00 162
09/12/2018 13:00:00 163
09/12/2018 14:00:00 163
09/12/2018 15:00:00 160
09/12/2018 16:00:00 158
09/12/2018 17:00:00 159
09/12/2018 18:00:00 161
09/12/2018 19:00:00 157
09/12/2018 20:00:00 157
09/12/2018 21:00:00 153
09/12/2018 22:00:00 153
09/12/2018 23:00:00 176
10/12/2018 00:00:00 150
10/12/2018 01:00:00 174
10/12/2018 02:00:00 168
10/12/2018 03:00:00 167
10/12/2018 04:00:00 169
10/12/2018 05:00:00 162
10/12/2018 06:00:00 168
10/12/2018 07:00:00 166
10/12/2018 08:00:00 160
10/12/2018 09:00:00 162
10/12/2018 10:00:00 141
10/12/2018 11:00:00 144
10/12/2018 12:00:00 142
10/12/2018 13:00:00 141
10/12/2018 14:00:00 142
10/12/2018 15:00:00 169
10/12/2018 16:00:00 146
10/12/2018 17:00:00 173
10/12/2018 18:00:00 177
10/12/2018 19:00:00 175
10/12/2018 20:00:00 7278
10/12/2018 21:00:00 12604
10/12/2018 22:00:00 18154
10/12/2018 23:00:00 6844
11/12/2018 00:00:00 1350
11/12/2018 01:00:00 505
11/12/2018 02:00:00 1183
11/12/2018 03:00:00 508
11/12/2018 04:00:00 1488
11/12/2018 05:00:00 7071
11/12/2018 06:00:00 16453
11/12/2018 07:00:00 7076
11/12/2018 08:00:00 17310
11/12/2018 09:00:00 8063
11/12/2018 10:00:00 12681
11/12/2018 11:00:00 3678
11/12/2018 14:00:00 6026
11/12/2018 15:00:00 15569
11/12/2018 16:00:00 7069
11/12/2018 17:00:00 11772
11/12/2018 18:00:00 10167
11/12/2018 19:00:00 6159
11/12/2018 20:00:00 16450
11/12/2018 21:00:00 4106
11/12/2018 22:00:00 10115
11/12/2018 23:00:00 10355
12/12/2018 00:00:00 3203
12/12/2018 01:00:00 5160
12/12/2018 02:00:00 14468
12/12/2018 03:00:00 6591
12/12/2018 04:00:00 1376
12/12/2018 05:00:00 4053
12/12/2018 06:00:00 7947
12/12/2018 07:00:00 12433
12/12/2018 08:00:00 1434
12/12/2018 09:00:00 663
12/12/2018 10:00:00 1511
12/12/2018 11:00:00 654
12/12/2018 12:00:00 5661
12/12/2018 13:00:00 9817
12/12/2018 14:00:00 10148
12/12/2018 15:00:00 372
12/12/2018 16:00:00 1074
12/12/2018 17:00:00 672
12/12/2018 18:00:00 1094
12/12/2018 19:00:00 391
12/12/2018 20:00:00 2403
12/12/2018 21:00:00 827
12/12/2018 22:00:00 1108
12/12/2018 23:00:00 15575
13/12/2018 00:00:00 17219
13/12/2018 01:00:00 8255
13/12/2018 02:00:00 877
13/12/2018 03:00:00 180
13/12/2018 04:00:00 1782
13/12/2018 05:00:00 5284
13/12/2018 06:00:00 16191
13/12/2018 07:00:00 6251
13/12/2018 08:00:00 14533
13/12/2018 09:00:00 8138
13/12/2018 10:00:00 12629
13/12/2018 11:00:00 9701
13/12/2018 12:00:00 9869
13/12/2018 13:00:00 9554
13/12/2018 14:00:00 7106
13/12/2018 15:00:00 15094
13/12/2018 16:00:00 8622
13/12/2018 17:00:00 671
13/12/2018 18:00:00 1094
13/12/2018 19:00:00 370
13/12/2018 20:00:00 2332
13/12/2018 21:00:00 421

154 rows selected.

SQL>

The above output can then be used to create a pivot chart in Excel.
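
To get the output into Excel cleanly, one option is to spool it as CSV; a minimal sketch, assuming SQL*Plus 12.2 or later (older clients can approximate this with set colsep ','):

set markup csv on
spool redo_churn.csv
select to_char(trunc(first_time, 'HH24'), 'DD/MM/YYYY HH24:MI:SS') date_by_hour,
       sum(round(blocks*block_size/1024/1024)) CHURN_IN_MB
from   v$archived_log
group  by trunc(first_time, 'HH24')
order  by 1
/
spool off
set markup csv off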

If you found this blog post useful, please like it, follow me through the various social media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)


Background

For more information with regards to Exa Check, please read the following post:
How to use Oracle Exadata Database Machine Exa Check (exachk)

Slow Performance, Skipped Checks and Timeouts

When running the latest exachk (version 18.3.0_20180808 at the time of writing), you may notice it takes a long time to run compared to the past.  This is due to the vast number of additional checks carried out by the tool.  Because of this, you may also notice timeout issues reported in the report:

Killed Processes

exachk found that below commands were killed during the run, so some checks might have failed to execute properly. Refer to the “Slow Performance, Skipped Checks, and Timeouts” section of the user guide for corrective actions.

Killed check Manage ASM Audit File Directory Growth with cron (CHECK-ID: 9DEBED7B8DAB583DE040E50A1EC01BA0) at v1ex2dbadm01 because it timed out
Killed check Manage ASM Audit File Directory Growth with cron (CHECK-ID: 9DEBED7B8DAB583DE040E50A1EC01BA0) at v1ex2dbadm02 because it timed out

If you refer to the documentation “Slow Performance, Skipped Checks, and Timeouts“, you’ll see there are various parameters you can set in your environment to increase the default timeouts, which I have done below:

[root@v1ex2dbadm01 exachk]# export RAT_TIMEOUT=300
[root@v1ex2dbadm01 exachk]# export RAT_ROOT_TIMEOUT=900
[root@v1ex2dbadm01 exachk]# export RAT_PASSWORDCHECK_TIMEOUT=10
[root@v1ex2dbadm01 exachk]# export RAT_PROMPT_TIMEOUT=30
[root@v1ex2dbadm01 exachk]# export RAT_PROMPT_WAIT_TIMEOUT=60
[root@v1ex2dbadm01 exachk]# export RAT_REMOTE_RUN_TIMEOUT=10800
[root@v1ex2dbadm01 exachk]#
[root@v1ex2dbadm01 exachk]# env | grep RAT
RAT_ROOT_TIMEOUT=900
RAT_PROMPT_TIMEOUT=30
RAT_TIMEOUT=300
RAT_REMOTE_RUN_TIMEOUT=10800
RAT_PASSWORDCHECK_TIMEOUT=10
RAT_PROMPT_WAIT_TIMEOUT=60
[root@v1ex2dbadm01 exachk]#

Now when you run exachk, it will wait longer before killing processes.

In addition, if you run with the "-dbparallelmax" option, exachk will use the maximum number of slave processes for the database checks:

[root@v1ex2dbadm01 exachk]# ./exachk -dbparallelmax

PLEASE NOTE: This will consume more resources but will run quicker, so use with caution.  Alternatively, you can run with "-dbparallel" and an acceptable number of processes, increasing as per your requirements.
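
For example, a minimal sketch starting with a modest slave count (the figure of 4 is illustrative; tune it to your system's spare capacity):

[root@v1ex2dbadm01 exachk]# ./exachk -dbparallel 4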

Now you should not have any timeouts; if you still do, review the parameters above and increase them again, or alternatively raise a Service Request with Oracle Support.

If you found this blog post useful, please like it, follow me through the various social media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)


On an Oracle Exadata Database Machine, the '/' (root) file system defaults to a size of 30GB, which can easily fill up.  Luckily, this is just a logical volume, and there's normally plenty of untapped space available in the volume group.

Extending ‘/’

Identify how much space is used and free on ‘/’ using df:

[root@v1ex1dbadm01 ~]# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       30G   22G  6.2G  79% /
[root@v1ex1dbadm01 ~]#

Display the current logical volume configuration using the lvs command:

[root@v1ex1dbadm01 ~]# lvs -o lv_name,lv_path,vg_name,lv_size
  LV                 Path                            VG      LSize
  LVDbOra1           /dev/VGExaDb/LVDbOra1           VGExaDb 200.00g
  LVDbSwap1          /dev/VGExaDb/LVDbSwap1          VGExaDb  24.00g
  LVDbSys1           /dev/VGExaDb/LVDbSys1           VGExaDb  30.00g
  LVDbSys2           /dev/VGExaDb/LVDbSys2           VGExaDb  30.00g
  LVDoNotRemoveOrUse /dev/VGExaDb/LVDoNotRemoveOrUse VGExaDb   1.00g
[root@v1ex1dbadm01 ~]#

PLEASE NOTE: On Exadata there are 2 SYS volumes, one active and one inactive.  These are used when patching the compute node: the inactive one holds a backup of the current image and is used for rollback purposes.
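
To confirm which SYS volume is currently active, the Exadata imageinfo utility reports the active and inactive system partitions; a minimal sketch (the grep pattern assumes the "system partition" wording in the output, which may vary by image version):

[root@v1ex1dbadm01 ~]# imageinfo | grep -i 'system partition'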

Check that the online resize option is available using the tune2fs command:

[root@v1ex1dbadm01 ~]# tune2fs -l /dev/VGExaDb/LVDbSys1 | grep resize_inode
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
[root@v1ex1dbadm01 ~]# tune2fs -l /dev/VGExaDb/LVDbSys2 | grep resize_inode
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
[root@v1ex1dbadm01 ~]#

If it is not available, then the file system needs to be unmounted before resizing.  Refer to the documentation:

Extending the root LVM Partition on Systems Running Oracle Exadata System Software Earlier than Release 11.2.3.2.1

Verify there's space available in the volume group using the vgdisplay command:

[root@v1ex1dbadm01 ~]# vgdisplay -s
  "VGExaDb" 1.63 TiB  [285.00 GiB used / 1.36 TiB free]
[root@v1ex1dbadm01 ~]#

Finally, if there's enough space, extend the logical volumes using the lvextend command:

[root@v1ex1dbadm01 ~]# lvextend -L +70G /dev/VGExaDb/LVDbSys1
  Size of logical volume VGExaDb/LVDbSys1 changed from 30.00 GiB (7680 extents) to 100.00 GiB (25600 extents).
  Logical volume LVDbSys1 successfully resized.
[root@v1ex1dbadm01 ~]#
[root@v1ex1dbadm01 ~]# lvextend -L +70G /dev/VGExaDb/LVDbSys2
  Size of logical volume VGExaDb/LVDbSys2 changed from 30.00 GiB (7680 extents) to 100.00 GiB (25600 extents).
  Logical volume LVDbSys2 successfully resized.
[root@v1ex1dbadm01 ~]#
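
As a sanity check on the numbers: the extent counts imply a 4 MiB physical extent size (30 GiB / 7680 extents = 4 MiB), so adding 70 GiB adds 17,920 extents, taking each volume to 25,600 extents, i.e. 100 GiB.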

Then resize the file system using the resize2fs command:

[root@v1ex1dbadm01 ~]# resize2fs /dev/VGExaDb/LVDbSys1
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/VGExaDb/LVDbSys1 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 7
Performing an on-line resize of /dev/VGExaDb/LVDbSys1 to 26214400 (4k) blocks.
The filesystem on /dev/VGExaDb/LVDbSys1 is now 26214400 blocks long.
[root@v1ex1dbadm01 ~]#

The inactive SYS volume may give you errors as shown below:

[root@v1ex1dbadm01 ~]# resize2fs /dev/VGExaDb/LVDbSys2
resize2fs 1.43-WIP (20-Jun-2013)
Please run 'e2fsck -f /dev/VGExaDb/LVDbSys2' first.
[root@v1ex1dbadm01 ~]#

In which case, you just need to run the suggested command, which checks the file system for errors that may have occurred with journalling after an unclean shutdown:

[root@v1ex1dbadm01 ~]# e2fsck -f /dev/VGExaDb/LVDbSys2
e2fsck 1.43-WIP (20-Jun-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/VGExaDb/LVDbSys2: 111629/1966080 files (0.1% non-contiguous), 5031185/7864320 blocks
[root@v1ex1dbadm01 ~]#

Now re-run the file system resize using the resize2fs command:

[root@v1ex1dbadm01 ~]# resize2fs /dev/VGExaDb/LVDbSys2
resize2fs 1.43-WIP (20-Jun-2013)
Resizing the filesystem on /dev/VGExaDb/LVDbSys2 to 26214400 (4k) blocks.
The filesystem on /dev/VGExaDb/LVDbSys2 is now 26214400 blocks long.
[root@v1ex1dbadm01 ~]#

You should now see '/' with the additional 70GB, less formatting overhead:

[root@v1ex1dbadm01 ~]# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       99G   22G   72G  24% /
[root@v1ex1dbadm01 ~]#

You can also see that the logical volumes are now 100GB, up from 30GB:

[root@v1ex1dbadm01 ~]# lvs -o lv_name,lv_path,vg_name,lv_size
  LV                 Path                            VG      LSize
  LVDbOra1           /dev/VGExaDb/LVDbOra1           VGExaDb 200.00g
  LVDbSwap1          /dev/VGExaDb/LVDbSwap1          VGExaDb  24.00g
  LVDbSys1           /dev/VGExaDb/LVDbSys1           VGExaDb 100.00g
  LVDbSys2           /dev/VGExaDb/LVDbSys2           VGExaDb 100.00g
  LVDoNotRemoveOrUse /dev/VGExaDb/LVDoNotRemoveOrUse VGExaDb   1.00g
[root@v1ex1dbadm01 ~]#

Documentation for reference:
Extending the root LVM Partition on Systems Running Oracle Exadata System Software Release 11.2.3.2.1 or Later

Related Post:
Extending a Non-root LVM Partition on Exadata

If you found this blog post useful, please like it, follow me through the various social media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)


Applies ONLY to Oracle Database 10gR1 through 12cR1.
In Oracle Database 12cR2, this feature has been replaced.  A new blog post on the replacement feature will be posted in due course.

Since Oracle 10g, you can monitor indexes to see whether they are being used.  This is very useful because unused indexes consume unnecessary CPU and I/O on DML activity.  It's therefore recommended to monitor indexes; any unused indexes can then be dropped, freeing not only vital space but also CPU and I/O, resulting in better throughput.

It is recommended to enable index monitoring for a period that is representative of your full workload cycle, for example one that covers any reports run every week, month or even year, so that all such activity is caught.

Please Note: It's important that foreign key constraints have associated indexes, to avoid table-level locks on DML.  Such indirect use of indexes will not be detected by index monitoring, so be mindful when dropping indexes that are associated with foreign key constraints.
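
As a starting point for that cross-check, a minimal sketch listing a schema's foreign key constraints and their columns (the owner ZEDDBA is illustrative), which can be compared against any indexes you plan to drop:

-- List foreign key constraints and their columns for a schema.
select c.constraint_name, c.table_name, cc.column_name, cc.position
from   dba_constraints c
       join dba_cons_columns cc
         on  cc.owner = c.owner
         and cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'
and    c.owner = 'ZEDDBA'
order  by c.table_name, c.constraint_name, cc.position;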

Enable Index Monitoring

To Enable Index Monitoring, you alter the index as shown below:

ALTER INDEX <schema>.<index_name> MONITORING USAGE;
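
To enable monitoring for every index in a schema at once, the statements can be generated from the data dictionary; a minimal sketch (the owner ZEDDBA is illustrative; LOB indexes are excluded as they cannot be altered this way):

-- Generate MONITORING USAGE statements for all of a schema's indexes.
select 'ALTER INDEX "' || owner || '"."' || index_name || '" MONITORING USAGE;' stmt
from   dba_indexes
where  owner = 'ZEDDBA'
and    index_type != 'LOB';
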
Checking Index Usage

How you check whether an index has been used depends on the version of Oracle:

Oracle 10g/11g
SELECT * FROM V$OBJECT_USAGE;

Database Reference 10g – V$OBJECT_USAGE
Database Reference 11g – V$OBJECT_USAGE

Please Note: This view only shows the indexes belonging to the logged-in user.

To see all indexes in the database, regardless of the logged-in user:

select do.owner,
       t.name table_name,
       io.name index_name,
       decode(bitand(i.flags, 65536), 0, 'NO', 'YES') monitoring,
       decode(bitand(ou.flags, 1), 0, 'NO', 'YES') used,
       ou.start_monitoring,
       ou.end_monitoring
from   sys.obj$ io,
       sys.obj$ t,
       sys.ind$ i,
       sys.object_usage ou,
       dba_objects do
where  i.obj# = ou.obj#
and    io.obj# = ou.obj#
and    t.obj# = i.bo#
and    i.obj# = do.object_id
order  by 1, 2, 3;

DISCLAIMER: As this query accesses data dictionary objects directly, it is not guaranteed to work across versions.

Oracle 12cR1

“The V$OBJECT_USAGE view is deprecated in Oracle Database 12c Release 1 (12.1) and maintained for backward compatibility. Support for this view may be removed in a future release. Oracle recommends that you use the USER_OBJECT_USAGE view instead of the V$OBJECT_USAGE view.”

Database Reference 12cR1 – V$OBJECT_USAGE

For the currently logged-in user:

SELECT * FROM USER_OBJECT_USAGE;

Database Reference 12cR1 – USER_OBJECT_USAGE

For all users:

SELECT * FROM DBA_OBJECT_USAGE;

Database Reference 12cR1 – DBA_OBJECT_USAGE

Disable Index Monitoring

To disable index monitoring, you alter the index as shown below:

ALTER INDEX <schema>.<index_name> NOMONITORING USAGE;

Demo of Index Monitoring

The content of index_monitoring_demo.sql is below:

-------------------------------------------------------------
--
-- Index Monitoring Demo
--
-- File Name: index_monitoring_demo.sql (blog.zeddba.com)
-- Created: Zahid Anwar (ZedDBA)
-- 
-- Date: 16/07/2018
-- Version: 1.0
--
-- History
--
--1.0 16/07/2018 ZA Initial Script
--
-------------------------------------------------------------
set echo on
set pages 999 lines 400

CREATE TABLE "EMP"
( "ID" NUMBER,
"FIRST_NAME" VARCHAR2(50),
"LAST_NAME" VARCHAR2(50)
);

pause Press Enter

INSERT INTO "EMP" VALUES (1, 'Zahid', 'Anwar');
INSERT INTO "EMP" VALUES (2, 'Scott', 'Tiger');
INSERT INTO "EMP" VALUES (3, 'Joe', 'Bloggs');

commit;

pause Press Enter

set autotrace on
col id format 99
col first_name format a50
col last_Name format a50

select * from emp;

pause Press Enter

create index emp_id on emp (id);

pause Press Enter

select * from emp where id = 1;

pause Press Enter

set autotrace off

col INDEX_NAME format a20
col TABLE_NAME format a20
col MONITORING format a15
col USED format a10
select * from USER_OBJECT_USAGE;

pause Press Enter

alter index emp_id monitoring usage;

pause Press Enter

select * from USER_OBJECT_USAGE;

pause Press Enter

set autotrace on

select * from emp;

pause Press Enter

set autotrace off

select * from USER_OBJECT_USAGE;

pause Press Enter

set autotrace on

select * from emp where id = 1;

pause Press Enter

set autotrace off

select * from USER_OBJECT_USAGE;

pause Press Enter

alter index emp_id nomonitoring usage;

pause Press Enter

select * from USER_OBJECT_USAGE;

pause Press Enter

DROP TABLE "EMP" PURGE;

Below is the output of the script index_monitoring_demo.sql:

SQL> @index_monitoring_demo.sql
SQL> set pages 999 lines 400
SQL>
SQL> CREATE TABLE "EMP"
2 ( "ID" NUMBER,
3 "FIRST_NAME" VARCHAR2(50),
4 "LAST_NAME" VARCHAR2(50)
5 );

Table created.

SQL>
SQL> pause Press Enter
Press Enter

Table EMP is created in my schema ZEDDBA; please ensure you use an appropriate schema with the correct privileges.  Next, we insert 3 rows into the table EMP:

SQL> INSERT INTO "EMP" VALUES (1, 'Zahid', 'Anwar');

1 row created.

SQL> INSERT INTO "EMP" VALUES (2, 'Scott', 'Tiger');

1 row created.

SQL> INSERT INTO "EMP" VALUES (3, 'Joe', 'Bloggs');

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> pause Press Enter
Press Enter

With the 3 rows inserted and committed, we select from the EMP table with autotrace on, so we can see the execution plan:

SQL> set autotrace on
SQL> col id format 99
SQL> col first_name format a50
SQL> col last_Name format a50
SQL>
SQL> select * from emp;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar
2 Scott Tiger
3 Joe Bloggs


Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 201 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS STORAGE FULL| EMP | 3 | 201 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
12 recursive calls
0 db block gets
28 consistent gets
0 physical reads
0 redo size
784 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Please Note: This database is an Exadata Machine, hence the “TABLE ACCESS STORAGE FULL“, otherwise would be “TABLE ACCESS FULL“.

As we can see, the execution plan was a Full Table Scan (FTS), as this was the only access method available.  Next, we create an index named EMP_ID on the ID column:

SQL> create index emp_id on emp (id);

Index created.

SQL>
SQL> pause Press Enter
Press Enter

Next, we query the table with a predicate on the ID column:

SQL> select * from emp where id = 1;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar


Execution Plan
----------------------------------------------------------
Plan hash value: 458854468

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 67 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| EMP | 1 | 67 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | EMP_ID | 1 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=1)

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
11 recursive calls
0 db block gets
19 consistent gets
0 physical reads
0 redo size
695 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Now, we can see the Execution Plan used the newly created index.  Next, we check if the index was used:

SQL> set autotrace off
SQL>
SQL> col INDEX_NAME format a20
SQL> col TABLE_NAME format a20
SQL> col MONITORING format a15
SQL> col USED format a10
SQL> select * from USER_OBJECT_USAGE;

no rows selected

SQL>
SQL> pause Press Enter
Press Enter

We get no rows back as we didn’t turn on index monitoring, which isn’t on by default.  So next, we enable index monitoring:

SQL> alter index emp_id monitoring usage;

Index altered.

SQL>
SQL> pause Press Enter
Press Enter

Next, we check the index usage, and we can see the index is being monitored but has not yet been used:

SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        YES        NO   07/16/2018 17:53:59

SQL>
SQL> pause Press Enter
Press Enter

Next, we query the table with no predicate, to force an FTS:

SQL> set autotrace on
SQL>
SQL> select * from emp;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar
2 Scott Tiger
3 Joe Bloggs


Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 201 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS STORAGE FULL| EMP | 3 | 201 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
51 recursive calls
0 db block gets
54 consistent gets
0 physical reads
132 redo size
784 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
3 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Next, we check whether the index monitoring reflects the usage:

SQL> set autotrace off
SQL>
SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        YES        NO   07/16/2018 17:53:59

SQL>
SQL> pause Press Enter
Press Enter

Next, we query the table with a predicate, to get an Index Range Scan:

SQL> set autotrace on
SQL>
SQL> select * from emp where id = 1;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar


Execution Plan
----------------------------------------------------------
Plan hash value: 458854468

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 67 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| EMP | 1 | 67 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | EMP_ID | 1 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=1)

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
13 recursive calls
3 db block gets
24 consistent gets
0 physical reads
876 redo size
695 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Next, we check whether the index monitoring reflects the usage:

SQL> set autotrace off
SQL>
SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        YES        YES  07/16/2018 17:53:59

SQL>
SQL> pause Press Enter
Press Enter

Next, we turn off index monitoring, as we know the index has been used:

SQL> alter index emp_id nomonitoring usage;

Index altered.

SQL>
SQL> pause Press Enter
Press Enter

Next, we check to see if monitoring is now disabled:

SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        NO         YES  07/16/2018 17:53:59 07/16/2018 17:54:09

SQL>
SQL> pause Press Enter
Press Enter

Finally, we drop the demo EMP table:

SQL> DROP TABLE "EMP" PURGE;

Table dropped.

SQL>

If you found this blog post useful, please like it, follow me through the various social media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)
