Follow Database Administration Tips on Feedspot

DBA Bundle V5.2 is now available for download:
https://www.dropbox.com/s/k96rl0f4g39ukih/DBA_BUNDLE5.tar?dl=0

It comes with the following features:

- rebuild_table.sh has been totally re-developed to utilize ONLINE table rebuild features such as DBMS_REDEFINITION and ALTER TABLE MOVE ONLINE. Check this link for more details [https://dba-tips.blogspot.com/2019/05/rebuild-table-script-and-reclaim-wasted.html]

- Added the reporting of the "Top Fragmented Tables" in the daily health check report script dbdailychk.sh.

- Fixed bugs and improved the execution time of the following scripts:
dbdailychk.sh
dbalarm.sh
backup_ctrl_spf_AWR.sh
gather_stats.sh
db_locks.sh
active_sessions.sh

If you are new to the DBA Bundle, the following link will give you a detailed idea:
http://dba-tips.blogspot.com/2014/02/oracle-database-administration-scripts.html
I have written a powerful script to rebuild a table and its indexes on Oracle. I'm usually reluctant to share scripts that deal directly with data, but I thought it would be a great help for DBAs if they use it wisely.

Before you use this script, please read the full post carefully to understand how it works.

First, I'm sharing this script in the hope that it will be helpful, but without any warranty; you have to test the script yourself in a test environment before running it against production.

How it works:

The script rebuilds one table and its indexes at a time. Once you enter the OWNER and TABLE_NAME, it will do the following:

In a nutshell, it checks the available options for rebuilding the table:
Option 1: Check whether the DBMS_REDEFINITION package can be used, based on the database edition (Standard/Enterprise), and then take the user through the rest of the steps.

Option 2: If DBMS_REDEFINITION is not available in the current edition, or the user chose not to proceed with it, the script falls back to the ALTER TABLE MOVE option. If the database version is 12.2 or higher, it uses the "ALTER TABLE MOVE ONLINE" command, which rebuilds the table with negligible downtime; otherwise it uses the plain "ALTER TABLE MOVE" command, which results in downtime on the table throughout the whole rebuild operation.
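The edition/version decision described above can be sketched as a small shell function. This is an illustrative sketch only: the function name and argument format are made up and are not taken from rebuild_table.sh, which also prompts the user interactively along the way.

```shell
#!/bin/sh
# Hypothetical sketch of the rebuild-method decision described above.
# edition: ENTERPRISE or STANDARD; version: e.g. 12.2.0.1 or 11.2.0.3
pick_rebuild_method() {
  edition="$1"
  version="$2"
  major=${version%%.*}        # first version component
  rest=${version#*.}
  minor=${rest%%.*}           # second version component
  if [ "$edition" = "ENTERPRISE" ]; then
    # Online redefinition path (the user can still decline it).
    echo "DBMS_REDEFINITION"
  elif [ "$major" -gt 12 ] || { [ "$major" -eq 12 ] && [ "$minor" -ge 2 ]; }; then
    # 12.2+ can move the table online with negligible downtime.
    echo "ALTER TABLE MOVE ONLINE"
  else
    # Older releases: offline move, table locked throughout the rebuild.
    echo "ALTER TABLE MOVE"
  fi
}

pick_rebuild_method ENTERPRISE 11.2.0.3   # -> DBMS_REDEFINITION
pick_rebuild_method STANDARD   12.2.0.1   # -> ALTER TABLE MOVE ONLINE
pick_rebuild_method STANDARD   11.2.0.3   # -> ALTER TABLE MOVE
```

In the real script the edition and version would come from v$version / v$instance queries; here they are passed in as plain strings.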

The following flowchart explains the mechanism of the script in detail: [I'm grateful to draw.io for making the drawing of this flowchart easy and free of cost]



If you are still confused, read the messages the script prompts carefully and it will explain itself.

Here is the download link:
https://www.dropbox.com/s/bmgbc0u76okokcs/rebuild_table.sh?dl=0

In case the download link is broken, you can copy the script from the GitHub version below:
I had a complaint from one of the readers that the dbalarm DB-monitoring script was taking a very long time to run against one RAC DB [11.2.0.3], especially in the part that reports locked sessions on the database. When I dug deeper, I found that most of the time was being consumed by this statement:

select
substr(s.INST_ID||'|'||s.OSUSER||'/'||s.USERNAME||'| '||s.sid||','||s.serial#||' |'||substr(s.MACHINE,1,22)||'|'||substr(s.MODULE,1,18),1,75)"I|OS/DB USER|SID,SER|MACHN|MOD"
,substr(s.status||'|'||round(w.WAIT_TIME_MICRO/1000000)||'|'||LAST_CALL_ET||'|'||to_char(LOGON_TIME,'ddMon HH24:MI'),1,34) "ST|WAITD|ACT_SINC|LOGIN"
,substr(w.event,1,24) "EVENT"
,s.PREV_SQL_ID||'|'||s.SQL_ID||'|'||round(w.TIME_REMAINING_MICRO/1000000) "PREV|CURRENT_SQL|REMAIN_SEC"
from    gv$session s, gv$session_wait w
where   s.sid in (select distinct FINAL_BLOCKING_SESSION from gv$session where FINAL_BLOCKING_SESSION is not null)
and     s.USERNAME is not null
and     s.sid=w.sid
and     s.FINAL_BLOCKING_SESSION is null
/

Elapsed: 00:00:11.34

The execution plan showed this:
---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name            | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                 |     1 |   271 |     1 (100)| 00:00:01 |        |      |            |
|*  1 |  FILTER                         |                 |       |       |            |          |        |      |            |
|*  2 |   HASH JOIN                     |                 |   100 | 27100 |     1 (100)| 00:00:01 |        |      |            |
|   3 |    PX COORDINATOR               |                 |     1 |   198 |     0   (0)| 00:00:01 |        |      |            |
|   4 |     PX SEND QC (RANDOM)         | :TQ20000        |     1 |   270 |     0   (0)| 00:00:01 |  Q2,00 | P->S | QC (RAND)  |
|*  5 |      VIEW                       | GV$SESSION      |       |       |            |          |  Q2,00 | PCWP |            |
|   6 |       NESTED LOOPS              |                 |     1 |   270 |     0   (0)| 00:00:01 |  Q2,00 | PCWP |            |
|   7 |        NESTED LOOPS             |                 |     1 |   257 |     0   (0)| 00:00:01 |  Q2,00 | PCWP |            |
|*  8 |         FIXED TABLE FULL        | X$KSUSE         |     1 |   231 |     0   (0)| 00:00:01 |  Q2,00 | PCWP |            |
|*  9 |         FIXED TABLE FIXED INDEX | X$KSLWT (ind:1) |     1 |    26 |     0   (0)| 00:00:01 |  Q2,00 | PCWP |            |
|* 10 |        FIXED TABLE FIXED INDEX  | X$KSLED (ind:2) |     1 |    13 |     0   (0)| 00:00:01 |  Q2,00 | PCWP |            |
|  11 |    PX COORDINATOR               |                 |   100 |  7300 |     0   (0)| 00:00:01 |        |      |            |
|  12 |     PX SEND QC (RANDOM)         | :TQ30000        |   100 | 12500 |     0   (0)| 00:00:01 |  Q3,00 | P->S | QC (RAND)  |
|  13 |      VIEW                       | GV$SESSION_WAIT |       |       |            |          |  Q3,00 | PCWP |            |
|  14 |       NESTED LOOPS              |                 |   100 | 12500 |     0   (0)| 00:00:01 |  Q3,00 | PCWP |            |
|  15 |        FIXED TABLE FULL         | X$KSLWT         |   100 |  7800 |     0   (0)| 00:00:01 |  Q3,00 | PCWP |            |
|* 16 |        FIXED TABLE FIXED INDEX  | X$KSLED (ind:2) |     1 |    47 |     0   (0)| 00:00:01 |  Q3,00 | PCWP |            |
|  17 |   PX COORDINATOR                |                 |     1 |    13 |     0   (0)| 00:00:01 |        |      |            |
|  18 |    PX SEND QC (RANDOM)          | :TQ10000        |     1 |    91 |     0   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|* 19 |     VIEW                        | GV$SESSION      |       |       |            |          |  Q1,00 | PCWP |            |
|  20 |      NESTED LOOPS               |                 |     1 |    91 |     0   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|  21 |       NESTED LOOPS              |                 |     1 |    78 |     0   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|* 22 |        FIXED TABLE FULL         | X$KSUSE         |     1 |    52 |     0   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|* 23 |        FIXED TABLE FIXED INDEX  | X$KSLWT (ind:1) |     1 |    26 |     0   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|* 24 |       FIXED TABLE FIXED INDEX   | X$KSLED (ind:2) |     1 |    13 |     0   (0)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------------

When I traced the session, I found the following in the trace file:

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  FILTER  (cr=0 pr=0 pw=0 time=11647894 us)
      2818       2818       2818   HASH JOIN  (cr=0 pr=0 pw=0 time=74814 us cost=1 size=27100 card=100)
      1875       1875       1875    PX COORDINATOR  (cr=0 pr=0 pw=0 time=35972 us cost=0 size=198 card=1)
         0          0          0     PX SEND QC (RANDOM) :TQ20000 (cr=0 pr=0 pw=0 time=0 us cost=0 size=270 card=1)
         0          0          0      VIEW  GV$SESSION (cr=0 pr=0 pw=0 time=0 us)
         0          0          0       NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=270 card=1)
         0          0          0        NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=257 card=1)
         0          0          0         FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=0 us cost=0 size=231 card=1)
         0          0          0         FIXED TABLE FIXED INDEX X$KSLWT (ind:1) (cr=0 pr=0 pw=0 time=0 us cost=0 size=26 card=1)
         0          0          0        FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=0 us cost=0 size=13 card=1)
      2018       2018       2018    PX COORDINATOR  (cr=0 pr=0 pw=0 time=5021 us cost=0 size=7300 card=100)
         0          0          0     PX SEND QC (RANDOM) :TQ30000 (cr=0 pr=0 pw=0 time=0 us cost=0 size=12500 card=100)
         0          0          0      VIEW  GV$SESSION_WAIT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0       NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=12500 card=100)
         0          0          0        FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=0 us cost=0 size=7800 card=100)
         0          0          0        FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=0 us cost=0 size=47 card=1)
         0          0          0   PX COORDINATOR  (cr=0 pr=0 pw=0 time=11562142 us cost=0 size=13 card=1)
         0          0          0    PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=7157212 us cost=0 size=91 card=1)
         0          0          0     VIEW  GV$SESSION (cr=0 pr=0 pw=0 time=7156047 us)
   1488896    1488896    1488896      NESTED LOOPS  (cr=0 pr=0 pw=0 time=6553126 us cost=0 size=91 card=1)
   1488896    1488896    1488896       NESTED LOOPS  (cr=0 pr=0 pw=0 time=5569809 us cost=0 size=78 card=1)
   1488896    1488896    1488896        FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=3967204 us cost=0 size=52 card=1)
   1488896    1488896    1488896        FIXED TABLE FIXED INDEX X$KSLWT (ind:1) (cr=0 pr=0 pw=0 time=1142831 us cost=0 size=26 card=1)
   1488896    1488896    1488896       FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=567563 us cost=0 size=13 card=1)

OK. Now I can see that most of the time is being consumed by the fixed-table operations in the last branch of the plan: the X$KSUSE full scan and the X$KSLWT and X$KSLED fixed-index lookups, each of which processed 1,488,896 rows.

Now, let's run the same statement again, but using the Rule-Based Optimizer (RBO) this time:

select  /*+RULE*/
substr(s.INST_ID||'|'||s.OSUSER||'/'||s.USERNAME||'| '||s.sid||','||s.serial#||' |'||substr(s.MACHINE,1,22)||'|'||substr(s.MODULE,1,18),1,75)"I|OS/DB USER|SID,SER|MACHN|MOD"
,substr(s.status||'|'||round(w.WAIT_TIME_MICRO/1000000)||'|'||LAST_CALL_ET||'|'||to_char(LOGON_TIME,'ddMon HH24:MI'),1,34) "ST|WAITD|ACT_SINC|LOGIN"
,substr(w.event,1,24) "EVENT"
,s.PREV_SQL_ID||'|'||s.SQL_ID||'|'||round(w.TIME_REMAINING_MICRO/1000000) "PREV|CURRENT_SQL|REMAIN_SEC"
from    gv$session s, gv$session_wait w
where   s.sid in (select distinct FINAL_BLOCKING_SESSION from gv$session where FINAL_BLOCKING_SESSION is not null)
and     s.USERNAME is not null
and     s.sid=w.sid
and     s.FINAL_BLOCKING_SESSION is null
/


Elapsed: 00:00:00.09

Wow, the Rule-Based Optimizer (RBO) ran the statement in 0.09 seconds, much faster than the CBO, which ran it in 11 seconds. How did this happen?

The following was the execution plan when using the RBO:
--------------------------------------------------------------------------------------
| Id  | Operation                        | Name            |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                 |        |      |            |
|   1 |  MERGE JOIN                      |                 |        |      |            |
|   2 |   MERGE JOIN                     |                 |        |      |            |
|   3 |    SORT JOIN                     |                 |        |      |            |
|   4 |     PX COORDINATOR               |                 |        |      |            |
|   5 |      PX SEND QC (RANDOM)         | :TQ10000        |  Q1,00 | P->S | QC (RAND)  |
|   6 |       VIEW                       | GV$SESSION_WAIT |  Q1,00 | PCWP |            |
|   7 |        MERGE JOIN                |                 |  Q1,00 | PCWP |            |
|   8 |         FIXED TABLE FULL         | X$KSLED         |  Q1,00 | PCWP |            |
|*  9 |         SORT JOIN                |                 |  Q1,00 | PCWP |            |
|  10 |          FIXED TABLE FULL        | X$KSLWT         |  Q1,00 | PCWP |            |
|* 11 |    SORT JOIN                     |                 |        |      |            |
|  12 |     PX COORDINATOR               |                 |        |      |            |
|  13 |      PX SEND QC (RANDOM)         | :TQ20000        |  Q2,00 | P->S | QC (RAND)  |
|* 14 |       VIEW                       | GV$SESSION      |  Q2,00 | PCWP |            |
|  15 |        MERGE JOIN                |                 |  Q2,00 | PCWP |            |
|  16 |         SORT JOIN                |                 |  Q2,00 | PCWP |            |
|  17 |          MERGE JOIN              |                 |  Q2,00 | PCWP |            |
|  18 |           SORT JOIN              |                 |  Q2,00 | PCWP |            |
|  19 |            FIXED TABLE FULL      | X$KSLWT         |  Q2,00 | PCWP |            |
|* 20 |           SORT JOIN              |                 |  Q2,00 | PCWP |            |
|  21 |            FIXED TABLE FULL      | X$KSLED         |  Q2,00 | PCWP |            |
|* 22 |         SORT JOIN                |                 |  Q2,00 | PCWP |            |
|* 23 |          FIXED TABLE FULL        | X$KSUSE         |  Q2,00 | PCWP |            |
|* 24 |   SORT JOIN                      |                 |        |      |            |
|  25 |    VIEW                          | VW_NSO_1        |        |      |            |
|  26 |     SORT UNIQUE                  |                 |        |      |            |
|  27 |      PX COORDINATOR              |                 |        |      |            |
|  28 |       PX SEND QC (RANDOM)        | :TQ30000        |  Q3,00 | P->S | QC (RAND)  |
|* 29 |        VIEW                      | GV$SESSION      |  Q3,00 | PCWP |            |
|  30 |         MERGE JOIN               |                 |  Q3,00 | PCWP |            |
|  31 |          SORT JOIN               |                 |  Q3,00 | PCWP |            |
|  32 |           MERGE JOIN             |                 |  Q3,00 | PCWP |            |
|  33 |            SORT JOIN             |                 |  Q3,00 | PCWP |            |
|  34 |             FIXED TABLE FULL     | X$KSLWT         |  Q3,00 | PCWP |            |
|* 35 |            SORT JOIN             |                 |  Q3,00 | PCWP |            |
|  36 |             FIXED TABLE FULL     | X$KSLED         |  Q3,00 | PCWP |            |
|* 37 |          SORT JOIN               |                 |  Q3,00 | PCWP |            |
|* 38 |           FIXED TABLE FULL       | X$KSUSE         |  Q3,00 | PCWP |            |
--------------------------------------------------------------------------------------

Checking the trace for the execution time when using RBO:

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  MERGE JOIN  (cr=0 pr=0 pw=0 time=87623 us)
         1          1          1   MERGE JOIN  (cr=0 pr=0 pw=0 time=65256 us)
         7          7          7    SORT JOIN (cr=0 pr=0 pw=0 time=23492 us)
      2024       2024       2024     PX COORDINATOR  (cr=0 pr=0 pw=0 time=14913 us)
         0          0          0      PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
         0          0          0       VIEW  GV$SESSION_WAIT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0        MERGE JOIN  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0         FIXED TABLE FULL X$KSLED (cr=0 pr=0 pw=0 time=0 us)
         0          0          0         SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0          FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=0 us)
         1          1          1    SORT JOIN (cr=0 pr=0 pw=0 time=41766 us)
      1877       1877       1877     PX COORDINATOR  (cr=0 pr=0 pw=0 time=29246 us)
         0          0          0      PX SEND QC (RANDOM) :TQ20000 (cr=0 pr=0 pw=0 time=0 us)
         0          0          0       VIEW  GV$SESSION (cr=0 pr=0 pw=0 time=0 us)
         0          0          0        MERGE JOIN  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0         SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0          MERGE JOIN  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0           SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0            FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0           SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0            FIXED TABLE FULL X$KSLED (cr=0 pr=0 pw=0 time=0 us)
         0          0          0         SORT JOIN (cr=0 pr=0 pw=0 time=0 us)
         0          0          0          FIXED TABLE FULL X$KSUSE (cr=0 pr=0 pw=0 time=0 us)
         0          0          0   SORT JOIN (cr=0 pr=0 pw=0 time=22335 us)
         0          0          0    VIEW  VW_NSO_1 (cr=0 pr=0 pw=0 time=22327 us)
         0          0          0     SORT UNIQUE (cr=0 pr=0 pw=0 time=22325 us)
         0          0          0      PX COORDINATOR  (cr=0 pr=0 pw=0 time=22293 us)
         0          0          0       PX SEND QC (RANDOM) :TQ30000 (cr=0 pr=0 pw=0 time=0 us)
         0          0          0        VIEW  GV$SESSION (cr=0 pr=0 pw=0 time=0 us)
         0          0          0         MERGE JOIN  (cr=0 pr=0 pw=0 time=0 us)
While I was testing something on a 12.1 test database, I got the error below whenever I tried to execute certain admin commands:

SQL> drop user xx;
drop user xx
*
ERROR at line 1:
ORA-04088: error during execution of trigger 'SYS.XDB_PI_TRIG'
ORA-00604: error occurred at recursive SQL level 1
ORA-06550: line 3, column 13:
PLS-00302: component 'IS_VPD_ENABLED' must be declared
ORA-06550: line 3, column 5:
PL/SQL: Statement ignored


SQL> alter table bb move online compress;  
alter table bb move online compress
            *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-04088: error during execution of trigger 'SYS.XDB_PI_TRIG'
ORA-00604: error occurred at recursive SQL level 2
ORA-06550: line 3, column 13:
PLS-00302: component 'IS_VPD_ENABLED' must be declared
ORA-06550: line 3, column 5:
PL/SQL: Statement ignored

The above is just a sample; the error was showing up with lots of admin commands!

I checked the trigger SYS.XDB_PI_TRIG that was causing this error and it was already VALID, so I decided to disable it, after which the admin commands ran as usual:

SQL> alter trigger SYS.XDB_PI_TRIG disable;

Trigger altered.


The previously failing admin commands now ran smoothly:

SQL> alter table bb move online compress; 

Table altered.

Frankly speaking, I tried to google that error without any success. I didn't dig deeper, so I took the shortest (laziest) way and disabled the offending trigger as a dirty fix. The database where I disabled that trigger was a test DB; most probably one of my fancy test scenarios caused this issue.

In case you hit the same error on a production database, I strongly recommend contacting Oracle Support before disabling the above-mentioned trigger.
As you know, AWS provides two types of cloud database services (EC2 & RDS). While EC2 lets you have operating system (OS) access to the DB machine, including root access, RDS doesn't give you any kind of OS access. In RDS, the master admin user that AWS provides has limited privileges (it has neither SYSDBA nor the DBA role), as the database is supposed to be maintained by AWS. This makes even simple admin tasks, like importing a schema, a bit challenging on RDS: without OS access you cannot use commands like impdp or imp, which forces you to explore the alternative Oracle packages that can do the job from inside the DB. And yes, Oracle has many built-in packages that allow you to perform lots of tasks without needing OS access.

Actually, Amazon has already documented importing a schema into RDS well at the link below, but I thought I'd explain it in more detail, as a real-world task:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html

Task Summary:
I'll export the "EPS_OWNER" schema from an 11.2.0.3 database that resides on an AWS EC2 Linux instance, upload the export dump file to an S3 bucket, then import the dump file into a 12.2.0.1 AWS RDS database under the "EPS" schema.

Prerequisites:
- An AWS S3 bucket must be created, and both the source EC2 instance and the target RDS instance must have read/write access to it through a role. [An S3 bucket is a kind of shared storage between AWS cloud systems where you can upload/download files; it will be used during this demo to share the export dump file between the EC2 source instance and the RDS target instance.]

Step 1: Export the schema on the source [EC2 instance]:
I already have OS access to the oracle user on the source EC2 instance, so I used the exportdata script to export the EPS_OWNER schema. It generated the pre and post scripts to be run before and after the import on the target, but because I'll import into a schema with a different name, I adjusted those scripts by replacing the source schema name "EPS_OWNER" with the target schema name "EPS".
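That schema-name adjustment can be done with a quick sed pass over the generated scripts. A minimal sketch, assuming hypothetical file names — use whatever names exportdata actually produced:

```shell
#!/bin/sh
# Replace the source schema name with the target schema name in the
# pre/post import scripts generated by exportdata. The file names
# below are placeholders, not the real generated names.
SRC_SCHEMA=EPS_OWNER
TGT_SCHEMA=EPS
for f in pre_import.sql post_import.sql; do
  [ -f "$f" ] || continue
  # EPS_OWNER -> EPS everywhere in the script
  sed -i "s/${SRC_SCHEMA}/${TGT_SCHEMA}/g" "$f"
done
```

Review the result before running it on the target; a blanket substitution will also rewrite the schema name inside comments or string literals, which is usually what you want here but worth a glance.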

Step 2: Upload the export file to the S3 bucket from the source [EC2 instance]:
In case the bucket is not yet configured on the source machine, you can use the following AWS CLI command to configure it, providing the "Access Key" and "Secret Access Key":

  # aws configure
  AWS Access Key ID [None]: AEFETFWNINTIHMLBWII5Q
  AWS Secret Access Key [None]: EdfefrgzA1+kEtfs2kg43RtdSv/Il/wwxtD6vthty
  Default region name [None]: 
  Default output format [None]: 

 Upload the export dump files to the S3 bucket:
  # cd /backup
  # aws s3 cp EXPORT_eps_owner_STG_04-03-19.dmp  s3://eps-bucket

Step 3: Download the export file from the S3 bucket to the target [RDS instance]:
Remember, there is no OS access on RDS, so we will connect to the database with a client tool such as SQL Developer, using the RDS master user.

Use the AWS built-in package "rdsadmin.rdsadmin_s3_tasks" to download the dump file from S3 bucket to DATA_PUMP_DIR:

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    =>  'eps-bucket',       
      p_directory_name =>  'DATA_PUMP_DIR') 
   AS TASK_ID FROM DUAL; 

It will return a TASK ID:

TASK_ID                                                                        
--------------------------
1554286165468-636   

Use this TASK_ID to monitor the download progress by running this statement:
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1554286165468-636.log'));

Once the download completes, list the downloaded files under DATA_PUMP_DIR using this query:
select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

Now the AWS-related tasks are done; let's jump to the import part, which is purely Oracle's.

Step 4: Create the tablespace and the target schema user on the target [RDS instance]:
In case the target user does not yet exist on the target RDS database, you can go ahead and create it along with its tablespace.

-- Create a tablespace: [Using Oracle Managed Files OMF]
CREATE SMALLFILE TABLESPACE "TBS_EPS" DATAFILE SIZE 100M AUTOEXTEND ON NEXT 100M LOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;

-- Create the user: [Here the user as per my business requirements will be different than the original user on the Source DB]
CREATE USER EPS IDENTIFIED  BY "test123" DEFAULT TABLESPACE TBS_EPS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON TBS_EPS PROFILE DEFAULT;
GRANT CREATE SESSION TO EPS;
GRANT CREATE JOB TO EPS;
GRANT CREATE PROCEDURE TO EPS;
GRANT CREATE SEQUENCE TO EPS;
GRANT CREATE TABLE TO EPS;

Step 5: Import the dump file on the target [RDS instance]:
As the RDS master user, run:

DECLARE
hdnl NUMBER;
BEGIN
hdnl := DBMS_DATAPUMP.OPEN( operation => 'IMPORT', job_mode => 'SCHEMA', job_name=>null);
DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'EXPORT_eps_owner_STG_04-03-19.dmp', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_dump_file, reusefile => 1);
--DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'EXPORT_eps_owner_STG_04-03-19.log', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
--DBMS_DATAPUMP.METADATA_FILTER(hdnl,'SCHEMA_EXPR','IN (''EPS_OWNER'')');
--DBMS_DATAPUMP.SET_PARAMETER(hdnl,'TABLE_EXISTS_ACTION','SKIP');
DBMS_DATAPUMP.METADATA_REMAP(hdnl,'REMAP_SCHEMA','EPS_OWNER','EPS');
DBMS_DATAPUMP.START_JOB(hdnl);
END;
/     

The commented-out (--) parameters are there for reference:

--DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'EXPORT_eps_owner_STG_04-03-19.log', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
In case you want to write the import operation log into a log file.

--DBMS_DATAPUMP.METADATA_FILTER(hdnl,'SCHEMA_EXPR','IN (''EPS_OWNER'')');
Use this in case the exported schema "EPS_OWNER" is to be imported into an already existing schema with the same name, which is not my case here. If you use this parameter, you should NOT use DBMS_DATAPUMP.METADATA_REMAP with it.

--DBMS_DATAPUMP.SET_PARAMETER(hdnl,'TABLE_EXISTS_ACTION','SKIP');
This tells the import package what to do if it finds that a table already exists during the import. It accepts the following values:
SKIP     --> Don't import anything into the already existing table.
TRUNCATE --> Truncate the already existing table, then import the data.
APPEND   --> Leave the existing data intact and load the new data alongside it.

In case you use wrong parameters or a bad combination, e.g. using METADATA_FILTER instead of METADATA_REMAP when importing into a schema with a different name, you will get a bunch of errors similar to the cute, unclear ones below:

ORA-31627: API call succeeded but more information is available
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7143
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7137

ORA-06512: at line 7

Check the progress of the imported objects:
select object_type,count(*) from dba_objects where owner='EPS' group by object_type;

Run the after-import script that was generated by the exportdata script in Step 1, after replacing the original exported schema name EPS_OWNER with the target imported schema name EPS.

Check the invalid objects:
col object_name for a45
select object_name,object_type,status from dba_objects where owner='EPS' and status<>'VALID';

Step 6: [Optional] Delete the dump file from the target [RDS instance]:

exec utl_file.fremove('DATA_PUMP_DIR','EXPORT_eps_owner_STG_04-03-19.dmp');

select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;


References:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html
"Hello brother" were the last words said by Daoud Nabi "the first victim of NZ Mosque shooting" to his killer before he is brutally gunned down!
مرحبا أخى هى اخر كلمات قالها دواود نبى (رحمه الله عليه) "أول ضحايا حوادث إطلاق النار فى مساجد نيوزيلندا" ليستـقبل بها قاتله الذى أرداه قتيلا بوحشية.

Image by an Indonesian artist.
https://www.instagram.com/explore/tags/hellobrother
The latest version of DBA Bundle is now available:
https://www.dropbox.com/s/k96rl0f4g39ukih/DBA_BUNDLE5.tar?dl=0

Many awesome features have been added to the bundle, including but not limited to:
  • Database monitoring script [dbalarm] can now send most of the alerts in HTML format.
  • Database Health Check script [dbdailychk] can now send the report in HTML format.
  • Added more options to the audit script zAngA_zAngA.sh to make the displayed data more focused.
  • Fixed minor bugs in many scripts and enhanced the remaining scripts to work smoothly across different environments.

If you are new to the DBA Bundle, please follow this link to read more about its features and how to use it:
The HTMLization campaign continues on the DBA Bundle scripts, as I received numerous requests to make the scripts send e-mail alerts/reports in HTML format. I understand that HTML e-mails are friendlier to read, especially on mobile phones.

The script will now automatically check whether your machine has the "sendmail" package required to send HTML e-mails. If it is there, the script will format the content as HTML and send you an HTML-formatted e-mail; if the sendmail package is not there, it will revert to the old-fashioned text e-mail, without you missing any alert or report.

You can enable or disable the automatic sending of HTML e-mails by setting the following parameter under the THRESHOLDS section to Y or N:

# #########################
# THRESHOLDS:
# #########################
# Send an E-mail for each THRESHOLD if been reached:
# ADJUST the following THRESHOLD VALUES as per your requirements:

HTMLENABLE=Y            # Enable HTML Email Format                                      [DB]

....
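The fallback logic behind that parameter can be sketched like this. This is a simplified illustration under stated assumptions: pick_mail_mode is a made-up helper name, not a function from dbdailychk.sh, and the real script does more than choose a mode.

```shell
#!/bin/sh
# Decide the e-mail format: HTML only when the feature is enabled AND
# a sendmail binary is actually present on the box; otherwise plain text.
# pick_mail_mode is a hypothetical helper, not part of the real script.
pick_mail_mode() {
  htmlenable="$1"     # Y or N (the HTMLENABLE threshold above)
  sendmail_bin="$2"   # usually /usr/sbin/sendmail
  if [ "$htmlenable" = "Y" ] && [ -x "$sendmail_bin" ]; then
    echo HTML
  else
    echo TEXT
  fi
}

pick_mail_mode Y /usr/sbin/sendmail   # HTML if sendmail is installed
pick_mail_mode N /usr/sbin/sendmail   # -> TEXT
```

Either way an e-mail goes out; the check only selects the formatting, so disabling HTML (or a missing sendmail) never costs you an alert.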

The received HTML report will look like this: [Excerpt]



To download the script:
https://www.dropbox.com/s/w1dpw3iynphm07t/dbdailychk.sh?dl=0

To Read the complete description and how to use the script:
http://dba-tips.blogspot.com/2015/05/oracle-database-health-check-script.html
Now the dbalarm script for database and server monitoring sends the vast majority of its e-mail alerts in HTML format.

For instance, instead of receiving a locked session E-mail alert in TEXT format like this:

You will receive the E-mail alert in HTML format like below:
The script will automatically check whether the "sendmail" package that sends e-mails in HTML format is installed on your server. If it is, you will receive the e-mail alerts in HTML format; otherwise it will automatically revert to the text-format version. You may think it's a small feature, but this feature alone cost me days of coding to make it work :-)

To download the script follow this link:
https://www.dropbox.com/s/a8p5q454dw01u53/dbalarm.sh?dl=0

If you are not yet familiar with dbalarm script and want to understand how to use it and what it does, please follow this link:
http://dba-tips.blogspot.com/2014/02/database-monitoring-script-for-ora-and.html

If you have any concern, suggestion, or something that didn't work for you, please let me know.
DBA_BUNDLE5 is now available for download:
https://www.dropbox.com/s/k96rl0f4g39ukih/DBA_BUNDLE5.tar?dl=0

It comes with the following new features:
  • Database monitoring script [dbalarm] can perform the following additional tasks:
    • Monitor the ASM instance alert log and report the errors.
    • Monitor the Grid Infrastructure alert log and report errors and the following events:
      • Shutdown/startup events.
      • Node eviction events.
      • Network IP conflict.
      • Heartbeat failures.
      • Service failure events.
    • Monitor the Golden Gate log [if installed] for errors and process ABENDED events.
    • Monitor the dmesg log [device driver messages] for errors.
  • The bundle main "environment setup" script [aliases_DBA_BUNDLE] can locate and create aliases for:
    • ASM instance alert log. [asmalert alias]
    • Grid Infrastructure/Clusterware alert log. [raclog alias]
    • GRID_BASE and GRID_HOME locations.

  • Fixed bugs in many scripts and enhanced the rest of the scripts to work smoothly on different environments.

Lots of features are currently being tested and will be published in the coming weeks, so stay enthusiastic about release 5.1.

To learn more about DBA BUNDLE and its features, please follow this link:
http://dba-tips.blogspot.com/2014/02/oracle-database-administration-scripts.html