I have written a powerful script to rebuild a table and its indexes on Oracle. I'm usually reluctant to share scripts that deal directly with data, but I thought it would be a great help for DBAs if they use it wisely.
Before you use this script, please read the full post carefully to understand how it works.
First, I'm sharing this script in the hope that it will be helpful to you, but without any warranty; you must test the script yourself in a test environment before running it against production.
How it works:
The script can rebuild one table and its indexes at a time. Once you enter the OWNER and TABLE_NAME, it will do the following:
In a nutshell, it will check the available options for rebuilding the table:
Option 1: It will check whether the DBMS_REDEFINITION package can be used, based on the database edition (Standard/Enterprise), and then take the user through the rest of the steps.
Option 2: If DBMS_REDEFINITION is not available in the current edition, or the user chose not to proceed with DBMS_REDEFINITION, the script will fall back to the ALTER TABLE MOVE option. If the database version is 12.2 or higher, the script will use the "ALTER TABLE MOVE ONLINE" command, which rebuilds the table with negligible downtime; otherwise it will use the "ALTER TABLE MOVE" command, which will block DML on the table throughout the whole rebuild operation.
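For illustration only, here is a minimal manual sketch of what the Option 2 path boils down to; the table, tablespace, and index names below are hypothetical, and the script itself discovers the table's indexes automatically:

```sql
-- 12.2 and higher: online move, negligible downtime
-- (DML is allowed during the move and the indexes are maintained):
ALTER TABLE scott.emp MOVE ONLINE TABLESPACE users;

-- Pre-12.2: offline move; DML on the table is blocked throughout the rebuild:
-- ALTER TABLE scott.emp MOVE TABLESPACE users;

-- An offline MOVE leaves the table's indexes in an UNUSABLE state,
-- so they must be rebuilt afterwards:
ALTER INDEX scott.emp_pk REBUILD ONLINE;
```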
The following flowchart explains the mechanism of the script in detail: [I'm grateful to draw.io for making the drawing of this flowchart easy and free of cost for me]
If you are still confused, read the script prompted messages carefully and it will explain itself.
I received a complaint from one of the readers that the dbalarm script for monitoring the DB was taking a very long time to run against one RAC DB [18.104.22.168], especially in the part that reports locked sessions on the database. When I dug deeper, I found that most of the time was being consumed by this statement:
select substr(s.INST_ID||'|'||s.OSUSER||'/'||s.USERNAME||'| '||s.sid||','||s.serial#||' |'||substr(s.MACHINE,1,22)||'|'||substr(s.MODULE,1,18),1,75) "I|OS/DB USER|SID,SER|MACHN|MOD",
       substr(s.status||'|'||round(w.WAIT_TIME_MICRO/1000000)||'|'||LAST_CALL_ET||'|'||to_char(LOGON_TIME,'ddMon HH24:MI'),1,34) "ST|WAITD|ACT_SINC|LOGIN",
       substr(w.event,1,24) "EVENT",
       s.PREV_SQL_ID||'|'||s.SQL_ID||'|'||round(w.TIME_REMAINING_MICRO/1000000) "PREV|CURRENT_SQL|REMAIN_SEC"
from   gv$session s, gv$session_wait w
where  s.sid in (select distinct FINAL_BLOCKING_SESSION from gv$session where FINAL_BLOCKING_SESSION is not null)
and    s.USERNAME is not null
and    s.sid = w.sid
and    s.FINAL_BLOCKING_SESSION is null
/
While I was testing something on a 12.1 test database, I got the below error whenever I tried to execute specific admin commands:
SQL> drop user xx;
drop user xx
*
ERROR at line 1:
ORA-04088: error during execution of trigger 'SYS.XDB_PI_TRIG'
ORA-00604: error occurred at recursive SQL level 1
ORA-06550: line 3, column 13:
PLS-00302: component 'IS_VPD_ENABLED' must be declared
ORA-06550: line 3, column 5:
PL/SQL: Statement ignored
SQL> alter table bb move online compress;
alter table bb move online compress
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-04088: error during execution of trigger 'SYS.XDB_PI_TRIG'
ORA-00604: error occurred at recursive SQL level 2
ORA-06550: line 3, column 13:
PLS-00302: component 'IS_VPD_ENABLED' must be declared
ORA-06550: line 3, column 5:
PL/SQL: Statement ignored
The above was just a sample; the error was showing up with lots of admin commands!
I checked the trigger SYS.XDB_PI_TRIG which was causing this error, and it was already valid, so I decided to DISABLE it:
SQL> alter trigger SYS.XDB_PI_TRIG disable;
After disabling the trigger, the previously failing admin commands ran smoothly:
SQL> alter table bb move online compress;
Frankly speaking, I googled that error without any success. I didn't dig deeper, so I took the shortest (laziest) way and disabled the offending trigger as a dirty fix. The database where I disabled that trigger was a test DB; most probably one of my fancy test scenarios caused this issue to happen.
In case you get the same error on a Production Database, I strongly recommend contacting Oracle Support before disabling the above-mentioned trigger.
As you know, AWS provides two types of cloud database services (EC2 & RDS). While EC2 gives you Operating System (OS) access to the DB machine, including root access, RDS doesn't give you any kind of OS access. In RDS, the master admin user which AWS provides to you has the least admin privileges (it has neither SYSDBA nor the DBA role), as the database is supposed to be maintained by AWS. This makes even simple admin tasks, like importing a schema, a bit challenging on RDS: without OS access you won't be able to use commands like impdp or imp, which forces you to start exploring the alternative Oracle packages that can do this job from inside the DB. And yes, Oracle has many built-in packages that allow you to perform lots of tasks without the need for OS access.
Task Summary: I'll be exporting the "EPS_OWNER" schema from a 22.214.171.124 database residing on an AWS EC2 Linux instance, upload the export dump file to an S3 bucket, then import the dump file into the "EPS" schema on a 126.96.36.199 AWS RDS database.
Prerequisites: - An AWS S3 bucket must be created, and both the Source EC2 and the Target RDS must have read/write access to it through a role. [An S3 bucket is a kind of shared storage between AWS cloud systems to which you can upload/download files; it will be used during this demo to share the export dump file between the EC2 source instance and the RDS target instance.]
Step1: Export the schema on the Source [EC2 instance]: I already have access to the oracle OS user on the source EC2 instance, so I used the exportdata script to export the EPS_OWNER schema. It generated the pre and post scripts to run before and after the import task on the target, but because I'll import into a schema with a different name, I adjusted those scripts by replacing the source schema name "EPS_OWNER" with the target schema name "EPS".
Step2: Upload the export file to the S3 Bucket from the Source [EC2 instance]: In case the bucket is not yet configured on the source machine, you can use the following AWS CLI command to configure it, providing the bucket's "Access Key" and "Secret Access Key":
# aws configure
AWS Access Key ID [None]: AEFETFWNINTIHMLBWII5Q
AWS Secret Access Key [None]: EdfefrgzA1+kEtfs2kg43RtdSv/Il/wwxtD6vthty
Default region name [None]:
Default output format [None]:
Upload the export dump files to the S3 bucket:
# cd /backup
# aws s3 cp EXPORT_eps_owner_STG_04-03-19.dmp s3://eps-bucket
Step3: Download the export file from the S3 Bucket to the Target [RDS instance]:
Remember, there is no OS access on RDS, so we will connect to the database using a tool such as SQL Developer with the RDS master user.
Use the AWS built-in package "rdsadmin.rdsadmin_s3_tasks" to download the dump file from S3 bucket to DATA_PUMP_DIR:
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3( p_bucket_name => 'eps-bucket', p_directory_name => 'DATA_PUMP_DIR') AS TASK_ID FROM DUAL;
Use the returned TASK_ID (it is embedded in the task log file name) to monitor the download progress by running this statement:
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1554286165468-636.log'));
Once the download is complete, query the downloaded files under DATA_PUMP_DIR using this query:
select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;
Now the AWS related tasks are done, let's jump to the import part which is purely Oracle's.
Step4: Create the tablespace and the target schema user on the Target [RDS instance]:
In case the target user does not yet exist on the target RDS database, you can go ahead and create it along with its tablespace.
-- Create a tablespace: [Using Oracle Managed Files (OMF)]
CREATE SMALLFILE TABLESPACE "TBS_EPS" DATAFILE SIZE 100M AUTOEXTEND ON NEXT 100M
LOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;
-- Create the user: [Here, per my business requirements, the user will be different from the original user on the Source DB]
CREATE USER EPS IDENTIFIED BY "test123" DEFAULT TABLESPACE TBS_EPS
  TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON TBS_EPS PROFILE DEFAULT;
GRANT CREATE SESSION TO EPS;
GRANT CREATE JOB TO EPS;
GRANT CREATE PROCEDURE TO EPS;
GRANT CREATE SEQUENCE TO EPS;
GRANT CREATE TABLE TO EPS;
Step5: Import the dump file on the Target [RDS instance]:
The hashed (commented-out) parameters are there for reference:
--DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'EXPORT_eps_owner_STG_04-03-19.log', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
In case you want to write the import operation log into a log file.
--DBMS_DATAPUMP.METADATA_FILTER(hdnl,'SCHEMA_EXPR','IN (''EPS_OWNER'')');
Use this parameter in case the exported schema "EPS_OWNER" will be imported into an already existing schema with the same name, which is not my case here. If you use this parameter, you should NOT use DBMS_DATAPUMP.METADATA_REMAP along with it.
This tells the import package what to do if it finds that a table already exists during the import; it accepts the following values:
SKIP --> Don't import anything into the already existing table.
TRUNCATE --> Truncate the already existing table and import the data.
APPEND --> Leave the currently existing data intact and load the new data alongside it.
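Putting the pieces together, here is a minimal sketch (not the exact script I ran) of the DBMS_DATAPUMP import call as executed by the RDS master user, remapping EPS_OWNER to EPS; the dump file name matches this demo, so adjust it to yours:

```sql
DECLARE
  hdnl NUMBER;
BEGIN
  hdnl := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => NULL);
  -- The dump file downloaded from the S3 bucket in the previous step:
  DBMS_DATAPUMP.ADD_FILE(handle => hdnl, filename => 'EXPORT_eps_owner_STG_04-03-19.dmp',
    directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_dump_file);
  -- Import EPS_OWNER's objects into the differently named schema EPS:
  DBMS_DATAPUMP.METADATA_REMAP(hdnl, 'REMAP_SCHEMA', 'EPS_OWNER', 'EPS');
  -- What to do when a table already exists (SKIP | TRUNCATE | APPEND):
  DBMS_DATAPUMP.SET_PARAMETER(hdnl, 'TABLE_EXISTS_ACTION', 'SKIP');
  DBMS_DATAPUMP.START_JOB(hdnl);
END;
/
```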
In case you used wrong parameters or a bad combination, e.g. METADATA_FILTER instead of METADATA_REMAP when importing into a schema with a different name, you will get a bunch of cute, unclear errors similar to the ones below:
ORA-31627: API call succeeded but more information is available
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7143
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7137
ORA-06512: at line 7
Check the progress of the imported objects:
select object_type, count(*) from dba_objects where owner='EPS' group by object_type;
Run the after-import script that was generated by the exportdata script in Step 1, after replacing the original exported schema name EPS_OWNER with the target imported schema name EPS.
Check the invalid objects:
col object_name for a45
select object_name, object_type, status from dba_objects where owner='EPS' and status<>'VALID';
Step6: [Optional] Delete the dump file from the Target [RDS instance]:
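With no OS access on RDS you cannot simply rm the file, but it can be removed from inside the database using UTL_FILE (the file name below matches this demo):

```sql
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','EXPORT_eps_owner_STG_04-03-19.dmp');
```

You can verify it is gone by re-running the RDSADMIN.RDS_FILE_UTIL.LISTDIR query shown earlier.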
"Hello brother" were the last words said by Daoud Nabi "the first victim of NZ Mosque shooting" to his killer before he is brutally gunned down! مرحبا أخى هى اخر كلمات قالها دواود نبى (رحمه الله عليه) "أول ضحايا حوادث إطلاق النار فى مساجد نيوزيلندا" ليستـقبل بها قاتله الذى أرداه قتيلا بوحشية.
The HTMLization campaign continues on the DBA Bundle scripts, as I received numerous requests asking to make the scripts send their email alerts/reports in HTML format. I understand that HTML-formatted emails are friendlier to read, especially on mobile phones.
The script will now automatically check whether your machine has the required "sendmail" package to send HTML emails. If it exists, the script will format the content in HTML and send you an HTML-formatted email; if the sendmail package is not there, it will fall back to the old-fashioned text email, without you missing any alert/report.
You can enable or disable this automatic HTML email behavior by setting the following parameter under the THRESHOLDS section to either Y or N:
# #########################
# THRESHOLDS:
# #########################
# Send an E-mail for each THRESHOLD if it has been reached:
# ADJUST the following THRESHOLD VALUES as per your requirements:
HTMLENABLE=Y # Enable HTML Email Format [DB]
The received HTML report will look like this: [Excerpt]
Now the dbalarm script for database and server monitoring sends the vast majority of its email alerts in HTML format.
For instance, instead of receiving a locked session E-mail alert in TEXT format like this:
You will receive the E-mail alert in HTML format like below:
The script will automatically check whether the "sendmail" package, which sends the emails in HTML format, is installed on your server. If it is installed, you will receive the email alerts in HTML format; otherwise it will automatically fall back to the text format version. You may think it's a small feature, but this feature alone cost me days of coding to make it work :-)