
Oracle Real Application Clusters (RAC) allows multiple instances to access a single Oracle database. These instances typically run on separate nodes.

The component diagram and the explanations below show all the components that inter-relate to each other and together make up the Oracle RAC architecture.

RAC is the principal component of the Oracle Grid architecture. It is an option for the Oracle Database that provides High Availability (HA) and scalability without requiring any application changes.

From a system point of view, a cluster is a group of independent servers that are interconnected and cooperate as a single system.

Oracle RAC is heavily dependent on the interconnect, an efficient, high-speed private network.

Shared vs Dedicated Database components in a RAC architecture

As an Oracle DBA, you know that a standard database runs on a single instance. In the RAC architecture, the concept is different because some components are shared and others are dedicated to each instance.

Shared Database Components in Real Application Clusters

Datafiles, Control Files and Fast Recovery Area

Control files, datafiles and the Fast Recovery Area are shared across all instances on a shared storage area (NAS, SAN).

Online Redo Logfile

In an Oracle RAC database, each instance must have at least two groups of redo log files.
Only one instance can write to its own redo log groups, but the other instances can read them during recovery and archiving. If an instance is down, log file switches by the remaining instances can force the idle instance's redo logs to be archived.

Dedicated Database Components

SGA

Each instance has its own SGA.
Though each instance has a local buffer cache, Cache Fusion shares cache content between instances and thus resolves concurrency problems.
Oracle Cache Fusion is the magic that works in the background to synchronize the caches of all the instances running on the different nodes. This synchronization allows multiple user sessions to execute concurrent transactions on any instance of the Oracle RAC database without incurring stale reads.

New in Oracle 19c RAC, the Database Reliability Framework (DRF) attempts to detect problems early, before they can cause a disruption in service. The concept is to detect problems and identify their root cause.

Background processes

Each instance has its own set of background processes.

Archived Redo Logfile

Archived redo logs are private to the instance, but other instances need access to all required archive logs whenever the system performs media recovery.

Alert Log and Trace Files

These files are private to each instance. Other instances never read or write those files.


On a single instance, you create the Oracle Database home on the same server as the database. Since shared storage is mandatory for RAC, you can use it to install your Oracle Home. In this case, you create it on an Oracle ASM Cluster File System (ACFS), and the Oracle Home is then available on all nodes of the cluster.

Installing a shared Oracle Database home has many management advantages in a multi-node cluster environment: out-of-place patching with ACFS snapshots significantly improves the patching process. It eliminates database downtime when coupled with the online migration feature of RAC, and minimizes downtime otherwise.

With node-local Oracle Database homes, the advantage is the ability to apply certain one-off patches in a rolling fashion.

Specific RAC components

The major components of an Oracle RAC system are:

  • Shared disk system
  • Oracle Clusterware
  • Cluster high-speed Interconnect
Shared disk system

The shared storage provides concurrent access to the storage array for all cluster nodes.
Oracle provides a very flexible, high-performing shared storage file system: Automatic Storage Management (ASM).

ASM is Oracle’s recommended storage management solution that provides an alternative to conventional volume managers, file systems, and raw devices. ASM uses disk groups to store datafiles; an ASM disk group is a collection of disks that ASM manages as a unit. The ASM volume manager functionality provides flexible server-based mirroring options. ASM also uses the Oracle Managed Files (OMF) feature to simplify database file management.

When using Oracle RAC, it’s a good idea to deploy Oracle Flex ASM.
This feature enables an Oracle ASM instance to run on a separate physical server from the database servers. With this deployment, larger clusters of Oracle ASM instances can support more database clients while reducing the Oracle ASM footprint for the overall system.

Clusterware, CRS and OCR

Each of the instances in the cluster configuration communicates with other instances by using the cluster manager or clusterware.
Oracle Clusterware is the technology which unifies servers in a server farm to form a cluster.
Oracle Clusterware is a complete, free-of-charge clustering solution that can be used with Oracle RAC, RAC One Node and even single-instance Oracle databases. It is shipped with Oracle Grid Infrastructure (GI). GI is a suite of software packages which includes Oracle Automatic Storage Management (ASM) and the Oracle ASM Cluster File System (ACFS).

New in 19c is support for bidirectional ACFS snapshots and even better integration with Oracle Data Guard when using ACFS to store the datafiles.

The Clusterware software is run by Cluster Ready Services (CRS) using the Oracle Cluster Registry (OCR). The OCR records and maintains the cluster and node membership information, while the voting disk acts as a tiebreaker during communication failures. Consistent heartbeat information travels across the interconnect to the voting disk when the cluster is running.

Oracle Clusterware 19c enhances deployment options for easier management of large pools of clusters. This new architecture is called the Oracle Cluster Domain. It consists of a single Domain Services Cluster (DSC) and one or more Member Clusters. The DSC provides many services which can be utilized by the four new types of Member Clusters.

Cluster high-speed Interconnect

In an Oracle Real Application Clusters (RAC) environment, all the instances or servers communicate with each other using high-speed interconnect on a private network. The “Interconnect” enables all the instances to be in sync in accessing the data.

You can use the Oracle Enterprise Manager Interconnects page to monitor the Oracle Clusterware environment. The Interconnects page shows the public and private interfaces on the cluster and the load contributed by database instances on the interconnect.

The CLUSTER_INTERCONNECTS parameter can be used to override the default interconnect with a preferred cluster traffic network. This parameter is useful in Data Warehouse systems that have reduced availability requirements and high interconnect bandwidth demands.

What are the benefits of integrating the Oracle cluster architecture?
  • Your application is more scalable: if you need more power, just add a new node.
  • You can reduce the total cost of ownership of the infrastructure by building a scalable system on low-cost commodity hardware.
  • In case of a problem, you can fail over from one node to another.
  • You can increase throughput on demand for cluster-aware applications by adding servers to your cluster.
  • You can increase throughput for cluster-aware applications by enabling them to run on all of the nodes in a cluster, or just on a selection of nodes.
  • You can program the startup of applications in a planned order, ensuring that dependent processes start in the correct sequence.
  • Clusterware can monitor processes and restart them if they stop.
  • With the RAC architecture, you eliminate the Single Point of Failure (SPOF) and unplanned downtime due to hardware or software malfunctions.
  • Finally, you can reduce or eliminate planned downtime for software maintenance.

If you want more information than this tutorial you can read the official overview for Oracle 19c: https://www.oracle.com/technetwork/database/options/clustering/rac-twp-overview-5303704.pdf

Thank you for reading this blog!

Author: Vincent Fenoll – Oracle DBA

This article, RAC architecture concepts, first appeared on Oracle Database SQL scripts.


Maybe you are looking for a particular phrase in log files, or maybe you are a programmer who needs to find some code that is spread across many different source files?

Here is how you can find and replace a string in several text files.

For this example, the Oracle DBA wants to replace NSR* strings in all RMAN scripts:


for base in /oracle/scripts/rman/rcv/*.rcv
do
  TMP=$(mktemp test.XXXXXX)
  sed 's/NSR_SERVER=networker-hme0, NSR_DATA_VOLUME_POOL=BU UNT FULL/NSR_SERVER=freppax-laaf02h, NSR_DATA_VOLUME_POOL=FullHP/g' "$base" > "$TMP" && mv "$TMP" "$base"
done
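The same technique generalizes to any set of text files. Here is a minimal, self-contained sketch with made-up file names and strings (GNU sed is assumed, so -i can edit the files in place instead of the mktemp/mv dance):

```shell
#!/bin/sh
# Sketch: replace OLD with NEW in every *.rcv file under a demo directory.
# The directory and contents below are fabricated for the example.
mkdir -p /tmp/rcv_demo
printf 'connect target OLD\n' > /tmp/rcv_demo/a.rcv
printf 'backup OLD database\n' > /tmp/rcv_demo/b.rcv

for f in /tmp/rcv_demo/*.rcv; do
  sed -i 's/OLD/NEW/g' "$f"   # GNU sed: in-place edit
done

grep -h 'NEW' /tmp/rcv_demo/*.rcv
```

On systems without GNU sed (e.g. stock AIX or Solaris), keep the temp-file variant shown above.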

export PS1="\$ORACLE_SID $LOGNAME@$(hostname):\${PWD}> "
stty erase ^H
export HISTSIZE=100
export EDITOR=vi
set -o vi
alias ll="ls -altr"
umask 022


Hidden parameters in Oracle always start with an underscore.

The DBA cannot see hidden parameters with the SQL*Plus command “show parameter” or by querying v$parameter, unless the hidden parameter has been explicitly set in the spfile/init.ora file.

How Can I list all Hidden Parameters set in The database?

Because they are explicitly set in the init file, you can create a report that shows all the set hidden parameters using the v$parameter view.
The following SQL statement lists undocumented parameters, but can also be used to list documented parameters set in the spfile or init.ora file:

col name for A45
set lines 120
col value for A40
set pagesize 100
select name, value from v$parameter where name like '\_%' escape '\';
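Because explicitly set hidden parameters always begin with an underscore, you can also spot them directly in a pfile from the shell. A small sketch over a fabricated init.ora (the parameter names below are just sample values):

```shell
#!/bin/sh
# Sketch: grep hidden parameters out of a pfile. Hidden parameter names
# start with an underscore and may be double-quoted in the file.
# The sample file stands in for a real init.ora.
cat > /tmp/initDEMO.ora <<'EOF'
db_name=DEMO
sga_target=2G
_pga_max_size=5G
"_small_table_threshold"=1000
EOF

# Match lines whose parameter name begins with _ (quoted or not)
grep -E '^"?_' /tmp/initDEMO.ora
```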
How can I list all hidden parameters available?

If you want to list all hidden parameters available for your version along with a description:

select ksppinm name, ksppstvl value, ksppdesc description
  from x$ksppi x, x$ksppcv y
 where x.indx = y.indx
   and substr(ksppinm,1,1) = '_'
 order by ksppinm;

(This query joins the x$ksppi and x$ksppcv fixed tables, so it must be run as SYS.)
How can I set the value of a hidden parameter?

You can change a hidden parameter the same way as any other init.ora parameter, but you need to put double quotes around the parameter name:

alter system set "_pga_max_size"=5G scope=spfile sid='*';

A good DBA needs to know which hidden parameters are set in the database and their values, especially during upgrades, database migrations or performance-tuning investigations.

Oracle has hundreds of initialization parameters that are hidden and undocumented. Many savvy Oracle professionals are known to adjust these hidden parameters to improve the overall performance of their systems.

Disclaimer: it is not recommended to change hidden parameters without the consent of Oracle Support, since doing so can make your system unsupported. You can end up with data corruption, performance degradation caused by bad SQL plans, or other problems. The undocumented init parameters should only be used in emergencies or to fix a bug. Some of these parameters are operating-system specific and used in unusual recovery situations. Hence, these parameters should be manipulated carefully, and preferably not without a recommendation from Oracle Support.

Author: Vincent Fenoll Oracle DBA


The Oracle DBA wants to run health checks with Health Monitor on his database.

With Oracle 12c/18c, these checks can be run on a regular (daily/monthly) basis:
– DB Structure Integrity Check
– CF Block Integrity Check
– Data Block Integrity Check
– Redo Integrity Check
– Transaction Integrity Check
– Undo Segment Integrity Check
– Dictionary Integrity Check
– ASM Allocation Check

Perhaps you have a datafile, dictionary, block, undo, redo or other corruption in your database? You might actually be running just fine and not even know it.

Oracle Database 12c/18c includes a framework called Health Monitor for running diagnostic checks on your database.

How to run a health check on the Oracle database?

EXEC DBMS_HM.run_check('Dictionary Integrity Check', 'report_dictionary_integrity');

The same call with explicit input parameters:

BEGIN
   DBMS_HM.RUN_CHECK (check_name     => 'Transaction Integrity Check',
                      run_name       => 'my_transaction_run',
                      input_params   => 'TXN_ID=22.87.1');
END;
/

Viewing the first report in text format with DBMS_HM (HTML & XML format are also available):

SET LONG 100000
SELECT DBMS_HM.get_run_report ('report_dictionary_integrity') FROM DUAL;

Listing all the Health Check executed (Health Monitor View):

SELECT run_id, name, check_name, status
  FROM v$hm_run;
Viewing the list of checks that can be done on your database
 SELECT name
  FROM v$hm_check
 WHERE internal_check = 'N';

Health checks accept input parameters, some are mandatory while others are optional.

Displaying parameter information for all health checks
  SELECT c.name check_name,
         p.name parameter_name,
         p.type,
         p.default_value,
         p.description
    FROM v$hm_check_param p, v$hm_check c
   WHERE p.check_id = c.id AND c.internal_check = 'N'
  ORDER BY c.name;

Periodic database health checks help keep your database running smoothly without corruption and prevent more serious conditions from developing later.

Health Monitor checks examine several layers of the Oracle database stack. The tool detects data dictionary corruptions and datafile corruptions. It also checks for logical or physical block corruptions and for undo or redo corruptions.

The health checks generate reports of their findings and, in many cases, recommendations for resolving problems.

For a Healthy database!

Vincent Fenoll – Oracle OCP Database administrator in Montreal


Sometimes the Oracle DBA has to delete a Unix user account under the Linux operating system, including the home directory.

How do I expire, delete or remove a user’s access from my server?

Deleting a user account in Linux is an administrative task that removes the user's login credentials from system configuration files such as /etc/passwd and /etc/shadow, and removes the files owned by that particular user from the Unix server.

These command must be run as root user on Linux.

# Just Lock the password
usermod -L myusername
# Just Expire the account
chage -E0 myusername
# Delete the account. userdel is a low-level utility for removing users.
# On Debian, administrators should usually use deluser instead.

# Be careful: user deletion is irreversible!
userdel myusername

# Use these 2 options to also delete the user's home directory and mail spool:
#  -r : remove the Unix user account including the home directory and mail spool
#  -f : force the removal of files, even if the user is still logged in
userdel -r -f myusername

The userdel command modifies the following system account files:
/etc/passwd, /etc/shadow, /etc/group, /etc/gshadow, /etc/subuid and /etc/subgid (its behavior is configured through /etc/login.defs).
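As a quick sanity check after removal, you can verify that the login is gone from the account files. A sketch against a sample passwd-format file (not the real /etc/passwd; the entries are fabricated):

```shell
#!/bin/sh
# Sketch: confirm a deleted login no longer appears in passwd-format data.
cat > /tmp/passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/bash
oracle:x:54321:54321::/home/oracle:/bin/bash
EOF

# A passwd entry starts with "login:", so anchor the match on that
if grep -q '^myusername:' /tmp/passwd.sample; then
  echo "account still present"
else
  echo "account removed"
fi
```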

How to clean associated objects?

If you want to clean up other objects such as cron jobs, files or print jobs, you have to do it manually, like this:

How to clean cron table
crontab -r -u myusername

How to clean print jobs
lprm myusername

How to change the owner of files owned by myusername
find / -user myusername -exec chown newUserName:newGroupName {} \;

Author: Vincent Fenoll, Oracle DBA in Montreal


This morning, the Cloud Control and other applications are down.

Hard for the customer :( but as an Oracle DBA I love problems in production, so today is a great day!

First, I tried to edit a file with vi but a swap error was raised: “E297: Write error in swap file”.

Another read-only error came from SQL*Plus, while accessing the audit trail files:

sqlplus / as sysdba
Copyright (c) 1982, 2018, Oracle. All rights reserved.

ORA-09925: Unable to create audit trail file Linux-x86_64 Error: 30: Read-only file system
Additional information: 9925

It seems some of my file systems are read-only!

How to check whether the file systems are read/write or read-only?
cat /proc/mounts
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
devtmpfs /dev devtmpfs rw,relatime,size=8068884k,nr_inodes=2017221,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/sda2 / ext4 ro,relatime,data=ordered 0 0
/dev/sda1 /boot ext4 rw,relatime,data=ordered 0 0
/dev/sda7 /u01 ext4 ro,relatime,data=ordered 0 0
/dev/sda3 /tmp ext4 ro,relatime,data=ordered 0 0
/dev/sda5 /var ext4 ro,relatime,data=ordered 0 0

==> KO: we can see “ext4 ro”, and ro means read-only!
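This check can be scripted: the fourth column of /proc/mounts holds the mount options, so a short awk filter flags the read-only mounts. A sketch over sample data in the same format (standing in for a real /proc/mounts):

```shell
#!/bin/sh
# Sketch: flag read-only mounts by parsing mount-table lines
# (device, mountpoint, fstype, options, ...), same layout as /proc/mounts.
cat > /tmp/mounts.sample <<'EOF'
/dev/sda2 / ext4 ro,relatime,data=ordered 0 0
/dev/sda1 /boot ext4 rw,relatime,data=ordered 0 0
/dev/sda7 /u01 ext4 ro,relatime,data=ordered 0 0
EOF

# Print the mountpoint when "ro" appears in the comma-separated options
awk '$4 ~ /(^|,)ro(,|$)/ {print $2}' /tmp/mounts.sample
```

Pointing the same awk filter at /proc/mounts on a live server lists every read-only filesystem at a glance.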

The infrastructure team informed us that the NAS was in trouble.
Rebooting the server solved the problem.

We can also verify other things, like free disk space and free inodes.

Checking free space:

df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.5G 5.6G 3.4G 63% /
tmpfs 7.8G 0 7.8G 0% /dev/shm
/dev/sda1 477M 158M 291M 36% /boot
/dev/sda7 91G 78G 8.8G 90% /u01
/dev/sda3 5.7G 26M 5.4G 1% /tmp
/dev/sda5 5.7G 267M 5.1G 5% /var

==> Everything is OK

In case you have a space problem you can list big files with this command:

find . -type f -size +50M
(e.g., files larger than 50 MB)

The size suffixes accepted by find are:
  • c – bytes
  • w – two-byte words
  • k – kilobytes (units of 1024 bytes)
  • M – megabytes (units of 1048576 bytes)
  • G – gigabytes (units of 1073741824 bytes)
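A self-contained sketch of the same find invocation, on throwaway files (the 1M threshold and paths are arbitrary demo values):

```shell
#!/bin/sh
# Sketch: create one ~2 MB file and one empty file, then list files over 1 MB.
mkdir -p /tmp/size_demo
dd if=/dev/zero of=/tmp/size_demo/big.dat bs=1024 count=2048 2>/dev/null
touch /tmp/size_demo/small.dat

find /tmp/size_demo -type f -size +1M
```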
Finally, I can check free inode space.

An inode is used for each file on the filesystem, so running out of inodes generally means you have a lot of small files lying around.

If you are very unlucky, you have used close to 100% of all inodes. This bash command may help you:

df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda2 640848 73501 567347 12% /
tmpfs 2021073 2 2021071 1% /dev/shm
/dev/sda1 128016 60 127956 1% /boot
/dev/sda7 6045696 244708 5800988 5% /u01
/dev/sda3 384272 3009 381263 1% /tmp
/dev/sda5 384272 2088 382184 1% /var

We don't have more than 12% of inodes used (for /), so there is no problem with the number of inodes.

If you do have a lot of inodes used, you can list the directories sorted by number of files with this command:

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
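A sketch of the same pipeline on a throwaway directory tree (GNU find's -printf is assumed; -type f is added here so only files are counted):

```shell
#!/bin/sh
# Sketch: count files per directory. -printf '%h\n' emits each file's
# parent directory; sort | uniq -c counts occurrences; the final sort
# orders directories by file count, busiest last.
mkdir -p /tmp/inode_demo/a /tmp/inode_demo/b
touch /tmp/inode_demo/a/f1 /tmp/inode_demo/a/f2 /tmp/inode_demo/a/f3
touch /tmp/inode_demo/b/f1

find /tmp/inode_demo -xdev -type f -printf '%h\n' | sort | uniq -c | sort -k 1 -n
```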

Author: Vincent Fenoll – Oracle DBA


This post provides help for troubleshooting a standby database (Data Guard). This article covers the following problems:

  • Troubleshooting Log transport services
  • Troubleshooting Redo Apply services
  • Troubleshooting SQL Apply services
  • Common Problems
  • Log File Destination Failures
  • Handling Logical Standby Database Failures
  • Problems Switching Over to a Standby Database
  • What to Do If SQL Apply Stops
  • Troubleshooting a Logical Standby Database

These SQL commands are compatible with Oracle 18c, 12c, 11g and 10g.

Determine if archive logs of your DG environment are successfully being transferred to the standby

Run the following query:

select dest_id, status, error from v$archive_dest
where target = 'STANDBY';

If all remote destinations have a status of VALID, proceed to the next step.
Otherwise, proceed to Troubleshooting Log Transport Services.

How many archives need to be transferred from the primary?

Connected on primary:

select thread#, min(sequence#), min(min_date), count(*)
from (select thread#, sequence#, count(*), max(first_time) min_date
        from v$archived_log
       where first_time > sysdate - 2
       group by thread#, sequence#
      having count(*) < 2
       order by 1, 2)
group by thread#;

Determine if the standby is a Physical standby or a Logical Standby

To determine the standby type run the following query on the standby:

select database_role from v$database;

If the standby is a physical standby then proceed to Troubleshooting Redo
Apply. Else proceed to Troubleshooting Logical Apply.

Troubleshooting Log Transport Services

Verify that the primary database is in archivelog mode and has automatic archiving enabled

select log_mode from v$database;

or in SQL*Plus

archive log list 

Verify that sufficient space exist in all archive destinations

The following query can be used to determine all local and mandatory destinations that need to be checked:

select dest_id, destination from v$archive_dest
where schedule = 'ACTIVE'
and (binding = 'MANDATORY' or target = 'PRIMARY');

Determine if the last log switch to any remote destination resulted in an error

select dest_id, status, error from v$archive_dest
where target = 'STANDBY';

Address any errors that are returned in the error column. Perform a log
switch and re-query to determine if the issue has been resolved.

Determine if any error conditions have been reached

Query the v$dataguard_status view:

select message, to_char(timestamp,'HH:MI:SS') timestamp
from v$dataguard_status
where severity in ('Error','Fatal')
order by timestamp;

Gather information about how the remote destinations are performing the archival

select dest_id, archiver, transmit_mode, affirm, net_timeout, delay_mins, async_blocks
from v$archive_dest where target = 'STANDBY';

Determine the current sequence number, the last sequence archived, and the last sequence applied to a standby

Perhaps the most important query to troubleshoot a standby configuration:

select ads.dest_id,
       max(sequence#) "Current Sequence",
       max(log_sequence) "Last Archived",
       max(applied_seq#) "Last Sequence Applied"
  from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads
 where ad.dest_id = al.dest_id
   and al.dest_id = ads.dest_id
 group by ads.dest_id;

If you are remotely archiving using the LGWR process, the archived sequence should be one higher than the current sequence. If remotely archiving using the ARCH process, the archived sequence should be equal to the current sequence. The applied sequence information is updated at log switch time.

Troubleshooting Redo Apply Services

Verify the last sequence# received and the last sequence# applied on the standby database

select max(al.sequence#) "Last Seq Received",
       max(lh.sequence#) "Last Seq Applied"
  from v$archived_log al, v$log_history lh;

If the two numbers are the same, the standby has applied all redo sent by the primary. If the numbers differ by more than 1, proceed to the next step.

Verify that the standby is in the mounted state

select open_mode from v$database;

Determine if there is an archive gap on your physical standby database

By querying the V$ARCHIVE_GAP view as shown in the following query:

select * from v$archive_gap;

The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking redo apply from continuing.

After resolving the identified gap and starting redo apply, query the
V$ARCHIVE_GAP fixed view again on the physical standby database to
determine the next gap sequence, if there is one. Repeat this process
until there are no more gaps.

If the v$archive_gap view doesn't exist:

WITH prod as
  (select max(sequence#) as seq
     from v_$archived_log
    where resetlogs_time = (select resetlogs_time from v_$database)),
stby as
  (select max(sequence#) as seq, dest_id
     from v_$archived_log
    where first_change# > (select resetlogs_change# from v_$database)
      and applied = 'YES'
      and dest_id in (1, 2)
    group by dest_id)
select prod.seq - stby.seq, stby.dest_id
  from prod, stby;

Verify that managed recovery is running

select process,status from v$managed_standby;

When managed recovery is running you will see an MRP process. If you do not see an MRP process then start managed recovery by issuing the following command:

recover managed standby database disconnect;

Some possible statuses for the MRP are listed below:

ERROR – This means that the process has failed. See the alert log or v$dataguard_status for further information.

WAIT_FOR_LOG – Process is waiting for the archived redo log to be completed. Switch an archive log on the primary and query v$managed_standby to see if the status changes to APPLYING_LOG.

WAIT_FOR_GAP – Process is waiting for the archive gap to be resolved. Review the alert log to see if FAL_SERVER has been called to resolve the gap.

APPLYING_LOG – Process is applying the archived redo log to the standby database.

Troubleshooting SQL Apply Services

Verify that log apply services on the standby are currently running

To verify that logical apply is currently available to apply changes perform the following query:


When querying the V$LOGSTDBY view, pay special attention to the HIGH_SCN column. This is an activity indicator. As long as it is changing each time you query the V$LOGSTDBY view, progress is being made. The STATUS column gives a text description of the current activity.

If the query against V$LOGSTDBY returns no rows then logical apply is not running. Start logical apply by issuing the following statement:

SQL> alter database start logical standby apply;

If the query against V$LOGSTDBY continues to return no rows then proceed to  next step.

Determine if there is an archive gap in your Data Guard configuration

Query the DBA_LOGSTDBY_LOG view on the logical standby database:

SQL> select thread#, sequence#, file_name
       from dba_logstdby_log l
      where next_change# not in
            (select first_change# from dba_logstdby_log where l.thread# = thread#)
      order by thread#, sequence#;
Copy the missing logs to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/u01/oradata/arch/myarc_57.arc';

After you register these logs on the logical standby database, you can restart log apply services. The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL apply operations from continuing.

After resolving the identified gap and starting log apply services, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

Verify whether logical apply is receiving errors while performing apply operations

Log apply services cannot apply unsupported DML statements, DDL statements and Oracle-supplied packages to a logical standby database in SQL apply mode. When an unsupported statement or package is encountered, SQL apply operations stop. To determine if SQL apply has stopped due to errors, query the DBA_LOGSTDBY_EVENTS view. When querying the view, order the rows by EVENT_TIME; this ordering ensures that a shutdown failure appears last in the view. For example:

SQL> select event_time, status, event
       from dba_logstdby_events
      order by event_time, commit_scn;
If an error requiring database management occurred (such as adding a tablespace, datafile, or running out of space in a tablespace), then you can fix the problem manually and resume SQL apply.

If an error occurred because a SQL statement was entered incorrectly, conflicted with an existing object, or violated a constraint then enter the correct SQL statement and use the DBMS_LOGSTDBY.SKIP_TRANSACTION procedure to ensure that the incorrect statement is ignored the next time SQL apply operations are run.

Query DBA_LOGSTDBY_PROGRESS to verify that log apply services is progressing

The DBA_LOGSTDBY_PROGRESS view describes the progress of SQL apply operations on the logical standby database. For example:

SQL> select applied_scn, newest_scn from dba_logstdby_progress;
The APPLIED_SCN indicates that committed transactions at or below that SCN have been applied. The NEWEST_SCN is the maximum SCN to which data could be applied if no more logs were received. This is usually MAX(NEXT_CHANGE#)-1 from DBA_LOGSTDBY_LOG when there are no gaps in the list. When NEWEST_SCN and APPLIED_SCN are equal, all available changes have been applied. If APPLIED_SCN is below NEWEST_SCN and increasing, SQL apply is currently processing changes.

Verify that the table that is not receiving rows is not listed in the DBA_LOGSTDBY_UNSUPPORTED.

The DBA_LOGSTDBY_UNSUPPORTED view lists all of the tables that contain datatypes not supported by logical standby databases in the current release. These tables are not maintained (will not have DML applied) by the logical standby database. Query this view on the primary database to ensure that the tables necessary for critical applications are not in this list. If the primary database includes unsupported tables that are critical, consider using a physical standby database.

Author: Oracle Corporation modified by Vincent Fenoll


Enterprise Manager / Cloud Control 13c raised this event:

EM Event: Critical:my-db.oracle-scripts.net – Checker run found 1 new persistent data failures.

By default the database runs the Health Check periodically to find:

  • File corruptions,
  • Physical and logical block corruptions,
  • Undo or redo corruptions,
  • data dictionary corruptions

If any failures are detected, a message is logged to the alert log and Enterprise Manager can raise an event.

To identify these failures, you can follow these steps:

1- Check the list of health checks executed:

SQL> select run_id,name,check_name,start_time,end_time,status from v$hm_run;

148741 HM_RUN_148741 DB Structure Integrity Check

2- Get the report of that health check and find the failure:

SET LONG 100000 LINES 256 LONGCHUNKSIZE 1000 PAGESIZE 1000
SELECT DBMS_HM.get_run_report('HM_RUN_148741') FROM dual;

where 'HM_RUN_148741' is the value of the NAME column retrieved by the first query.

Basic Run Information
Run Name : HM_RUN_148741
Run Id : 148741
Check Name : DB Structure Integrity Check
Start Time : 2019-02-07 06:54:41.409009 -05:00
End Time : 2019-02-07 06:54:41.551557 -05:00
Error Encountered : 0
Source Incident Id : 0
Number of Incidents Created : 0

Input Parameters for the Run
Run Findings And Recommendations
Finding Name : Control File needs recovery
Finding ID : 148742
Status : OPEN
Priority : CRITICAL
Message : Control file needs media recovery
Message : Database cannot be opened

Have a nice day!

Vincent Fenoll – Oracle DBA Montreal
Compatible: Oracle 18c, 12c, 11.1


The Oracle DBA can use these 2 scripts to generate the DDL statements for a user along with their roles and system and object privileges.

For Oracle 18c / 12c / 11g / 10g:

clear screen
accept uname prompt 'Enter User Name : '
accept outfile prompt  ' Output filename : '

spool &&outfile..gen

BEGIN
   DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SQLTERMINATOR', true);
   DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'PRETTY', true);
END;
/

SELECT dbms_metadata.get_ddl('USER','&&uname') FROM dual;

spool off

More information on the dbms_metadata package and its get_ddl function is available in the official Oracle 18c/12c documentation.

For Oracle versions earlier than 10g (this script also runs well with 10g, 11g, 12.2 and 18c):

clear screen

accept uname prompt 'Display the DDL for this specific user: '
accept outfile prompt  ' Output filename : '

col username noprint
col lne newline


spool &&outfile..gen

prompt  -- generate user ddl
SELECT username, 'CREATE USER '||username||' '||
       'IDENTIFIED BY VALUES '''||password||'''' lne,
       'DEFAULT TABLESPACE '||default_tablespace lne,
       'TEMPORARY TABLESPACE '||temporary_tablespace||';' lne
  FROM dba_users
 WHERE username LIKE UPPER('%&&uname%')
    OR UPPER('&&uname') IS NULL;

prompt  -- generate tablespace quotas
SELECT username, 'ALTER USER '||username||' QUOTA '||
       DECODE(max_bytes, -1, 'UNLIMITED', TO_CHAR(max_bytes))
       ||' ON '||tablespace_name||';' lne
  FROM dba_ts_quotas
 WHERE username LIKE UPPER('%&&uname%')
    OR UPPER('&&uname') IS NULL;

col grantee noprint

prompt  -- generate role and system grants
select grantee, granted_role granted_priv,
       'GRANT '||granted_role||' to '||grantee||';' lne
  from dba_role_privs
 where grantee like upper('%&&uname%')
union all
select grantee, privilege granted_priv,
       'GRANT '||privilege||' to '||grantee||';' lne
  from dba_sys_privs
 where grantee like upper('%&&uname%')
 order by 1, 2;

spool off

Another use of this procedure is to copy a user account from one Oracle instance to another, keeping the same password, grants and roles, without using the expdp/impdp tools.

Another method to retrieve the Data Description Language (DDL) for an Oracle user with all roles and privileges: with Data Pump import (impdp) you can use the parameter sqlfile=My_file.sql to extract the DDL from a dump file without performing the actual import.

Author: Vincent Fenoll
Compatibility: Oracle 18c, 12c, 11g


How can I clean up old (orphaned) Data Pump jobs in DBA_DATAPUMP_JOBS?

Cause: in many cases you have stopped Oracle Data Pump jobs, shut down the database during an export/import, or used the undocumented parameter KEEP_MASTER=Y. In these cases the master table remains in the database, and it is better to delete it.

Below are step-by-step instructions on how to do this.

Step 1. Determine in SQL*Plus if Data Pump jobs exist in the dictionary

Identify these jobs and ensure that the jobs listed in dba_datapump_jobs are not active export/import Data Pump jobs: the state should be 'NOT RUNNING' and no sessions should be attached:

SET lines 150
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL operation FORMAT a10

SELECT owner_name, job_name, operation
FROM dba_datapump_jobs where state='NOT RUNNING' and attached_sessions=0;
Step 2: Drop the master tables
set head off
SELECT 'drop table ' || owner_name || '.' || job_name || ';'
FROM dba_datapump_jobs WHERE state='NOT RUNNING' and attached_sessions=0;

Execute the generated script.
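The generate-then-execute pattern used in step 2 can be sketched outside the database as well: turn a list of owner/job-name pairs into DROP statements. The job names below are fabricated samples of the SYS_EXPORT/SYS_IMPORT naming pattern:

```shell
#!/bin/sh
# Sketch: generate "drop table OWNER.JOB_NAME;" lines from a two-column
# list, mirroring what the dictionary query in step 2 produces in SQL.
cat > /tmp/dp_jobs.txt <<'EOF'
SCOTT SYS_EXPORT_SCHEMA_01
HR SYS_IMPORT_FULL_02
EOF

awk '{printf "drop table %s.%s;\n", $1, $2}' /tmp/dp_jobs.txt
```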

Step 3: Identify orphan DataPump external tables

Check and drop external tables created for Data Pump jobs with:

select object_name, created from dba_objects where object_name like 'ET$%';

Step 4: Purge recycle bin

If using the recycle bin:

SELECT 'purge table ' || owner_name || '.' || '"' || job_name || '";'
FROM dba_datapump_jobs WHERE state='NOT RUNNING' and attached_sessions=0;

Step 5: Confirm that the jobs have been removed

Run the SQL statement from step 1 again.

Now you should be able to run your script with the same job name without any issues.

Hope this post will help!

Author: Vincent Fenoll – Oracle DBA

Compatibility:  Oracle Database – Standard/Enterprise Edition – Version 10g to 18c [Release 10.1 to 12.2/18]
