Whatever topics are discussed on this blog are my own findings and views, and may not necessarily match others'. I strongly recommend that you test any advice given on this blog before implementing it.
As part of an SSL configuration between the WebLogic tier and the DB tier, after updating sqlnet.ora and restarting the SCAN listener, we encountered an ORA-12545 error on a 3-node RAC database when connecting through the SCAN listener.
We verified the listener status and found all the services in READY status. We tried using the VIP, and it worked perfectly, so the only issue was connecting through the SCAN. We were fortunate to find a solution after a very quick search over the net: MOS Doc 364855.1 explained the issue and provided the solution.
As part of the configuration, we had set LOCAL_LISTENER to a VIP hostname. After modifying the LOCAL_LISTENER string to point to the VIP IP address instead, we were able to connect through the SCAN.
Logically the VIP hostname should have worked, but we found the same issue reported in many references.
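For illustration, a minimal sketch of the kind of change that resolved it; the IP address, port, and SID below are placeholders, so substitute your own VIP details:

SQL> ALTER SYSTEM SET LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.15)(PORT=1521))' SCOPE=BOTH SID='PROD1';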
References:
Client Connection to RAC Intermittently Fails-ORA-12545 TNS: Host or Object Does not Exist (Doc ID 364855.1)
https://ardentperf.com/2007/04/02/local_listener-and-ora-12545/
http://spotdba.blogspot.com/2012/08/error-ora-12545-while-trying-to-connect.html
For cost optimization and flexibility, it's a good idea to go with the Oracle Database Backup Cloud Service. It can complement your existing backup strategy by providing an off-site storage location in the cloud.
The bck2cloud utility is easy to install and use, and it secures the data. It simplifies RMAN backup/restore operations when using the Oracle Database Backup Cloud Service. Once you subscribe to the service, you simply need to download the module from OTN and install it on the database host. Once the module is successfully installed, you then need to configure the RMAN settings (a sketch follows the list below). With the utility, all data transfers happen strictly between the database instance and the customer-controlled Oracle backup cloud account.
The following operations are supported by the utility:
Cloud Storage operations
Password Encryption operation
Install Oracle Public Cloud (OPC) Module operation
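As a rough sketch of the RMAN configuration mentioned above, something along these lines; the SBT_LIBRARY and OPC_PFILE paths are assumptions and depend on where the module was installed:

RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt PARMS='SBT_LIBRARY=/u01/app/oracle/lib/libopc.so, ENV=(OPC_PFILE=/u01/app/oracle/config/opcPROD.ora)';
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> BACKUP DEVICE TYPE sbt DATABASE;

Note that RMAN encryption must be enabled for backups sent to the cloud service.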
Had an interesting situation configuring a Data Guard setup for a 7TB Oracle 12c database at one of our clients. After long rounds of discussions, considering the constraints and limitations, we agreed to use the traditional way of setting up Data Guard: back up the database, replicate the backups to the DR site, create a standby controlfile, restore the controlfile on the standby host, restore and recover the database, and configure the synchronization.
The restore failed due to the unavailability of the data file locations (same as production) on the DR host, because the application tablespace data files on PROD did not use the OMF format; they were stored under ASM with a .dbf extension. Due to this, only the default tablespace data files (which did use the OMF format in ASM) were restored. We created a PROD-like directory structure (using ALTER DISKGROUP ADD DIRECTORY), and all the data files were then restored successfully. The PROD-like directory (+DATA/PROD/DATAFILE) contained only soft links, while the data files themselves were created under the standby database SID directory. The entire process took more than 24 hrs, so a subsequent incremental backup was taken to fill the gap and bring the data in sync using the roll-forward procedure.
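For reference, a directory structure like the one above can be created with a command along these lines (disk group and path as per the example in this post):

SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PROD', '+DATA/PROD/DATAFILE';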
As part of the roll-forward, a new standby controlfile was created on PROD and restored on the DR DB. When the CATALOG START WITH command was issued, only a few data files (system related) could be cataloged, while the soft-link data files couldn't. This stopped us from doing the roll-forward recovery. We tried every possible option; nothing materialized.
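For context, the cataloging attempt looked something like this (path as per the structure described above):

RMAN> CATALOG START WITH '+DATA/PROD/DATAFILE/';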
We then cleaned up the DR database data files to do a fresh restore, this time setting the db_file_name_convert parameter before starting over with the traditional DR configuration procedure. This time, all the data files were restored under the standby SID directory, and the soft links for the non-standard OMF data files were created in the same directory, unlike in the first attempt. Once the restore was done, we successfully performed the roll-forward procedure to bring the PROD and DR DBs in sync.
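A minimal sketch of that parameter setting; the standby path here is hypothetical, and the parameter is static, so it needs SCOPE=SPFILE and a restart:

SQL> ALTER SYSTEM SET db_file_name_convert='+DATA/PROD/','+DATA/PRODDR/' SCOPE=SPFILE;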
Conclusion: if you have non-standard OMF files with a .dbf extension under ASM, ensure you use db_file_name_convert to avoid the mess.
Well, if you have any alternative, you are most welcome to suggest it.
One of our DB teams from Europe highlighted the following issues while installing and configuring VMs on an Exadata X7.
"Today we started X7-2 Exadata installation. But we faced some issues. First problem is, when installing the cluster, resources are being up then suddenly, being down and there are some internal bugs for this problem:
Bug 27275655 - Running tcpdump on bondeth0 kills the Whitney LOM interface
Bug 27523644 - EXADATA X7-2 - CLIENT NETWORK STOPPED WORKING AFTER RUNNING STEP 11
Bug 27195117 - Finisar FTLX8574D3BCV-SU transceiver does not link up on X7-2 LOM
Bug 27130090: X7-2 126.96.36.199.0 - ARP TABLE OF VM NOT DYNAMICALLY POPULATED
These internal bugs required re-imaging the whole stack and using image 188.8.131.52.0. There is also another problem with PXE (used for re-imaging).
Watch out for those bugs when you are installing Exadata X7.
Had a very interesting and tricky scenario a few days ago; it was something I had never encountered in my DBA career. Hence, I thought of sharing the details of the story here today.
During mid-day, an application team reported that they were suddenly getting an ORA-01033 when connecting to a 3-node RAC database (184.108.40.206). Quick basic validations revealed that the issue happened only when connecting through the SCAN IP, while the VIP and direct (physical IP) connections had no issues. So, it was clear that the issue was with the SCAN IPs. We verified all the configuration settings, network, and firewall to ensure there were no issues accessing the SCAN IPs. To our surprise, everything was perfectly okay, which really puzzled us. We also suspected the Data Guard configuration of this database, but that wasn't the case either.
After a quick search over the internet, we came across MOS Doc: ORA-01033 is Thrown When Connecting to the Database Via Scan Listener (Doc ID 1446132.1).
The issue was that one of the team members was restoring a database backup on a new host. The tricky part is that the new host was part of the same network/subnet where the 3-node RAC database was running and could access the SCAN IPs too. The instance being restored had registered itself with the SCAN, so whenever a new connection request came through the SCAN, it could be referred to the instance in MOUNT state (restoring); hence the ORA-01033 error was thrown.
Fix: After reviewing the note, the restore was immediately stopped, and things got back to normal. Nullifying the remote_listener parameter on the restoring instance, to de-register it from the SCAN, would also have worked in this case.
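For illustration, a minimal sketch of that de-registration on the restoring instance (assuming remote_listener was picked up from the restored spfile/pfile):

SQL> ALTER SYSTEM SET remote_listener='' SCOPE=MEMORY;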
This issue can also be prevented by configuring Class of Secure Transport (COST).
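As a rough sketch of the COST approach per the MOS notes referenced below: the SECURE_REGISTER_<listener_name> parameter in the listener.ora restricts which transports the listener accepts registrations on, for example (listener name and transports depend on your setup, and a TCPS endpoint must be configured separately):

SECURE_REGISTER_LISTENER_SCAN1 = (TCPS, IPC)

See Doc IDs 1340831.1 and 1453883.1 for the full procedure.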
The probability of hitting this issue is very low, but I felt it was interesting and worth sharing.
References:
ORA-01033 is Thrown When Connecting to the Database Via Scan Listener (Doc ID 1446132.1)
Using Class of Secure Transport (COST) to Restrict Instance Registration in Oracle RAC (Doc ID 1340831.1)
Using Class of Secure Transport (COST) to Restrict Instance Registration (Doc ID 1453883.1)
Though this ORA-19527 error didn't interrupt the Data Guard replication, the customer wanted to get rid of the error message. Starting with 10g, this is expected behavior, introduced to improve switchover and failover times: when MRP is started, it attempts to clear the online redo log files up front rather than at the time of role transition.
This is also expected behavior when the log_file_name_convert parameter is not set on the standby database. In our case, however, log_file_name_convert should not have been needed, as PRIMARY and STANDBY have the same directory structure (disk groups for DATA and RECO).
The workaround to get rid of the message, when there is no difference in the directory structure, is simply to set dummy values for the parameter, as shown in the example below:
SQL> ALTER SYSTEM SET log_file_name_convert='dummy','dummy';
After the parameter was set, the ORA-19527 message was no longer seen in the alert.log.
References:
ORA-19527 reported in Standby Database when starting Managed Recovery (Doc ID 352879.1)
ORA-19527: Physical Standby Redo Log Must Be Renamed...during switchover (Doc ID 2194825.1)
I was recently involved in a project to migrate an Oracle ERP database to an Exadata server. The ERP database was a non-RAC Oracle 12cR1 running on RHEL with Oracle E-Business Suite 12.1.3.
The migration involved only the ERP database, moving it from traditional storage/server technologies to the Exadata machine and upgrading it from non-RAC to RAC; the application remained on the same host. Though similar migration projects had been done successfully earlier, this time I faced a tough challenge resolving autoconfig issues. Here are the details of the issue and how I resolved the problem.
Downtime was not an issue, hence I opted for an RMAN backup and restore approach. In a nutshell, the following was done:
Prepared the Oracle home for ERP on Exadata and applied all recommended patches on the home.
Performed all mandatory steps required for ERP Oracle Home.
After a graceful shutdown of the application tier, the ERP database was stopped and started in MOUNT state to proceed with an RMAN full backup (this is the approach I used, though many other approaches are possible).
Copied the files to the target (Exadata system) and completed the recovery procedure.
Through a manual approach, converted the non-RAC database to RAC mode and completed the post-migration steps.
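For illustration, a partial sketch of a manual non-RAC to RAC conversion; the group sizes, database name, and node names below are hypothetical, and per-instance parameters such as instance_number and undo_tablespace also need to be set:

SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5 SIZE 1G, GROUP 6 SIZE 1G;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;
SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE SIZE 10G;
$ srvctl add database -db PROD -oraclehome $ORACLE_HOME
$ srvctl add instance -db PROD -instance PROD1 -node exadb01
$ srvctl add instance -db PROD -instance PROD2 -node exadb02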
It's mandatory to run the catbundle script after the database migration to avoid issues like a blank login page and problems changing passwords. Some of you might differ with my advice, but I faced this at multiple clients, so I decided to make this a practice right after migration.
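For reference, a typical catbundle invocation, run as SYSDBA from the new Oracle home (the psu/apply arguments depend on the patch bundle applied, so treat this as a sketch):

SQL> @?/rdbms/admin/catbundle.sql psu apply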
Autoconfig on the database nodes was successful. However, when autoconfig was executed on the application tier, it completed with warnings. The autoconfig.log had the following errors:
Updating s_tnsmode to 'generateTNS'
UpdateContext exited with status: 0
AC-50480: Internal error occurred: java.lang.Exception: Error while generating listener.ora.
Error generating tnsnames.ora from the database, temporary tnsnames.ora will be generated using templates
Instantiating Tools tnsnames.ora
Tools tnsnames.ora instantiated
Web tnsnames.ora instantiated
The NetServiceHandler.log reported the following error:
SQL*Plus: Release 10.1.0.5.0 - Production on Thu Feb 15 21:27:02 2018
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter value for 1: Enter value for 2: ERROR: ORA-12154: TNS:could not resolve the connect identifier specified
Unable to generate listener.ora from database Using default listener.ora file
After googling for a solution, the common workaround was the following (sketched after the steps):
Clean up the entries from fnd_nodes using EXEC FND_CONC_CLONE.SETUP_CLEAN;
Run autoconfig on the DB tier, then run autoconfig on the App tier.
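A minimal sketch of that workaround; the script locations are the standard EBS R12 ones and assume the usual context setup:

SQL> EXEC FND_CONC_CLONE.SETUP_CLEAN;
SQL> COMMIT;
$ $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh   (DB tier)
$ $ADMIN_SCRIPTS_HOME/adautocfg.sh                            (App tier)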
Unfortunately, this didn't help in our case.
When I looked at the hostnames registered in FND_NODES for the database, the hostname was registered against the management hostname (Exadata has management and public IPs). Though the client had opened the ports for the public hostname, they hadn't opened the firewall against the management network. Even though the server has multiple IPs (management and public), the DB server can take only one hostname. So, if you are on Exadata, ensure you know which hostname is registered on the server, despite the different IPs. After the firewall was opened for the listener ports on the management IP, autoconfig on the App tier ran successfully, and we managed to connect to the application.
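For illustration, a quick way to check what is registered; this is a sketch, assuming the FND_NODES columns in your EBS release match:

SQL> SELECT node_name, host, domain, status FROM apps.fnd_nodes;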
The bottom line is, when you run ERP databases on Exadata, you need to watch out for the hostnames and for which IPs have their ports opened from the application server.
One of our customers was struggling to delete backup data from the Oracle Cloud to avoid charges. Oracle Support suggested using the CloudBerry tools for easy management. Below is an excerpt from the CloudBerry website:
"With CloudBerry Backup you can use Oracle Storage Cloud Service as a cost-effective, remote backup solution for your enterprise data and applications. By backing up your data and applications to Oracle Storage Cloud Service, you can avoid large capital and operating expenditures in acquiring and maintaining storage hardware. By automating your backup routine to run at scheduled intervals, you can further reduce the operating cost of running your backup process. In the event of a disaster at your site, the data is safe in a remote location, and you can restore it quickly to your production systems"
CloudBerry offers the below Oracle tools:
CloudBerry Managed Backup
Visit their website to explore more about these tools.