Dutra DBA Blog by André Luiz Dutra Ontalba - 2d ago


Last week, it was announced that Oracle and Microsoft have created a cloud-to-cloud connection between Oracle Cloud Infrastructure and Microsoft Azure in certain regions.

This connection allows you to run workloads across the two clouds without cross-cloud traffic passing over the public Internet.

OCI and Azure Interconnect services:

Limited to the Azure East US (eastus) region and the OCI Ashburn (us-ashburn-1) region as of now.

The ExpressRoute peering location is in proximity to, or the same as, the OCI FastConnect peering location.

On the identity side, it uses the common and well-known integration between IDCS and Microsoft Active Directory.

Provides low-latency, high-throughput cross-cloud connectivity.

Network peering is possible between Azure and OCI.

A multi-tier application can be partitioned to run the database on OCI and the application on Azure.

A cross-connect can be established between an ExpressRoute circuit in Azure and FastConnect in OCI.

Traffic between the two providers travels over a private network.

Network traffic can be controlled using Security Lists (OCI) and Network Security Groups (Azure).

Official documentation: https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/azure.htm

Hope this helps. See you !!! André  Ontalba

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies, or opinions. The information here was edited to be useful for a general purpose; specific data and identifiers were removed so it can reach a generic audience and remain useful.”

Dutra DBA Blog by André Luiz Dutra Ontalba - 1w ago

As we saw in the last article on installing Data Guard in Oracle 19c, we will now see how to monitor Data Guard.

-- This script is to be run on the Standby of a Data Guard Physical Standby Site

set echo off
set feedback off
column timecol new_value tstamp
column spool_extension new_value suffix
select to_char(sysdate,'Mondd_hhmi') timecol from sys.dual;
column output new_value dbname
select value || '_' output from v$parameter where name = 'db_name';

-- Output the results to this file

spool dg_Standby_diag_&&dbname&&tstamp
set lines 132
set pagesize 500
set numformat 999999999999999
set trim on
set trims on

-- Get the current Date

set feedback on
select systimestamp from dual;

-- Standby Site Details
set heading off
set feedback off
select 'Standby Site Details' from dual;
select '********************' from dual;
set heading on
set feedback on

col db_unique_name format a15
col flashb_on format a10

select DB_UNIQUE_NAME,DATABASE_ROLE DB_ROLE,FORCE_LOGGING F_LOG,FLASHBACK_ON FLASHB_ON,LOG_MODE,OPEN_MODE,
GUARD_STATUS GUARD,PROTECTION_MODE PROT_MODE
from v$database;

-- Current SCN - this value on the primary and standby sites where real time apply is in place should be nearly the same

select DB_UNIQUE_NAME,SWITCHOVER_STATUS,CURRENT_SCN from v$database;

-- Incarnation Information

set heading off
set feedback off
select 'Incarnation Destination Configuration' from dual;
select '*************************************' from dual;
set heading on
set feedback on

select INCARNATION# INC#, RESETLOGS_CHANGE# RS_CHANGE#, RESETLOGS_TIME, PRIOR_RESETLOGS_CHANGE# PRIOR_RS_CHANGE#, STATUS,FLASHBACK_DATABASE_ALLOWED FB_OK from v$database_incarnation;

set heading off
set feedback off
select 'Archive Destination Configuration' from dual;
select '*********************************' from dual;
set heading on
set feedback on


-- Current Archive Locations

column host_name format a30 tru
column version format a10 tru
select INSTANCE_NAME,HOST_NAME,VERSION,ARCHIVER from v$instance;

column destination format a35 wrap
column process format a7
column archiver format a8
column dest_id format 99999999

select DEST_ID,DESTINATION,STATUS,TARGET,ARCHIVER,PROCESS,REGISTER,TRANSMIT_MODE
from v$archive_dest
where DESTINATION IS NOT NULL;

column name format a22
column value format a100
select NAME,VALUE from v$parameter where NAME like 'log_archive_dest%' and upper(VALUE) like 'SERVICE%';

set heading off
set feedback off
select 'Archive Destination Errors' from dual;
select '**************************' from dual;
set heading on
set feedback on

column error format a55 tru
select DEST_ID,STATUS,ERROR from v$archive_dest
where DESTINATION IS NOT NULL;

column message format a80
select MESSAGE, TIMESTAMP
from v$dataguard_status
where SEVERITY in ('Error','Fatal')
order by TIMESTAMP;

-- Redo Log configuration
-- The size of the standby redo logs must exactly match the size of the online redo logs

set heading off
set feedback off
select 'Data Guard Redo Log Configuration' from dual;
select '*********************************' from dual;
set heading on
set feedback on

select GROUP# STANDBY_GROUP#,THREAD#,SEQUENCE#,BYTES,USED,ARCHIVED,STATUS from v$standby_log order by GROUP#,THREAD#;

select GROUP# ONLINE_GROUP#,THREAD#,SEQUENCE#,BYTES,ARCHIVED,STATUS from v$log order by GROUP#,THREAD#;

-- Data Guard Parameters

set heading off
set feedback off
select 'Data Guard Related Parameters' from dual;
select '*****************************' from dual;
set heading on
set feedback on

column name format a30
column value format a100
select NAME,VALUE from v$parameter where NAME IN ('db_unique_name','cluster_database','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2','fal_client','fal_server','log_archive_config','log_archive_trace','log_archive_max_processes','archive_lag_target','remote_login_password_file','redo_transport_user') order by name;

-- Managed Recovery State

set heading off
set feedback off
select 'Data Guard Apply Status' from dual;
select '***********************' from dual;
set heading on
set feedback on

select systimestamp from dual;

column client_pid format a10
select PROCESS,STATUS,CLIENT_PROCESS,CLIENT_PID,THREAD#,SEQUENCE#,BLOCK#,ACTIVE_AGENTS,KNOWN_AGENTS
from v$managed_standby order by CLIENT_PROCESS,THREAD#,SEQUENCE#;

host sleep 10

select systimestamp from dual;

select PROCESS,STATUS,CLIENT_PROCESS,CLIENT_PID,THREAD#,SEQUENCE#,BLOCK#,ACTIVE_AGENTS,KNOWN_AGENTS
from v$managed_standby order by CLIENT_PROCESS,THREAD#,SEQUENCE#;

host sleep 10

select systimestamp from dual;

select PROCESS,STATUS,CLIENT_PROCESS,CLIENT_PID,THREAD#,SEQUENCE#,BLOCK#,ACTIVE_AGENTS,KNOWN_AGENTS
from v$managed_standby order by CLIENT_PROCESS,THREAD#,SEQUENCE#;

set heading off
set feedback off
select 'Data Guard Apply Lag' from dual;
select '********************' from dual;
set heading on
set feedback on

column name format a12
column lag_time format a20
column datum_time format a20
column time_computed format a20
SELECT NAME, VALUE LAG_TIME, DATUM_TIME, TIME_COMPUTED
from V$DATAGUARD_STATS where name like 'apply lag';

-- If there is a lag, uncomment the select below

-- SELECT * FROM V$STANDBY_EVENT_HISTOGRAM WHERE NAME = 'apply lag' AND COUNT > 0;

set heading off
set feedback off
select 'Data Guard Gap Problems' from dual;
select '***********************' from dual;
set heading on
set feedback on

select * from v$archive_gap;

set heading off
set feedback off
select 'Data Guard Errors in the Last Hour' from dual;
select '**********************************' from dual;
set heading on
set feedback on

select TIMESTAMP,SEVERITY,ERROR_CODE,MESSAGE from v$dataguard_status where timestamp > systimestamp-1/24;
spool off

Hope this helps. See you !!! André  Ontalba  – www.dbadutra.com


Yesterday a new feature was released in the Autonomous Database.

With the new auto scaling feature, you can enable auto scaling during provisioning, or later using the Scale Up/Down button on the Oracle Cloud Infrastructure console.

When you enable auto scaling, Autonomous Data Warehouse can use up to three times the CPU and I/O resources specified by the number of OCPUs currently shown in the Scale Up/Down dialog. If your workload requires additional CPU and I/O resources, the database uses them automatically, with no manual intervention required.
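As a quick worked example of that 3x rule (the OCPU counts below are hypothetical, not from the announcement):

```shell
# Sketch: maximum burst capacity under auto scaling.
# The 3x multiplier comes from the feature description; the base values are examples.
for BASE_OCPUS in 1 2 8; do
  MAX_BURST=$((BASE_OCPUS * 3))
  echo "${BASE_OCPUS} OCPU(s) provisioned -> up to ${MAX_BURST} OCPUs under load"
done
```

So a 2-OCPU database may transparently consume up to 6 OCPUs during a spike.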

To see the average number of OCPUs used during an hour, you can use the “Number of OCPUs allocated” graph on the Overview page of the Autonomous Data Warehouse service console.

Enabling auto scaling does not change the concurrency and parallelism settings for the predefined services.

Hope this helps. See you !!! André  Ontalba  – www.dbadutra.com


This article provides a run through of creating a Database System using Exadata, Bare Metal or VM on the Oracle Cloud.

Log into Oracle Cloud and click “Bare Metal, VM and Exadata” in the Database section.

Select the compartment you want to build the service in, then click the “Launch DB System” button.

Enter the details of the service you want to create. We selected the VIRTUAL MACHINE type, because Bare Metal and Exadata were not available for our region.

We selected only one node for this article, along with the Enterprise Edition Extreme Performance option. We will prepare another article explaining in detail the differences between shapes and software editions for DB Systems.

Remember to select the appropriate licensing model.

Now we will generate the keys to use in our DB System.

We recommend you generate a key using the PuTTY Key Generator.


Click Generate and move the mouse until the key is created.

Once the key is created, save one copy as the public key and another as the private key.
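PuTTYgen is a Windows tool; on Linux or macOS the same kind of key pair can be generated with OpenSSH’s ssh-keygen (the output path /tmp/oci_key is just an example):

```shell
# Remove any previous example key, then generate a 2048-bit RSA key pair
# with an empty passphrase (use a passphrase in real environments).
rm -f /tmp/oci_key /tmp/oci_key.pub
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/oci_key -q

# /tmp/oci_key     -> private key, used later by the SSH client
# /tmp/oci_key.pub -> public key, uploaded in the "Choose Files" step below
ls -l /tmp/oci_key /tmp/oci_key.pub
```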

Now let’s provide the public key, so we can access the machine via SSH after it is created.

Click Choose Files.

Select the file saved as the public key, in my case Public_Keys.pub.

After that, if you have not created a VCN (Virtual Cloud Network), one will be created automatically. In my case I had already created one, so it came pre-selected.

Now enter the information about the database, and after that click Launch DB System.

Screen while creating the resource.

After about an hour the environment was created. You may ask why it takes so long.

The answer is simple: building a DB System involves several components, such as network, storage, compute, and the software installation.

Your DB System is now ready.

Now we will access the VM through SSH.

Take the IP address shown for you in this item and open it in an SSH client, remembering that we now use the private key to make the connection.

In my case I connect using MobaXterm: in Remote host I enter the IP, select Specify username and type “opc”, then select the private key that was generated.

The server is connected, and your DB System is ready to use.

Hope this helps. See you !!! André  
Dutra DBA Blog by André Luiz Dutra Ontalba - 3w ago

This article provides a run through of creating a new Autonomous Data Warehouse service on the Oracle Cloud.

 Log into Oracle Cloud and click the “Create Instance” link.

Click on the “Create” button in the Autonomous Data Warehouse

Select the compartment you want to build the service in, then click the “Create Autonomous Database” button.

Enter the details of the service you want to create. The default sizes are 1 CPU core and 1TB of storage. Remember to select the appropriate licensing model. Click the “Create Autonomous Database” button.

Wait while the service is provisioned. You will see the state is marked as “Provisioning”.

The details screen allows you to perform some basic operations with the service, including scale up/down, manual backups and restores from backups. Click on the “Service Console” button.

You are presented with the dashboard, which will look quite empty as the service has just been provisioned. Click the “Activity” link on the left menu.

You are presented with the activity screen, which will look relatively quiet as the service has just been provisioned. Click the “Administration” link on the left menu.

The administration screen allows you to perform some basic administration of the service.

Connecting to the Autonomous Data Warehouse Service Using SQL Developer

Go to the administration screen for the service and click the “Download Client Credentials (Wallet)”.

Enter the password to protect the credentials store.

Open SQL Developer and create a new connection. Use the username and password specified when you provisioned the service. Use a connection type of “Cloud Wallet” and enter the zip file location. You can now click the “Test” or “Connect” button.

Now it is all ready to use.

Hope this helps. See you !!!
André 
Dutra DBA Blog by André Luiz Dutra Ontalba - 3w ago

This article provides a run through of creating a new Autonomous Database service on the Oracle Cloud.

 Log into Oracle Cloud and click the “Create Instance” link.

Click on the “Create” button in the Autonomous Transaction Processing

Select the compartment you want to build the service in, then click the “Create Autonomous Database” button.

Enter the details of the service you want to create. The default sizes are 1 CPU core and 1TB of storage. Remember to select the appropriate licensing model. Click the “Create Autonomous Database” button.

Wait while the service is provisioned. You will see the state is marked as “Provisioning”.

The details screen allows you to perform some basic operations with the service, including scale up/down, manual backups and restores from backups. Click on the “Service Console” button.

You are presented with the dashboard, which will look quite empty as the service has just been provisioned. Click the “Activity” link on the left menu.

You are presented with the activity screen, which will look relatively quiet as the service has just been provisioned. Click the “Administration” link on the left menu.

The administration screen allows you to perform some basic administration of the service.

Connecting to the Autonomous Database Service Using SQL Developer

Go to the administration screen for the service and click the “Download Client Credentials (Wallet)”.

Enter the password to protect the credentials store.

Open SQL Developer and create a new connection. Use the username and password specified when you provisioned the service. Use a connection type of “Cloud Wallet” and enter the zip file location. You can now click the “Test” or “Connect” button.

Now it is all ready to use.

Hope this helps. See you !!!
André  Ontalba 

Dutra DBA Blog by André Luiz Dutra Ontalba - 1M ago

Let’s start our journey in the Oracle Cloud.

First we must log into the cloud console.

In my case I created my account in Frankfurt, because I am using the services here in Europe.

You will be directed to the login of your Tenancy.

Now that you are logged in, let’s start the creation of the Compute Instance.

Click Compute and then select Instances.

Click Create Instance.

First, give a name to your compute instance; I used Database_19c.

Then select the availability domain in which you want to create your compute instance; I chose AD2.

After that we choose the type of instance we want: VM (Virtual Machine) or Bare Metal (a dedicated compute instance). I chose the VM to be more careful with credit usage, since I do not need a dedicated compute instance right now. I will write an article further on giving an overview of the main principles and basics of working with OCI.

After doing all this we select the shape; for this example I will use the default VM.Standard2.1. I will explain the differences between the shapes in detail in the other article.

Now we will generate the keys to use in our compute instance

I recommend you generate a key using the PuTTY Key Generator.

Click Generate and move the mouse until the key is created.

Once the key is created, save one copy as the public key and another as the private key.

Now let’s provide the public key, so we can access the machine via SSH after it is created.

Click Choose Files.

Select the file saved as the public key, in my case Public_Keys.pub.

After that, if you have not created a VCN (Virtual Cloud Network), one will be created automatically. In my case I had already created one, so it came pre-selected.

Now click Create and wait a few minutes.

Screen while creating the resource.

Your compute instance is now ready.

Now we will access the VM through SSH.

Take the IP address shown for you in this item and open it in an SSH client, remembering that we now use the private key to make the connection.

In my case I connect using MobaXterm: in Remote host I enter the IP, select Specify username and type “opc”, then select the private key that was generated.

The server is connected and ready for the Oracle Database installation.

I hope this helped; new articles about the Cloud are on the way.


Environments

  • You have two servers (VMs or physical) with an operating system and the Oracle software installed on them. In my environment I used Oracle Linux 7.6 and Oracle Database 19c.
  • The primary server (duts-dg1) has a running instance.
  • The standby server (duts-dg2) has a software only installation.
  • There is nothing blocking communication between the machines over the listener ports.

Primary Server Setup

Logging

Check that the primary database is in archivelog mode.

SELECT log_mode FROM v$database;

LOG_MODE

------------

NOARCHIVELOG

SQL>

If it is in noarchivelog mode, switch it to archivelog mode.

SHUTDOWN IMMEDIATE;

STARTUP MOUNT;

ALTER DATABASE ARCHIVELOG;

ALTER DATABASE OPEN;

Enable forced logging by issuing the following command.

ALTER DATABASE FORCE LOGGING;

-- Make sure at least one logfile is present.

ALTER SYSTEM SWITCH LOGFILE;

Create standby redo logs on the primary database (in case of switchovers). The standby redo logs should be at least as big as the largest online redo log, and there should be one extra group per thread compared to the online redo logs. In my case, the following standby redo logs must be created on both servers.
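To apply that sizing rule, you can first inspect the current online redo logs; this quick check (standard v$log columns) shows the group count per thread and the largest log size:

```
-- Largest online redo log size and number of groups per thread.
-- Standby logs should use MAX(bytes) and have COUNT(*)+1 groups per thread.
SELECT thread#,
       COUNT(*)             AS online_groups,
       MAX(bytes)/1024/1024 AS max_size_mb
FROM   v$log
GROUP  BY thread#;
```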

-- If Oracle Managed Files (OMF) is not used.

ALTER DATABASE ADD STANDBY LOGFILE ('/u01/data/duts/std_redo01.log') SIZE 100M;

ALTER DATABASE ADD STANDBY LOGFILE ('/u01/data/duts/std_redo02.log') SIZE 100M;

ALTER DATABASE ADD STANDBY LOGFILE ('/u01/data/duts/std_redo03.log') SIZE 100M;

ALTER DATABASE ADD STANDBY LOGFILE ('/u01/data/duts/std_redo04.log') SIZE 100M;

-- If Oracle Managed Files (OMF) is used.

ALTER DATABASE ADD STANDBY LOGFILE SIZE 100M;

ALTER DATABASE ADD STANDBY LOGFILE SIZE 100M;

ALTER DATABASE ADD STANDBY LOGFILE SIZE 100M;

ALTER DATABASE ADD STANDBY LOGFILE SIZE 100M;

If you want to use flashback database, enable it on the primary now, so it will be enabled on the standby as well. I always use it in my environments.

ALTER DATABASE FLASHBACK ON;

Initialization Parameters

Check the setting for the DB_NAME and DB_UNIQUE_NAME parameters. In this case they are both set to “duts” on the primary database.

SQL> show parameter db_name

NAME                                TYPE       VALUE

------------------------------------ ----------- ------------------------------

db_name                             string     duts

SQL> show parameter db_unique_name

NAME                                TYPE       VALUE

------------------------------------ ----------- ------------------------------

db_unique_name                      string     duts

SQL>

The DB_NAME of the standby database will be the same as that of the primary, but it must have a different DB_UNIQUE_NAME value. For this example, the standby database will have the value “duts_stby”.

Make sure the STANDBY_FILE_MANAGEMENT parameter is set.

ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

Service Setup

Entries for the primary and standby databases are needed in the “$ORACLE_HOME/network/admin/tnsnames.ora” files on both servers.

You can create these using the Network Configuration Utility (netca) or manually.

The following entries were used during this setup. Notice the use of the SID, rather than the SERVICE_NAME in the entries. This is important as the broker will need to connect to the databases when they are down, so the services will not be present.

duts =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = duts-dg1)(PORT = 1521))

    )

    (CONNECT_DATA =

      (SID = duts)

    )

  )

duts_stby =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = duts-dg2)(PORT = 1521))

    )

    (CONNECT_DATA =

      (SID = duts)

    )

  )

The “$ORACLE_HOME/network/admin/listener.ora” file on the primary server contains the following configuration.

LISTENER =

  (DESCRIPTION_LIST =

    (DESCRIPTION =

      (ADDRESS = (PROTOCOL = TCP)(HOST = duts-dg1)(PORT = 1521))

      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))

    )

  )

SID_LIST_LISTENER =

  (SID_LIST =

    (SID_DESC =

      (GLOBAL_DBNAME = duts_DGMGRL)

      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/db_1)

      (SID_NAME = duts)

    )

  )

ADR_BASE_LISTENER = /u01/app/oracle

The “$ORACLE_HOME/network/admin/listener.ora” file on the standby server contains the following configuration.

Since the broker will need to connect to the database when it’s down, we can’t rely on auto-registration with the listener, hence the explicit entry for the database.

LISTENER =

  (DESCRIPTION_LIST =

    (DESCRIPTION =

      (ADDRESS = (PROTOCOL = TCP)(HOST = duts-dg2)(PORT = 1521))

      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))

    )

  )

SID_LIST_LISTENER =

  (SID_LIST =

    (SID_DESC =

      (GLOBAL_DBNAME = duts_stby_DGMGRL)

      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/db_1)

      (SID_NAME = duts)

    )

  )

ADR_BASE_LISTENER = /u01/app/oracle

Once the listener.ora changes are in place, restart the listener on both servers.

lsnrctl stop

lsnrctl start

Standby Server Setup

Prepare for Duplicate

Create a parameter file for the standby database called “/tmp/initduts_stby.ora” with the following contents.

*.db_name='duts'

Create the necessary directories on the standby server.

mkdir -p /u02/data/duts/pdbseed

mkdir -p /u02/data/duts/pdb1

mkdir -p /u02/app/oracle/fast_recovery_area/duts

mkdir -p /u02/app/oracle/admin/duts/adump

Create a password file, with the SYS password matching that of the primary database.

$ orapwd file=/u01/app/oracle/product/19.0.0/db_1/dbs/orapwduts password=oracle entries=10

Create Standby Using DUPLICATE

Start the auxiliary instance on the standby server by starting it using the temporary “init.ora” file.

$ export ORACLE_SID=duts

$ sqlplus / as sysdba

SQL> STARTUP NOMOUNT PFILE='/tmp/initduts_stby.ora';

Connect to RMAN, specifying a full connect string for both the TARGET and AUXILIARY instances. Do not attempt to use OS authentication.

$ rman TARGET sys/oracle@duts AUXILIARY sys/oracle@duts_stby

Now issue the following DUPLICATE command.

DUPLICATE TARGET DATABASE

  FOR STANDBY

  FROM ACTIVE DATABASE

  DORECOVER

  SPFILE

    SET db_unique_name='duts_stby' COMMENT 'Is standby 19c'

  NOFILENAMECHECK;

If you need to convert file locations, or alter any initialization parameters, you can do this during the DUPLICATE using the SET command.

DUPLICATE TARGET DATABASE

  FOR STANDBY

  FROM ACTIVE DATABASE

  DORECOVER

  SPFILE

    SET db_unique_name='duts_stby' COMMENT 'Is standby 19c'

    SET db_file_name_convert='/u01/data/duts/','/u02/data/duts/'

    SET log_file_name_convert='/u01/data/duts/','/u02/data/duts/'

    SET job_queue_processes='0'

  NOFILENAMECHECK;

A brief explanation of the individual clauses is shown below.

  • FOR STANDBY: This tells DUPLICATE that the copy will be used as a standby, so it will not force a DBID change.

  • FROM ACTIVE DATABASE: The DUPLICATE will be created directly from the source datafiles, without an additional backup step.

  • DORECOVER: The DUPLICATE will include the recovery step, bringing the standby up to the current point in time.

  • SPFILE: Allows us to reset values in the spfile when it is copied from the source server.

  • NOFILENAMECHECK: Destination file locations are not checked.

Once the command is complete, we can start using the broker.

Enable Broker

At this point we have a primary database and a standby database, so now we need to start using the Data Guard Broker to manage them. Connect to both databases (primary and standby) and issue the following command.

ALTER SYSTEM SET dg_broker_start=true;

On the primary server, issue the following command to register the primary server with the broker.

$ dgmgrl sys/oracle@duts

DGMGRL for Linux: Release 19.0.0.0.0 – Production on Tue May 11 14:39:33 2019

Version 19.2.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type “help” for information.

Connected as SYSDBA.

DGMGRL> CREATE CONFIGURATION dg_config AS PRIMARY DATABASE IS duts CONNECT IDENTIFIER IS duts;

Configuration “dg_config” created with primary database “duts”

DGMGRL>

Now add the standby database.

DGMGRL> ADD DATABASE duts_stby AS CONNECT IDENTIFIER IS duts_stby MAINTAINED AS PHYSICAL;

Database “duts_stby” added

DGMGRL>

Now we enable the new configuration.

DGMGRL> ENABLE CONFIGURATION;

Enabled.

DGMGRL>

The following commands show how to check the configuration and status of the databases from the broker.

DGMGRL> SHOW CONFIGURATION;

Configuration – dg_config

  Protection Mode: MaxPerformance

  Members:

  duts      – Primary database

    duts_stby – Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:

SUCCESS   (status updated 26 seconds ago)

DGMGRL> SHOW DATABASE duts;

Database – duts

  Role:               PRIMARY

  Intended State:     TRANSPORT-ON

  Instance(s):

    duts

Database Status:

SUCCESS

DGMGRL> SHOW DATABASE duts_stby;

Database – duts_stby

  Role:               PHYSICAL STANDBY

  Intended State:     APPLY-ON

  Transport Lag:      0 seconds (computed 1 second ago)

  Apply Lag:          0 seconds (computed 1 second ago)

  Average Apply Rate: 5.00 KByte/s

  Real Time Query:    OFF

  Instance(s):

    duts

Database Status:

SUCCESS

DGMGRL>

Database Switchover

A database can be in one of two mutually exclusive modes (primary or standby). These roles can be altered at runtime without loss of data or resetting of redo logs. This process is known as a Switchover and can be performed using the following commands. Connect to the primary database (duts) and switchover to the standby database (duts_stby).

$ dgmgrl sys/oracle@duts

DGMGRL for Linux: Release 19.0.0.0.0 – Production on Tue May 11 14:55:33 2019

Version 19.2.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type “help” for information.

Connected as SYSDBA.

DGMGRL> SWITCHOVER TO duts_stby;

Performing switchover NOW, please wait…

Operation requires a connection to instance “duts” on database “duts_stby”

Connecting to instance “duts”…

Connected as SYSDBA.

New primary database “duts_stby” is opening…

Operation requires start up of instance “duts” on database “duts”

Starting instance “duts”…

ORACLE instance started.

Database mounted.

Switchover succeeded, new primary is “duts_stby”

DGMGRL>

Let’s switch back to the original primary. Connect to the new primary (duts_stby) and switchover to the new standby database (duts).

$ dgmgrl sys/oracle@duts_stby

DGMGRL for Linux: Release 19.0.0.0.0 – Production on Tue May 11 14:57:20 2019

Version 19.2.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type “help” for information.

Connected as SYSDBA.

DGMGRL> SWITCHOVER TO duts;

Performing switchover NOW, please wait…

Operation requires a connection to instance “duts” on database “duts”

Connecting to instance “duts”…

Connected as SYSDBA.

New primary database “duts” is opening…

Operation requires start up of instance “duts” on database “duts_stby”

Starting instance “duts”…

ORACLE instance started.

Database mounted.

Switchover succeeded, new primary is “duts”

DGMGRL>

Database Failover

If the primary database is not available the standby database can be activated as a primary database using the following statements. Connect to the standby database (duts_stby) and failover.

$ dgmgrl sys/oracle@duts_stby

DGMGRL for Linux: Release 19.0.0.0.0 – Production on Tue May 11 15:00:20 2019

Version 19.2.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type “help” for information.

Connected as SYSDBA.

DGMGRL> FAILOVER TO duts_stby;

Performing failover NOW, please wait…

Failover succeeded, new primary is “duts_stby”

DGMGRL>

Since the standby database is now the primary database, it should be backed up immediately.
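A minimal backup of the new primary could look like the following RMAN sketch (the connection string reuses the example credentials from this article; adjust it to your environment):

```
$ rman target sys/oracle@duts_stby

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
```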

The original primary database can now be configured as a standby. If flashback database was enabled on the primary database, then this can be done relatively easily with the following command.

DGMGRL> REINSTATE DATABASE duts;

Reinstating database “duts”, please wait…

Operation requires shut down of instance “duts” on database “duts”

Shutting down instance “duts”…

ORACLE instance shut down.

Operation requires start up of instance “duts” on database “duts”

Starting instance “duts”…

ORACLE instance started.

Database mounted.

Continuing to reinstate database “duts” …

Reinstatement of database “duts” succeeded

DGMGRL>

If flashback database is not enabled, you would have to manually recreate duts as a standby. The basic process is the reverse of what you did previously.

# 1) Cleanup the old instance.

sqlplus / as sysdba <<EOF

SHUTDOWN IMMEDIATE;

EXIT;

EOF

rm -Rf /u01/data/duts/*

rm -Rf /u01/app/oracle/fast_recovery_area/duts

rm -Rf /u01/app/oracle/fast_recovery_area/duts_stby

rm -Rf /u01/app/oracle/admin/duts

mkdir -p /u01/app/oracle/fast_recovery_area/duts

mkdir -p /u01/app/oracle/admin/duts/adump

mkdir -p /u01/data/duts/pdbseed

mkdir -p /u01/data/duts/pdb1

rm $ORACLE_HOME/dbs/spfileduts.ora

export ORACLE_SID=duts

sqlplus / as sysdba <<EOF

STARTUP NOMOUNT PFILE='/tmp/initduts_stby.ora';

EXIT;

EOF

# 2) Connect to RMAN.

$ rman TARGET sys/oracle@duts_stby AUXILIARY sys/oracle@duts

# 3) Duplicate the database.

DUPLICATE TARGET DATABASE

  FOR STANDBY

  FROM ACTIVE DATABASE

  DORECOVER

  SPFILE

    SET db_unique_name='duts' COMMENT 'Is standby 19c'

    SET db_file_name_convert='/u02/data/duts/','/u01/data/duts/'

    SET log_file_name_convert='/u02/data/duts/','/u01/data/duts/'

    SET job_queue_processes='0'

  NOFILENAMECHECK;

# 4) Connect to DGMGRL on the current primary.

$ dgmgrl sys/oracle@duts_stby

# 5) Enable the new standby.

DGMGRL> ENABLE DATABASE duts;

Flashback Database

It was already mentioned in the previous section, but it is worth drawing your attention to Flashback Database once more. Although a switchover/switchback is safe for both the primary and standby database, a failover renders the original primary database useless for converting to a standby database. If flashback database is not enabled, the original primary must be scrapped and recreated as a standby database.

An alternative is to enable flashback database on the primary (and the standby if desired) so in the event of a failover, the primary can be flashed back to the time before the failover and quickly converted to a standby database, as shown above.
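As a sketch, enabling Flashback Database on the primary (assuming the fast recovery area parameters db_recovery_file_dest and db_recovery_file_dest_size are already configured) looks like this:

```sql
-- Check whether flashback database is already enabled.
SELECT flashback_on FROM v$database;

-- Enable it; from 11gR2 onwards this can be done with the database open.
ALTER DATABASE FLASHBACK ON;

-- Optionally adjust the retention target (in minutes; the default is 1440 = 1 day).
ALTER SYSTEM SET db_flashback_retention_target=1440;
```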

Creation of application services

To facilitate the administration of client connections, and to make SWITCHOVER operations more transparent for clients, it is recommended to create database SERVICES.

Example: definition of the service "DUTSS":

begin

  DBMS_SERVICE.CREATE_SERVICE (service_name     => 'DUTSS',

                               network_name     => 'DUTSS',

                               failover_method  => 'BASIC',

                               failover_type    => 'SELECT',

                               failover_retries => 180,

                               failover_delay   => 1);

end;

/

In this case, there are 180 retries and a delay of 1 second (so basically 3 minutes before switching).  This should be adapted depending on your needs and requirements.

These are the services that should be used by client application connections.
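As an example, a client-side tnsnames.ora entry for the DUTSS service could list both database hosts, so that connections follow the service to whichever database currently holds the primary role (the host names below are hypothetical):

```
DUTSS =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = duts-host1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = duts-host2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DUTSS)
    )
  )
```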

Creating the Startup trigger

To manage the automatic start of the services, in particular in the event of a role transition, the following trigger must be created (example for the DUTSS service). The trigger must be created under SYS:

Connect SYS as SYSDBA

CREATE OR REPLACE TRIGGER manage_app_services

   AFTER STARTUP

   ON DATABASE

DECLARE

   role   VARCHAR (30);

BEGIN

   SELECT   DATABASE_ROLE INTO role FROM V$DATABASE;

   IF role = 'PRIMARY'

  ..
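The article is truncated at this point, but as a sketch following the usual pattern for this kind of trigger (and reusing the DUTSS service name from the example above), the body typically continues by starting the service only when the database holds the PRIMARY role:

```sql
-- Hypothetical continuation of the trigger body:
IF role = 'PRIMARY'
THEN
   DBMS_SERVICE.START_SERVICE ('DUTSS');
END IF;
```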



Oracle Database 19c on premises is available and one of the new features is the AutoUpgrade utility.

What is the AutoUpgrade?

The Oracle Database AutoUpgrade utility is a new tool which allows you to upgrade your databases in an unattended way.

The idea of the tool is to run the prechecks against multiple databases, fix potential issues, set a restore point in case something goes wrong, and then upgrade your databases.

And, of course, it also does the post-upgrade steps: recompilation and time zone adjustment.

The only thing you need to provide is a config file in text format.
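As a sketch, a minimal config file could look like this (the log directory, Oracle Home paths, prefix and SID below are hypothetical):

```
# duts.cfg (hypothetical example)
global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.3.0/dbhome_1
upg1.sid=duts
```

You would then typically run `java -jar autoupgrade.jar -config duts.cfg -mode analyze` first to check the database, and `-mode deploy` to perform the actual upgrade.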

Which database releases are supported?

According to the MOS Note: 2485457.1 – AutoUpgrade Tool, only the versions below are supported:

  • Oracle Database 19.3.0 and newer
  • Oracle Database 18.5.0 and newer
  • Oracle Database 12.2.0.1 with Jan 2019 RU and newer

Where do you get the AutoUpgrade?

You get it when you install Oracle Database 19c (19.3), or you download the most recent version from MOS Note: 2485457.1 – AutoUpgrade Tool.

Where do you find the AutoUpgrade documentation?

It is all here included in the Oracle Database 19c Upgrade Guide:

Using AutoUpgrade for Oracle Database Upgrades

Hope this helps !!!

Soon we will have more articles on the Oracle 19c.



Hey guys !!!!

Another article, this time a little less technical, talking a bit about the DBA career and what it may look like in a few years.

I have been talking to several people who keep telling me that the DBA career is ending and that there will be no going back.

Well, it depends. As in every evolution, just as in the Industrial Revolution, where great economic and social transformations took place, many professions also emerged that had not existed until then.

Now we are heading towards Industry 4.0, the Fourth Industrial Revolution, where more and more tasks are being automated, changing our way of living and creating ever greater dependence on technology.

So my answer is: it depends. As we saw in Oracle's latest releases, for example, with Oracle 18c and 19c, things are being automated and becoming autonomous.

This forces DBAs, who have always worked with databases, to keep themselves up to date and tuned in to new technologies.

My view is that in 5 to 10 years we will have a great transformation in the DBA career. Of course, there will be legacy systems to maintain for many, many years; our beloved Mainframes are a good example.

Even so, companies are increasingly optimizing their resources and investments and are already looking for cloud solutions, among others, in order to reduce costs.

Given that, I see some paths that will be well explored by current and future DBAs:

Data Engineer

Data Scientist

Architect / Cloud Engineer

I believe that every DBA, within their specialty, will end up finding a path that leads to their renewal. We can no longer expect things to be as they were 15 years ago; this is a new challenge, the great revolution that is taking place.

Remember that this is my personal opinion, based on my market experience and on conversations with many people.

Thanks

