
A walkthrough of the Transient Logical Standby
This blog post was created because I got some questions in recent weeks about upgrading using Data Guard. For migration to the cloud we now have Oracle Move and ZDM ( https://www.oracle.com/database/technologies/cloud-migration.html and https://www.oracle.com/database/technologies/rac/zdm.html ). Check them out as well!
Data Guard is meant as a Disaster Recovery (DR) solution, but to get more out of it, it is also often used for migrations.
My colleague Mike Dietrich ( https://mikedietrichde.com ) can tell you all the differences between upgrade and migration, but in short: when you stay on the same version and move the system, you migrate; when you go to a higher version, you upgrade.
That said, wouldn’t it be cool if Data Guard could be used for both? I think yes, and, extra cool, it is even possible. Of course, we cannot use the classic physical standby for this, as the redo stream differs between versions, but the solution is to use a Transient Logical Standby (TLS) database. You can also find this information in the Oracle documentation, of course.
How does this work? Well, a drawing says more than 1000 words.

The proof of the pudding is of course in the eating, so here we go.
- Create the guaranteed restore point
- Build the logminer dictionary
- Convert the physical standby to a logical standby
- Upgrade the logical standby
- Start the apply again and let it recover
- Switchover the primary database to the logical standby; at that point, you are already upgraded!
- The old primary is now a logical standby, so it needs to be flashed back
- Then converted to a physical standby.
- Due to that conversion, the redo (with the upgrade info in it) is sent to the old primary, now the new physical standby
- Finally, a switchover again to put the primary back in place
These are ten easy steps. So let’s go through the flow with an example. In this example we will upgrade an 11.2.0.4 database to 19c.
REMARK: after doing this, it is highly recommended to take another step and convert the non-CDB to a CDB in 19c.
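Condensed into statements, the ten steps look roughly like this. This is a sketch, not a runnable script: the restore point and instance names are the ones used in the example below, each statement must run on the side noted in the comment, and the upgrade itself (step 4) is done with a tool, not a statement.

```sql
-- 1. Primary: create the guaranteed restore point
create restore point before_upgrade guarantee flashback database;

-- 2. Primary: build the LogMiner dictionary
execute dbms_logstdby.build;

-- 3. Standby: convert the physical standby to a logical standby
alter database recover to logical standby keep identity;

-- 4. Standby: upgrade (DBUA / AutoUpgrade)

-- 5. Standby: start SQL Apply again and let it catch up
alter database start logical standby apply immediate;

-- 6. Switchover (on the old primary first, then on the new primary)
alter database commit to switchover to logical standby;
alter database commit to switchover to primary;

-- 7. Old primary: flash back to before the conversion
flashback database to restore point before_upgrade;

-- 8. Old primary (started from the NEW home): convert to physical standby
alter database convert to physical standby;

-- 9. Old primary: redo apply replays the upgrade
alter database recover managed standby database using current logfile disconnect from session;

-- 10. Switchover back to the original primary
alter database switchover to dgdemovm1;
```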
Create the guaranteed restore point
To do so, we need the primary database to be in archivelog mode and have flashback turned on. If this is your production database, this should already be the case. If not, I would highly recommend enabling flashback on both the primary and the standby database anyway.
[oracle@oelvm1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed May 22 09:25:05 2019

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning option

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SQL>
Then on the primary create the guaranteed restore point:
SQL> create restore point before_upgrade guarantee flashback database;

Restore point created.

SQL>
Build the logminer dictionary
Before we can convert the database to a logical standby, we must be sure that it will work.
First we check the schemas which will be skipped in the logical standby:
SQL> SELECT OWNER FROM DBA_LOGSTDBY_SKIP WHERE STATEMENT_OPT = 'INTERNAL SCHEMA' order by owner;

OWNER
------------------------------
ANONYMOUS
APEX_030200
APPQOSSYS
CTXSYS
DBSNMP
DIP
EXFSYS
MDSYS
ORACLE_OCM
ORDDATA
ORDPLUGINS
ORDSYS
OUTLN
SI_INFORMTN_SCHEMA
SYS
SYSTEM
WMSYS
XDB
XS$NULL

19 rows selected.

SQL>
Then we verify that all the objects in our database are supported in a logical standby:
SQL> SELECT DISTINCT OWNER,TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED ORDER BY OWNER,TABLE_NAME;

no rows selected

SQL>
This is the good case: we don’t have anything unsupported, so we can convert this physical standby to a logical standby.
Convert the physical standby to a logical standby
The broker does not support the logical standby database in this flow, which implies that we will have to stop the broker for this operation.
First step is to disable the broker configuration.
[oracle@oelvm1 ~]$ dgmgrl
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@dgdemovm1
Password:
Connected.
DGMGRL> disable configuration
Disabled.
DGMGRL> exit
[oracle@oelvm1 ~]$
And we stop the redo apply on the standby database.
SQL> alter database recover managed standby database cancel;

Database altered.

SQL>
Then, on the primary database, we build the logminer dictionary
[oracle@oelvm1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed May 22 09:30:00 2019

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning option

SQL> set serveroutput on
SQL> execute dbms_logstdby.build;

PL/SQL procedure successfully completed.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning option
[oracle@oelvm1 ~]$
Also keep an eye on the alert.log to make sure all goes well.
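To make that easier, a small helper can filter the interesting messages out of the alert.log. This is a sketch under assumptions: the grep patterns are mine, not from the original post, and the demo runs against a synthetic excerpt rather than a real alert.log (point the function at the file under your own diagnostic_dest).

```shell
# check_alert: print ORA- errors and LogMiner/SQL Apply messages from a logfile.
# The patterns are an assumption; extend them to whatever you care about.
check_alert() {
  grep -E 'ORA-|LOGMNR|LOGSTDBY' "$1" || echo "no matching messages"
}

# Demo against a synthetic alert.log excerpt (hypothetical content):
printf 'Thread 1 advanced to log sequence 42\nLOGSTDBY: Build complete\n' > /tmp/alert_demo.log
check_alert /tmp/alert_demo.log
# -> LOGSTDBY: Build complete
```

In real life you would rather run `tail -f` on the alert.log and pipe it through the same filter while the build is running.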
On the standby database, we convert it to a logical standby:
SQL> alter database recover to logical standby keep identity;

Database altered.

SQL>
Of course, trust but verify!
SQL> set pages 999 lines 200
SQL> select INSTANCE_NAME,HOST_NAME,database_role from v$instance, v$database;

INSTANCE_NAME    HOST_NAME                                                        DATABASE_ROLE
---------------- ---------------------------------------------------------------- ----------------
dgdemovm2        oelvm2.localdomain                                               LOGICAL STANDBY

SQL>
Then open the logical standby database:
SQL> alter database open;

Database altered.

SQL>
Make sure that foreign archived logs are not being deleted at the logical standby.
You can find more info on how to do that in the Oracle Documentation in Step 2 of this table https://docs.oracle.com/en/database/oracle/oracle-database/19/sbydb/using-sql-apply-to-perform-rolling-upgrade.html#GUID-C5DF6148-C1E9-4ADF-A975-AC95FC64E0C4
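In that documentation step this comes down to disabling automatic deletion of foreign archived logs via a SQL Apply setting; the parameter below is my reading of the linked table, so verify it for your version before relying on it:

```sql
-- On the logical standby: stop SQL Apply from deleting remote archived logs
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'FALSE');
```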
And finally, start logical standby apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Database altered.

SQL>
Upgrade the logical standby
For upgrade best practices, I would like to point you to Mike’s website (this is a good starting point: https://mikedietrichde.com/2019/04/29/the-new-autoupgrade-utility-in-oracle-19c/ ). It is full of good advice for performing a successful upgrade. Focusing on the upgrade itself would take us a bit too far in this blog post, so I just took the quick and easy way with DBUA to keep the focus on Data Guard.

Select the logical standby database and specify the SYSDBA credentials.

I have an old APEX version and it is telling me about it, but for this purpose the warning is less important. For production upgrades, you should care and make sure you fix what is necessary.

I did leave this default.

Better safe than sorry: DBUA will create a guaranteed restore point in case the upgrade fails. I like this! Just take into account that you need sufficient space in the Fast Recovery Area (FRA).
For real-life situations it is of course much better to select that you have your own backup and recovery strategy, as DBUA sometimes has interesting behaviour. This is not important for this blog post though.

In my demo VMs I already have a 19c listener running, so I prefer that one.

It’s a demo, so for me there was no need for monitoring.

After a nice summary we start the upgrade.

Waiting waiting waiting …

All done! We have a 19c logical standby database.
Start the apply again and let it recover
One of the convenient things about DBUA is that it also adapts your oratab entry for this database.
[oracle@oelvm2 ~]$ grep dgdemovm2 /etc/oratab
dgdemovm2:/u01/app/oracle/product/19.3.0/dbhome_1:N
[oracle@oelvm2 ~]$
That said, when you log on to your database, you might need a new shell, or at least rerun oraenv.
[oracle@oelvm2 ~]$ . oraenv
ORACLE_SID = [dbhome193] ? dgdemovm2
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@oelvm2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed May 22 10:35:48 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> set pages 999 lines 200
SQL> select INSTANCE_NAME,HOST_NAME,database_role from v$instance, v$database;

INSTANCE_NAME    HOST_NAME                                                        DATABASE_ROLE
---------------- ---------------------------------------------------------------- ----------------
dgdemovm2        oelvm2.localdomain                                               LOGICAL STANDBY

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Database altered.

SQL>
Monitor the lag and make sure the standby is in sync before continuing:
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';

Session altered.

SQL> SELECT SYSDATE, APPLIED_TIME FROM V$LOGSTDBY_PROGRESS;

SYSDATE            APPLIED_TIME
------------------ ------------------
23-MAY-19 06:48:46 23-MAY-19 06:48:41

SQL>
Switchover the primary database to the logical standby
At this point, you are already upgraded! Perform the switchover on the primary. As we disabled the broker, we need to do this manually:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;

Database altered.

SQL>
When all is OK, check on the standby database:
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

SQL>
And when it lists “TO PRIMARY”, you can complete the switchover:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

Database altered.

SQL>
And the role switch has been performed.
SQL> select database_role,host_name from v$database,v$instance;

DATABASE_ROLE    HOST_NAME
---------------- ----------------------------------------------------------------
PRIMARY          oelvm2.localdomain

SQL>
The old primary is now a logical standby, so it needs to be flashed back
As said, the old primary is now a logical standby:
[oracle@oelvm1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Thu May 23 06:55:31 2019

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning option

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
LOGICAL STANDBY

SQL>
Before we can convert it to a physical standby, we need to flash it back:
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1085640704 bytes
Fixed Size                  2252424 bytes
Variable Size             419430776 bytes
Database Buffers          654311424 bytes
Redo Buffers                9646080 bytes
Database mounted.
SQL> flashback database to restore point before_upgrade;

Flashback complete.

SQL> shutdown immediate;
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL>
As we do not run the upgrade on this system, but receive it via redo, we need to do the Oracle home switch ourselves.
First, edit the oratab entry to point to the 19c environment:
[oracle@oelvm1 ~]$ grep dgdemo /etc/oratab
#dgdemovm1:/u01/app/oracle/product/11.2.0/dbhome_1:N
dgdemovm1:/u01/app/oracle/product/19.3.0/dbhome_1:N
[oracle@oelvm1 ~]$
And set it, of course:
[oracle@oelvm1 ~]$ . oraenv
ORACLE_SID = [dgdemovm1] ? dgdemovm1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@oelvm1 ~]$
Copy the necessary files to the new 19c Oracle home:
[oracle@oelvm1 ~]$ cp /u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwdgdemovm1 $ORACLE_HOME/dbs
[oracle@oelvm1 ~]$ cp /u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfiledgdemovm1.ora $ORACLE_HOME/dbs
[oracle@oelvm1 ~]$
Then start the database from the new 19c Oracle home in mount mode:
[oracle@oelvm1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu May 23 06:59:10 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1090518112 bytes
Fixed Size                  8895584 bytes
Variable Size             419430400 bytes
Database Buffers          654311424 bytes
Redo Buffers                7880704 bytes
Database mounted.
SQL>
Then convert it to a physical standby
Convert the database to a physical standby again and stop it; we need to restart it anyway:
SQL> alter database convert to physical standby;

Database altered.

SQL>
SQL> shutdown immediate;
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL>
Due to that conversion, the redo (with the upgrade info in it) is sent to the old primary, the new physical standby
So, start up the instance again and enable managed recovery. Managed recovery waits until the new incarnation branch from the primary is registered before it starts applying redo.
So monitor the logfiles very carefully to make sure the incarnation has been updated.
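Besides the alert.log, you can also query the incarnations directly. This check is my own habit rather than part of the original flow; the standby should pick up a new incarnation with status CURRENT once the branch from the primary is registered:

```sql
SQL> select incarnation#, resetlogs_change#, status from v$database_incarnation order by incarnation#;
```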
SQL> startup mount
ORACLE instance started.

Total System Global Area 1090518112 bytes
Fixed Size                  8895584 bytes
Variable Size             301989888 bytes
Database Buffers          771751936 bytes
Redo Buffers                7880704 bytes
Database mounted.
SQL> alter database recover managed standby database using current logfile disconnect from session;

Database altered.

SQL>
Finally, a switchover is done again to put the primary back in place
On the primary database (the one we switched over to earlier) we ask Data Guard if it is safe to switch over:
SQL> alter database switchover to dgdemovm1 verify;
alter database switchover to dgdemovm1 verify
*
ERROR at line 1:
ORA-16475: succeeded with warnings, check alert log for more details

SQL>
That’s why you need to have a continuous tail -f on the alert.log.
…
2019-05-23T07:10:39.957433-04:00
SWITCHOVER VERIFY WARNING: The standby redo logfiles are not clear.
It takes time to clear standby redo logfiles. This may slow down switchover process.
SWITCHOVER VERIFY: Send VERIFY request to switchover target DGDEMOVM1
ORA-16475 signalled during: alter database switchover to dgdemovm1 verify...
…
For me this is OK; the switchover will just take a little longer, and in this case that is not an issue. For production systems you might want to fix this upfront; one way is to use the log_file_name_convert parameter. Take into account that this is a very standard, default setup, so you might not hit this issue at all.
There we go:
SQL> alter database switchover to dgdemovm1;

Database altered.

SQL>
On the new primary (again dgdemovm1 in this case) we need to open it and then check whether the role transition succeeded:
SQL> alter database open;

Database altered.

SQL>
SQL> set pages 999 lines 200
SQL> select host_name, database_role from v$database,v$instance;

HOST_NAME                                                        DATABASE_ROLE
---------------------------------------------------------------- ----------------
oelvm1.localdomain                                               PRIMARY

SQL>
Of course, the new standby (dgdemovm2 in this case) should now be a physical standby, and we can start redo apply again:
SQL> startup mount
ORACLE instance started.

Total System Global Area 1090518112 bytes
Fixed Size                  8895584 bytes
Variable Size             301989888 bytes
Database Buffers          771751936 bytes
Redo Buffers                7880704 bytes
Database mounted.
SQL> select host_name, database_role from v$database,v$instance;

HOST_NAME                                                        DATABASE_ROLE
---------------------------------------------------------------- ----------------
oelvm2.localdomain                                               PHYSICAL STANDBY

SQL>
SQL> alter database recover managed standby database using current logfile disconnect from session;

Database altered.

SQL>
At this point, the upgrade is happening on this physical standby database: we get the upgrade information via the redo stream, and it is applied to the standby, which will eventually result in an upgraded database the moment it has caught up.
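To watch the standby catch up, you can monitor the recovery processes and the lag. A sketch (v$managed_standby is the classic view for this and still works in 19c, although newer views such as v$dataguard_process exist):

```sql
SQL> select process, status, sequence# from v$managed_standby;
SQL> select name, value from v$dataguard_stats where name like '%lag';
```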
Enable the broker
One last step in this list: we highly recommend using the Data Guard broker.
Positive: it detects the broker files, and I don’t have to manually enable the configuration again.
[oracle@oelvm1 ~]$ dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Thu May 23 07:15:19 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@dgdemovm1
Password:
Connected to "dgdemovm1"
Connected as SYSDBA.
DGMGRL> show configuration

Configuration - dgdemo

  Protection Mode: MaxPerformance
  Members:
  dgdemovm1 - Primary database
    dgdemovm2 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 29 seconds ago)

DGMGRL>
And we can use a new 19c feature to check the lag of the configuration:
DGMGRL> show configuration lag

Configuration - dgdemo

  Protection Mode: MaxPerformance
  Members:
  dgdemovm1 - Primary database
    dgdemovm2 - Physical standby database
                Transport Lag:      0 seconds (computed 0 seconds ago)
                Apply Lag:          0 seconds (computed 0 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 48 seconds ago)

DGMGRL>
So folks, that’s all for now.
Try this before you do it for real, to get it into your head and your fingers, and then: happy upgrading!
As always: questions, remarks? Find me on Twitter: @vanpupi
2 thoughts on “A walkthrough of the Transient Logical Standby”
May I ask why you flash back the database using the restore point?
Because you need to get back to a consistent point in time before managed recovery can catch up again.