Thinking Out Loud

March 28, 2020

Silent Install 11.2.0.4 DB Software With GI 18c On OEL 7.7

Filed under: 11g,18c,Grid Infrastructure,OEL7 — mdinh @ 8:45 pm

Just some notes:

One good thing about a GUI install is that it lets you fix any issues and retry; not so much with a silent install.

================================================================================
Requirements for Installing Oracle 11.2.0.4 RDBMS on OL7 or RHEL7 64-bit (x86-64) (Doc ID 1962100.1)	

PRVF-4037 : CRS is not installed on any of the nodes (Doc ID 1316815.1)	

Installation of Oracle 11.2.0.4 Database Software on OL7 fails with 'Error in invoking target 'agent nmhs' of makefile ' & 
"undefined reference to symbol 'B_DestroyKeyObject'" error (Doc ID 1965691.1)	
================================================================================


================================================================================
### First install attempt without -ignorePrereq
================================================================================

For reference, this is the -ignorePrereq option used in the retry later:

$ ./runInstaller -ignorePrereq

Note that the above command does not perform any pre-requisite checks. 
Hence, ensure that all the software requirements documented in the install guide are fulfilled before executing the installer using the above option.

================================================================================

[oracle@ol7-183-rac1 ~]$ ./install_db_software.sh

+ /u01/app/oracle/software/database/runInstaller -force -silent -waitforcompletion
-responseFile /u01/app/oracle/software/database/response/db_install.rsp 
oracle.install.option=INSTALL_DB_SWONLY 
ORACLE_HOSTNAME=ol7-183-rac1.localdomain 
UNIX_GROUP_NAME=oinstall 
INVENTORY_LOCATION=/u01/app/oraInventory 
SELECTED_LANGUAGES=en ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1 
ORACLE_BASE=/u01/app/oracle 
oracle.install.db.InstallEdition=EE 
oracle.install.db.EEOptionsSelection=false 
oracle.install.db.DBA_GROUP=dba 
oracle.install.db.OPER_GROUP=oper 
oracle.install.db.CLUSTER_NODES=ol7-183-rac1,ol7-183-rac2 
oracle.installer.autoupdates.option=SKIP_UPDATES 
oracle.install.db.isRACOneInstall=false 
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false 
DECLINE_SECURITY_UPDATES=true

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 25005 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 17391 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-03-26_04-15-06PM. Please wait ...

[FATAL] [INS-13013] Target environment do not meet some mandatory requirements.
   CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log. 
   Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
[oracle@ol7-183-rac1 ~]$


================================================================================
### Review types of errors
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -e '[[:upper:]]: ' /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log |cut -d ":" -f1 |sort -u
   ACTION
   CAUSE
INFO
SEVERE
WARNING
[oracle@ol7-183-rac1 ~]$
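
A small variant of the same grep (not from the original run) that also counts how often each top-level message type occurs:

$ grep -oE '^(INFO|WARNING|SEVERE)' /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log | sort | uniq -c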


================================================================================
### Review List of failed Tasks
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -A100 "List of failed Tasks" /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
INFO: ------------------List of failed Tasks------------------
INFO: *********************************************
INFO: Package: pdksh-5.2.14: This is a prerequisite condition to test whether the package "pdksh-5.2.14" is available on the system.
INFO: Severity:IGNORABLE
INFO: OverallStatus:VERIFICATION_FAILED
INFO: *********************************************
INFO: CRS Integrity: This test checks the integrity of Oracle Clusterware stack across the cluster nodes.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Cluster Manager Integrity: This test checks the integrity of cluster manager across the cluster nodes.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Node Application Existence: This test checks the existence of Node Applications on the system.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Clock Synchronization: This test checks the Oracle Cluster Time Synchronization Services across the cluster nodes.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Database Clusterware Version Compatibility: This test ensures that the Database version is compatible with the CRS version.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: -----------------End of failed Tasks List----------------
INFO: Adding ExitStatus PREREQUISITES_NOT_MET to the exit status set
SEVERE: [FATAL] [INS-13013] Target environment do not meet some mandatory requirements.
   CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
INFO: Advice is ABORT
INFO: Adding ExitStatus INVALID_USER_INPUT to the exit status set
INFO: Completed validating state {performChecks}
INFO: Terminating all background operations
INFO: Terminated all background operations
INFO: Finding the most appropriate exit status for the current application
INFO: Exit Status is -3
INFO: Shutdown Oracle Database 11g Release 2 Installer
[oracle@ol7-183-rac1 ~]$


================================================================================
### Search for "Error Message"
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -i 'error message' /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
INFO: Error Message:PRVF-7532 : Package "pdksh" is missing on node "ol7-183-rac2"
INFO: Error Message:PRVF-7532 : Package "pdksh" is missing on node "ol7-183-rac1"
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
[oracle@ol7-183-rac1 ~]$
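
Each failed check logs the same message once per node/occurrence; a sort -u (again a variant, not from the post) collapses the duplicates:

$ grep -i 'error message' /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log | sort -u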


================================================================================
PRVF-4037 : CRS is not installed on any of the nodes (Doc ID 1316815.1)	
The bug is fixed in 11.2.0.3; the workaround is to update the GI home inventory entry with the CRS="true" flag.
================================================================================


================================================================================
### Check inventory for GI RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2020, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="1" CRS="true"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>


================================================================================
### UPDATE inventory for GI RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ . oraenv <<< +ASM1
ORACLE_SID = [cdbrac1] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-183-rac1 ContentsXML]$ export GRID_HOME=$ORACLE_HOME

[oracle@ol7-183-rac1 ContentsXML]$ $GRID_HOME/oui/bin/runInstaller -silent -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={ol7-183-rac1,ol7-183-rac2}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 17391 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.


================================================================================
### VERIFY inventory for GI RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2020, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="ol7-183-rac1"/>
      <NODE NAME="ol7-183-rac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@ol7-183-rac1 ContentsXML]$


================================================================================
### Retry Install
================================================================================

[oracle@ol7-183-rac1 ~]$ cat install_db_software.sh
#!/bin/sh -x
/u01/app/oracle/software/database/runInstaller -force \
-silent -waitforcompletion -ignorePrereq \
-responseFile /u01/app/oracle/software/database/response/db_install.rsp \
oracle.install.option=INSTALL_DB_SWONLY \
ORACLE_HOSTNAME=ol7-183-rac1.localdomain \
UNIX_GROUP_NAME=oinstall \
INVENTORY_LOCATION=/u01/app/oraInventory \
SELECTED_LANGUAGES=en \
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1 \
ORACLE_BASE=/u01/app/oracle \
oracle.install.db.InstallEdition=EE \
oracle.install.db.EEOptionsSelection=false \
oracle.install.db.DBA_GROUP=dba \
oracle.install.db.OPER_GROUP=oper \
oracle.install.db.CLUSTER_NODES=ol7-183-rac1,ol7-183-rac2 \
oracle.installer.autoupdates.option=SKIP_UPDATES \
oracle.install.db.isRACOneInstall=false \
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
DECLINE_SECURITY_UPDATES=true
[oracle@ol7-183-rac1 ~]$


[oracle@ol7-183-rac1 ~]$ ./install_db_software.sh
+ /u01/app/oracle/software/database/runInstaller -force -silent -waitforcompletion -ignorePrereq 
-responseFile /u01/app/oracle/software/database/response/db_install.rsp 
oracle.install.option=INSTALL_DB_SWONLY 
ORACLE_HOSTNAME=ol7-183-rac1.localdomain 
UNIX_GROUP_NAME=oinstall 
INVENTORY_LOCATION=/u01/app/oraInventory 
SELECTED_LANGUAGES=en 
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1 
ORACLE_BASE=/u01/app/oracle 
oracle.install.db.InstallEdition=EE 
oracle.install.db.EEOptionsSelection=false 
oracle.install.db.DBA_GROUP=dba 
oracle.install.db.OPER_GROUP=oper 
oracle.install.db.CLUSTER_NODES=ol7-183-rac1,ol7-183-rac2 
oracle.installer.autoupdates.option=SKIP_UPDATES 
oracle.install.db.isRACOneInstall=false 
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false 
DECLINE_SECURITY_UPDATES=true

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 24578 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 17391 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-03-26_05-17-28PM. Please wait ...

You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2020-03-26_05-17-28PM.log

The installation of Oracle Database 11g was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2020-03-26_05-17-28PM.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh

Execute /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh on the following nodes:
[ol7-183-rac1, ol7-183-rac2]

Successfully Setup Software.
[oracle@ol7-183-rac1 ~]$


[root@ol7-183-rac1 ~]# /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh
Check /u01/app/oracle/product/11.2.0.4/dbhome_1/install/root_ol7-183-rac1.localdomain_2020-03-26_17-44-13.log for the output of root script
[root@ol7-183-rac1 ~]#


[root@ol7-183-rac2 ~]# /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh
Check /u01/app/oracle/product/11.2.0.4/dbhome_1/install/root_ol7-183-rac2.localdomain_2020-03-26_17-44-55.log for the output of root script
[root@ol7-183-rac2 ~]#


================================================================================
### FROM silentInstall*.log - Known Issues - (Doc ID 1965691.1)	
================================================================================

[oracle@ol7-183-rac1 ~]$ cat /u01/app/oraInventory/logs/silentInstall2020-03-26_05-17-28PM.log
silentInstall2020-03-26_05-17-28PM.log
sNativeVolName:/u01/app/oracle/product/11.2.0.4/dbhome_1/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
sNativeVolName:/tmp/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
Error in invoking target 'agent nmhs' of makefile '/u01/app/oracle/product/11.2.0.4/dbhome_1/sysman/lib/ins_emagent.mk'. See '/u01/app/oraInventory/logs/installActions2020-03-26_05-17-28PM.log' for details.
sNativeVolName:/u01/app/oracle/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
sNativeVolName:/u01/app/oraInventory/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
The installation of Oracle Database 11g was successful.
[oracle@ol7-183-rac1 ~]$
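
Per Doc ID 1965691.1, the 'agent nmhs' failure is a link error against libnnz11; the documented workaround is to add -lnnz11 to the $(MK_EMAGENT_NMECTL) line of ins_emagent.mk and retry the target. A sketch (the sed pattern and the manual make invocation are mine, not from the note):

$ cd /u01/app/oracle/product/11.2.0.4/dbhome_1/sysman/lib
$ cp ins_emagent.mk ins_emagent.mk.bak
$ sed -i 's/^\(\s*\$(MK_EMAGENT_NMECTL)\)\s*$/\1 -lnnz11/' ins_emagent.mk
$ make -f ins_emagent.mk agent nmhs ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1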


================================================================================
### Check installActions*.log
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -e '[[:upper:]]: ' /u01/app/oraInventory/logs/installActions2020-03-26_05-17-28PM.log |cut -d ":" -f1 |sort -u
INFO
WARNING
[oracle@ol7-183-rac1 ~]$


================================================================================
### Check inventory for DB RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="ol7-183-rac1"/>
      <NODE NAME="ol7-183-rac2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0.4/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="ol7-183-rac1"/>
      <NODE NAME="ol7-183-rac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@ol7-183-rac1 ContentsXML]$


================================================================================
### cluvfy comp healthcheck
================================================================================

[oracle@ol7-183-rac1 cvu]$ . oraenv <<< +ASM1
ORACLE_SID = [cdbrac1] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-183-rac1 ~]$ cluvfy comp healthcheck

Verification of Health Check was unsuccessful.
Checks did not pass for the following nodes:
        ol7-183-rac2,ol7-183-rac1


Failures were encountered during execution of CVU verification request "Health Check".

Verifying Physical Memory ...FAILED
ol7-183-rac2: PRVF-7530 : Sufficient physical memory is not available on node
              "ol7-183-rac2" [Required physical memory = 8GB (8388608.0KB)]

ol7-183-rac1: PRVF-7530 : Sufficient physical memory is not available on node
              "ol7-183-rac1" [Required physical memory = 8GB (8388608.0KB)]

Verifying Ethernet Jumbo Frames ...FAILED
ol7-183-rac2: PRVE-0293 : Jumbo Frames are not configured for interconnects
              "eth2" on node "ol7-183-rac2.localdomain". [Expected="eth2=9000";
              Found="eth2=1500"]

ol7-183-rac1: PRVE-0293 : Jumbo Frames are not configured for interconnects
              "eth2" on node "ol7-183-rac1.localdomain". [Expected="eth2=9000";
              Found="eth2=1500"]


CVU operation performed:      Health Check
Date:                         Mar 26, 2020 6:07:08 PM
CVU home:                     /u01/app/18.0.0/grid/
User:                         oracle
[oracle@ol7-183-rac1 cvu]$

September 5, 2018

tnsping for DataGuard

Filed under: 11g,Dataguard — mdinh @ 10:56 pm

I am preparing Dataguard for a switchover with 1 primary and 3 standbys, and I should be able to tnsping all of the services listed in log_archive_config=DG_CONFIG=(HAWKA,HAWKB,HAWKC).

Not sure how valuable this may be for you; I wanted to perform all the tasks in one command and still know where the error occurred.

tnsping HAWKC is failing in the 2nd position.

[oracle@db-fs-1 ~]$ { tnsping HAWKA & tnsping HAWKC & tnsping HAWKB & echo ; } > /tmp/tnsping_`hostname -s`; ls -l /tmp/tnsping_`hostname -s`
[1] 18375
[2] 18376
[3] 18377
-rw-r--r-- 1 oracle oinstall 1208 Sep  6 00:45 /tmp/tnsping_db-fs-1
[1]   Done                    tnsping HAWKA
[2]-  Exit 1                  tnsping HAWKC
[3]+  Done                    tnsping HAWKB
[oracle@db-fs-1 ~]$

tnsping HAWKC is failing in the 3rd position.

[oracle@db-fs-1 ~]$ { tnsping HAWKA & tnsping HAWKB & tnsping HAWKC & echo $? ; } > /tmp/tnsping_`hostname -s`; ls -l /tmp/tnsping_`hostname -s`
[1] 18433
[2] 18434
[3] 18435
-rw-r--r-- 1 oracle oinstall 1210 Sep  6 00:46 /tmp/tnsping_db-fs-1
[1]   Done                    tnsping HAWKA
[2]-  Done                    tnsping HAWKB
[3]+  Exit 1                  tnsping HAWKC
[oracle@db-fs-1 ~]$

tnsping HAWKC is failing in the 3rd position.
There were 3 processes spawned, and I had to press Enter to get the final results (a wait-based alternative is sketched after the listing below).

[oracle@db-fs-1 ~]$ { tnsping HAWKA & tnsping HAWKB & tnsping HAWKC & echo $? ; } > /tmp/tnsping_`hostname -s`; ls -l /tmp/tnsping_`hostname -s`
[1] 18469
[2] 18470
[3] 18471
-rw-r--r-- 1 oracle oinstall 837 Sep  6 00:47 /tmp/tnsping_db-fs-1
[1]   Done                    tnsping HAWKA
[3]+  Exit 1                  tnsping HAWKC
[oracle@db-fs-1 ~]$
[2]+  Done                    tnsping HAWKB
[oracle@db-fs-1 ~]$
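
One way to avoid the stray job-completion notices (and the extra Enter) is to add a wait so the command group only returns after all three pings finish; a sketch, not from the original post, with per-alias status still read from the job table:

$ { tnsping HAWKA & tnsping HAWKB & tnsping HAWKC & wait ; } > /tmp/tnsping_`hostname -s`; ls -l /tmp/tnsping_`hostname -s`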

Remove tnsping HAWKC to demo working results.

[oracle@db-fs-1 ~]$ { tnsping HAWKA & tnsping HAWKB & echo ; } > /tmp/tnsping_`hostname -s`; ls -l /tmp/tnsping_`hostname -s`
[1] 18500
[2] 18501
-rw-r--r-- 1 oracle oinstall 955 Sep  6 00:47 /tmp/tnsping_db-fs-1
[1]-  Done                    tnsping HAWKA
[2]+  Done                    tnsping HAWKB
[oracle@db-fs-1 ~]$

UPDATE:
Another suggested approach is to use a for loop.

for s in "HAWKA" "HAWKB" "HAWKC"
do
echo $s
tnsping $s >> /tmp/log
done
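
A variant of that loop (my sketch, same aliases assumed) that records each alias's exit status, so the failing entry is obvious in one pass:

for s in HAWKA HAWKB HAWKC
do
  # inside the else branch, $? still holds tnsping's exit status
  if tnsping "$s" >> /tmp/log 2>&1
  then
    echo "$s OK"
  else
    echo "$s FAILED rc=$?"
  fi
done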

August 24, 2018

RMAN: Synchronize standby database using production archivelog backupset

Filed under: 11g,Dataguard,RMAN — mdinh @ 3:07 am

If you have not read RMAN: Synchronize standby database using production archivelog, then please do so.

# Primary archivelog is on local storage (not shared).
# Primary RMAN archivelog backupsets reside on a folder shared with the Standby.
# Full backup is performed once per day and includes archivelogs with format arch_DB02_`date '+%Y%m%d'`*
# MANAGED REAL TIME APPLY is running.
PRI: /shared/prod/DB02/rman/
SBY: /shared/backup/arch/DB02a/

#!/bin/sh -e
# Michael Dinh: Aug 21, 2018
# RMAN sync standby using production archivelog backupset
#
. ~/working/dinh/dinh.env
. ~/working/dinh/DB02a.env
sysresv|tail -1
set -x
# List production archivelog backupset for current day
ls -l /shared/prod/DB02/rman/arch_DB02_`date '+%Y%m%d'`*
# Copy production archivelog backupset for current day to standby
cp -ufv /shared/prod/DB02/rman/arch_DB02_`date '+%Y%m%d'`* /shared/backup/arch/DB02a
rman msglog /tmp/rman_sync_standby.log > /dev/null << EOF
set echo on;
connect target;
show all;
# Catalog production archivelog backupset from standby
catalog start with '/shared/backup/arch/DB02a' noprompt;
# Restore production archivelog backupset to standby
restore archivelog from time 'trunc(sysdate)-1';
exit
EOF
sleep 15m
# Verify Media Recovery Log from alert log
tail -20 $ORACLE_BASE/diag/rdbms/$ORACLE_UNQNAME/$ORACLE_SID/trace/alert_$ORACLE_SID.log
exit
$ crontab -l
00 12 * * * /home/oracle/working/dinh/rman_sync_standby.sh > /tmp/rman_sync_standby.sh.out 2>&1

$ ll /tmp/rman*
-rw-r--r--. 1 oracle oinstall 7225 Aug 22 12:01 /tmp/rman_sync_standby.log
-rw-r--r--. 1 oracle oinstall 4318 Aug 22 12:16 /tmp/rman_sync_standby.sh.out

+ tail -20 /u01/app/oracle/diag/rdbms/DB02a/DB02a2/trace/alert_DB02a2.log
ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  
ORA-1153 signalled during: ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  ...
Tue Aug 21 15:41:27 2018
Using STANDBY_ARCHIVE_DEST parameter default value as USE_DB_RECOVERY_FILE_DEST
Tue Aug 21 15:54:30 2018
db_recovery_file_dest_size of 204800 MB is 21.54% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Aug 22 12:01:21 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31636.1275.984830461
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31637.1276.984830461
Wed Aug 22 12:01:46 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31638.1278.984830487
Wed Aug 22 12:01:58 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31639.1277.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31640.1279.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31641.1280.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31642.1281.984830489
Media Recovery Waiting for thread 1 sequence 31643
+ exit

# Manual recovery: WAIT_FOR_LOG with BLOCK#=0, never incrementing.
SQL> r
  1  select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where 1=1
  4  and status not in ('CLOSING','IDLE','CONNECTED')
  5  order by status desc, thread#, sequence#
  6*

                        CLIENT                                               DELAY
     PID  INST  THREAD# PROCESS    PROCESS  STATUS       SEQUENCE#   BLOCK#   MINS
-------- ----- -------- ---------- -------- ------------ --------- -------- ------
   94734     2        1 N/A        MRP0     WAIT_FOR_LOG     31643        0      0

SQL>
$ cat /tmp/rman_sync_standby.log 

Recovery Manager: Release 11.2.0.4.0 - Production on Wed Aug 22 12:00:58 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> 
echo set on

RMAN> connect target;
connected to target database: DB02 (DBID=1816794213, not open)

RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name DB02A are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 7;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DEVICE TYPE 'SBT_TAPE' BACKUP TYPE TO BACKUPSET PARALLELISM 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/u01/app/oracle/product/11.2.0/dbhome_1/lib/libddobk.so, ENV=(STORAGE_UNIT=dd-u99,BACKUP_HOST=dd860.ccx.carecentrix.com,ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1)';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/db/11.2.0.4/dbs/snapcf_DB02a2.f'; # default

RMAN> catalog start with '/shared/backup/arch/DB02a' noprompt;
searching for all files that match the pattern /shared/backup/arch/DB02a

List of Files Unknown to the Database
=====================================
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1
File Name: /shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1

RMAN> restore archivelog from time 'trunc(sysdate)-1';
Starting restore at 22-AUG-2018 12:01:00
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=285 instance=DB02a2 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=3 instance=DB02a2 device type=DISK

archived log for thread 1 with sequence 31630 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31630.496.984755257
archived log for thread 1 with sequence 31631 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31631.497.984755273
archived log for thread 1 with sequence 31632 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31632.498.984755273
archived log for thread 1 with sequence 31633 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31633.499.984755275
archived log for thread 1 with sequence 31634 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31634.500.984755275
archived log for thread 1 with sequence 31635 is already on disk as file +FRA/DB02a/archivelog/2018_08_21/thread_1_seq_31635.501.984755275
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=31636
channel ORA_DISK_1: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31637
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1
channel ORA_DISK_1: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1 tag=TAG20180822T110121
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=31638
channel ORA_DISK_1: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1 tag=TAG20180822T110121
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:25
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31639
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1 tag=TAG20180822T113906
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31640
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31641
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1 tag=TAG20180822T113906
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=31642
channel ORA_DISK_2: reading from backup piece /shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1
channel ORA_DISK_2: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1 tag=TAG20180822T113906
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: piece handle=/shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1 tag=TAG20180822T110121
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:17
Finished restore at 22-AUG-2018 12:01:44

RMAN> exit
$ cat /tmp/rman_sync_standby.sh.out 
ORACLE_SID = [oracle] ? The Oracle base has been set to /u01/app/oracle
Oracle Instance alive for sid "DB02a2"
CURRENT_INSTANCE=DB02a2
ORACLE_UNQNAME=DB02a
OTHER_INSTANCE=DB02a3,DB02a4
ORACLE_SID=DB02a2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/db/11.2.0.4
NLS_DATE_FORMAT=DD-MON-YYYY HH24:MI:SS
Oracle Instance alive for sid "DB02a2"
++ date +%Y%m%d
+ ls -l /shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1
-rw-r-----. 1 oracle dba 1900124160 Aug 22 11:02 /shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1
-rw-r-----. 1 oracle dba 1938098176 Aug 22 11:02 /shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1
-rw-r-----. 1 oracle dba 1370842112 Aug 22 11:01 /shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1
-rw-r-----. 1 oracle dba   11870720 Aug 22 11:39 /shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1
-rw-r-----. 1 oracle dba       3584 Aug 22 11:39 /shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1
-rw-r-----. 1 oracle dba       3072 Aug 22 11:39 /shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1
++ date +%Y%m%d
+ cp -ufv /shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1 /shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1 /shared/backup/arch/DB02a
‘/shared/prod/DB02/rman/arch_DB02_20180822_9ttb6h01_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_9ttb6h01_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_9utb6h02_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_9utb6h02_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_9vtb6h02_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_9vtb6h02_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_a9tb6j6q_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_a9tb6j6q_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_aatb6j6q_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_aatb6j6q_1_1’
‘/shared/prod/DB02/rman/arch_DB02_20180822_abtb6j6q_1_1’ -> ‘/shared/backup/arch/DB02a/arch_DB02_20180822_abtb6j6q_1_1’
+ rman msglog /tmp/rman_sync_standby.log
+ sleep 15m
+ tail -20 /u01/app/oracle/diag/rdbms/DB02a/DB02a2/trace/alert_DB02a2.log
ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  
ORA-1153 signalled during: ALTER DATABASE RECOVER  managed standby database using current logfile nodelay disconnect  ...
Tue Aug 21 15:41:27 2018
Using STANDBY_ARCHIVE_DEST parameter default value as USE_DB_RECOVERY_FILE_DEST
Tue Aug 21 15:54:30 2018
db_recovery_file_dest_size of 204800 MB is 21.54% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Aug 22 12:01:21 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31636.1275.984830461
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31637.1276.984830461
Wed Aug 22 12:01:46 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31638.1278.984830487
Wed Aug 22 12:01:58 2018
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31639.1277.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31640.1279.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31641.1280.984830487
Media Recovery Log +FRA/DB02a/archivelog/2018_08_22/thread_1_seq_31642.1281.984830489
Media Recovery Waiting for thread 1 sequence 31643
+ exit

August 21, 2018

RMAN: Synchronize standby database using production archivelog

Filed under: 11g,Dataguard,RMAN — mdinh @ 11:39 pm

I know what you are thinking, "Why is this nut of a DBA writing a shell script to synchronize a standby with archivelogs!"

It just happens that the environment is very restrictive and no changes can be made without change control.

In one week, there’s a planned switchover to RAC standby and it would be nice to have standby duplicated and ready for switchover.

How is this going to work, since the standby will lag for days until the switchover?

Have no fear, there’s a script for that.

# Primary archivelog resides on shared folder with Standby.
# MANAGED REAL TIME APPLY is running.
PRI: /shared/prod/DB01/arch/
SBY: /shared/backup/arch/DB01/

#!/bin/sh 
# rman_cat_arc.sh
# Michael Dinh Aug 21, 2018
#
# Don't forget to set environment here.
#
set -x
# list the 5 most recent archivelogs
ls -lrt /shared/prod/DB01/arch/|tail -5
# copy archivelogs created in the last hour, since cron runs this script hourly
/bin/find /shared/prod/DB01/arch/ -type f -mmin -60 -exec cp -ufv {} /shared/backup/arch/DB01/ \;
# 
rman msglog /tmp/rman_cat_arc.log > /dev/null << EOF
set echo on;
connect target;
# delete archivelogs older than 3 hours
delete force noprompt archivelog until time 'sysdate-3/24';
catalog start with '/shared/backup/arch/DB01/' noprompt;
EOF
# Review alert log
tail -20 $ORACLE_BASE/diag/rdbms/$ORACLE_UNQNAME/$ORACLE_SID/trace/alert_$ORACLE_SID.log
exit
 
# crontab
26 * * * * /home/oracle/rman_cat_arc.sh > /tmp/rman_cat_arc.sh.out 2>&1

# logs
$ ll /tmp/rman*
-rw-r--r--. 1 oracle oinstall 2664 Aug 21 11:26 /tmp/rman_cat_arc.log
-rw-r--r--. 1 oracle oinstall 1852 Aug 21 11:26 /tmp/rman_cat_arc.sh.out

# alert log 
Tue Aug 21 08:26:09 2018
Media Recovery Log /shared/backup/arch/DB01/DB01_1_717897269_86391.arc
Media Recovery Log /shared/backup/arch/DB01/DB01_1_717897269_86392.arc
Media Recovery Log /shared/backup/arch/DB01/DB01_1_717897269_86393.arc
Media Recovery Waiting for thread 1 sequence 86394
Tue Aug 21 09:26:06 2018
Media Recovery Log /shared/backup/arch/DB01/DB01_1_717897269_86394.arc
Media Recovery Log /shared/backup/arch/DB01/DB01_1_717897269_86395.arc
Media Recovery Log /shared/backup/arch/DB01/DB01_1_717897269_86396.arc
Media Recovery Log /shared/backup/arch/DB01/DB01_1_717897269_86397.arc
Media Recovery Waiting for thread 1 sequence 86398
Tue Aug 21 10:26:08 2018
Media Recovery Log /shared/backup/arch/DB01/DB01_1_717897269_86398.arc
Media Recovery Log /shared/backup/arch/DB01/DB01_1_717897269_86399.arc
Media Recovery Waiting for thread 1 sequence 86400
Tue Aug 21 11:26:06 2018
Media Recovery Log /shared/backup/arch/DB01/DB01_1_717897269_86400.arc
Media Recovery Log /shared/backup/arch/DB01/DB01_1_717897269_86401.arc
Media Recovery Waiting for thread 1 sequence 86402
Tue Aug 21 11:49:03 2018

select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
from gv$managed_standby
where 1=1
and status not in ('CLOSING','IDLE','CONNECTED')
order by status desc, thread#, sequence#
;

# Manual recovery: WAIT_FOR_LOG with BLOCK#=0, never incrementing.
*** gv$managed_standby ***
                        CLIENT                                               DELAY
     PID  INST  THREAD# PROCESS    PROCESS  STATUS       SEQUENCE#   BLOCK#   MINS
-------- ----- -------- ---------- -------- ------------ --------- -------- ------
  411795     4        1 N/A        MRP0     WAIT_FOR_LOG     86178        0      0
  
# MANAGED REAL TIME APPLY: APPLYING_LOG with BLOCK#>0, incrementing.
*** gv$managed_standby ***
                        CLIENT                                               DELAY
     PID  INST  THREAD# PROCESS    PROCESS  STATUS       SEQUENCE#   BLOCK#   MINS
-------- ----- -------- ---------- -------- ------------ --------- -------- ------
  245652     4        1 N/A        MRP0     APPLYING_LOG     86410      472      0  
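
For repeated checks, the gv$managed_standby query above can be wrapped in a small script; a sketch (set the Oracle environment for the standby first, as noted in rman_cat_arc.sh):

#!/bin/sh
# BLOCK# should keep incrementing under managed real time apply
# gv\$ is escaped so the shell does not expand it inside the here-document
sqlplus -s "/ as sysdba" << EOF
set lines 200 pages 100
select pid, inst_id inst, thread#, client_process, process, status, sequence#, block#
from gv\$managed_standby
where status not in ('CLOSING','IDLE','CONNECTED')
order by status desc, thread#, sequence#;
exit
EOF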

November 26, 2017

RMAN Backup To FRA Repercussions

Filed under: 10g,11g,12c,RMAN — mdinh @ 3:50 pm

Common advice is to back up to the FRA.
Before following that advice, evaluate whether it fits your environment and understand the repercussions.
Doesn't this potentially create a SPOF and possibly force an unnecessary restore from tape?

HINT:

Make sure the following commands are part of the backup script when backing up to the FRA.

CONFIGURE CHANNEL DEVICE TYPE DISK CLEAR;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;
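
A quick way to see which settings will actually change backup placement is to list only the non-default configuration entries (my one-liner, not from the post):

$ echo "show all;" | rman target / | grep -v '# default'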

DEMO:

Recovery Manager: Release 12.1.0.2.0 - Production on Sun Nov 26 16:02:17 2017

RMAN> show controlfile autobackup;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP ON;

RMAN> show controlfile autobackup format;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

RMAN> backup datafile 1;

Starting backup at 26-NOV-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/HAWK/DATAFILE/system.258.960967651
channel ORA_DISK_1: starting piece 1 at 26-NOV-17
channel ORA_DISK_1: finished piece 1 at 26-NOV-17
piece handle=+FRA/HAWK/BACKUPSET/2017_11_26/nnndf0_tag20171126t160327_0.274.961085007 tag=TAG20171126T160327 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 26-NOV-17

--- Control File and SPFILE Autobackup to FRA
Starting Control File and SPFILE Autobackup at 26-NOV-17
piece handle=+FRA/HAWK/AUTOBACKUP/2017_11_26/s_961085014.275.961085015 comment=NONE
Finished Control File and SPFILE Autobackup at 26-NOV-17

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
new RMAN configuration parameters are successfully stored

RMAN> show controlfile autobackup format;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
--- CONTROLFILE AUTOBACKUP FORMAT is the same but ***NOT*** DEFAULT
RMAN> backup datafile 1;

Starting backup at 26-NOV-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/HAWK/DATAFILE/system.258.960967651
channel ORA_DISK_1: starting piece 1 at 26-NOV-17
channel ORA_DISK_1: finished piece 1 at 26-NOV-17
piece handle=+FRA/HAWK/BACKUPSET/2017_11_26/nnndf0_tag20171126t160655_0.276.961085215 tag=TAG20171126T160655 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 26-NOV-17

--- Control File and SPFILE Autobackup to ***DISK***
Starting Control File and SPFILE Autobackup at 26-NOV-17
piece handle=/u01/app/oracle/12.1.0.2/db1/dbs/c-3219666184-20171126-01 comment=NONE
Finished Control File and SPFILE Autobackup at 26-NOV-17

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;

old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
RMAN configuration parameters are successfully reset to default value

RMAN> show controlfile autobackup format;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default <-- 

RMAN> backup datafile 1 FORMAT '%d_%I_%T_%U';

Starting backup at 26-NOV-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/HAWK/DATAFILE/system.258.960967651
channel ORA_DISK_1: starting piece 1 at 26-NOV-17
channel ORA_DISK_1: finished piece 1 at 26-NOV-17
piece handle=/u01/app/oracle/12.1.0.2/db1/dbs/HAWK_3219666184_20171126_0oski093_1_1 tag=TAG20171126T161531 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 26-NOV-17

Starting Control File and SPFILE Autobackup at 26-NOV-17
piece handle=+FRA/HAWK/AUTOBACKUP/2017_11_26/s_961085738.277.961085739 comment=NONE
Finished Control File and SPFILE Autobackup at 26-NOV-17

RMAN>

REFERENCE:

How to KEEP a backup created in the Flash Recovery Area (FRA)? (Doc ID 401163.1)	
A backup that needs to be KEPT must be created outside the flash recovery area.

Why are backups going to $ORACLE_HOME/dbs rather than Flash recovery area via Rman or EM Grid control /FRA not considering Archivelog part of it (Doc ID 404854.1)
 1. Do not use a FORMAT clause on backup commands.
 
RMAN Uses Flash Recovery Area for Autobackup When Using Format '%F' (Doc ID 338483.1)	 

October 28, 2017

Use ORACLE_UNQNAME for DataGuard Environment

Filed under: 11g,Dataguard — mdinh @ 2:25 pm

If you are running only one database on the host, then it may not be useful.

However, if you run multiple databases, it makes automation easier, provided there are consistent conventions.

DB configuration

HOST01:(SYS@qa):PHYSICAL STANDBY> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      qa
db_unique_name                       string      qadr

OS configuration

$ env|grep ORACLE
ORACLE_BASE=/u01/app/oracle
ORACLE_SID=qa
ORACLE_UNQNAME=qadr
ORACLE_HOME=/u01/app/oracle/db/11g
$ ps -ef|grep pmon
  oracle  9896050        1   0 16:11:12      -  0:03 asm_pmon_+ASM
  oracle 10354862        1   0 20:06:31      -  0:02 ora_pmon_qa

Check DB status using srvctl

srvctl status database -d $ORACLE_UNQNAME -v
Database qadr is running with online services qarosvc

d.sh:

#!/bin/sh -e
. /opt/oracle/oracle_qa_env
dgmgrl -echo << END
connect /
show configuration
show database ${ORACLE_SID}
show database ${ORACLE_UNQNAME}
exit
END
exit
$ ./d.sh
DGMGRL for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected.
DGMGRL> show configuration

Configuration - dgqa

  Protection Mode: MaxPerformance
  Databases:
    qa   - Primary database
    qadr - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database qa

Database - qa

  Enterprise Manager Name: qa_cluster
  Role:                    PRIMARY
  Intended State:          TRANSPORT-ON
  Instance(s):
    qa_1
    qa_2

Database Status:
SUCCESS

DGMGRL> show database qadr

Database - qadr

  Enterprise Manager Name: qa1
  Role:                    PHYSICAL STANDBY
  Intended State:          APPLY-ON
  Transport Lag:           0 seconds (computed 0 seconds ago)
  Apply Lag:               0 seconds (computed 1 second ago)
  Apply Rate:              937.00 KByte/s
  Real Time Query:         ON
  Instance(s):
    qa

Database Status:
SUCCESS

DGMGRL> exit

crsctl stat res -w "STATE = ONLINE"|egrep "db$|TYPE=ora.database.type"

NAME=ora.qadr.db
TYPE=ora.database.type
NAME=ora.qa2dr.db
TYPE=ora.database.type
NAME=ora.stageqadr.db
TYPE=ora.database.type
NAME=ora.testdr.db
TYPE=ora.database.type
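
Since the resource names embed db_unique_name (ora.<unique_name>.db), the same crsctl listing can drive a status loop; a sketch, assuming that naming convention:

for db in $(crsctl stat res -w "TYPE = ora.database.type" | awk -F= '/^NAME=ora\./ {n=$2; sub(/^ora\./,"",n); sub(/\.db$/,"",n); print n}')
do
  srvctl status database -d "$db" -v
done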

dg_show.sh

#!/bin/sh -e
. /opt/oracle/oracle_qa_env
dgmgrl -echo << END
connect /
show configuration
show database ${ORACLE_SID}
show database ${ORACLE_UNQNAME}
exit
END
. /opt/oracle/oracle_qa2_env
dgmgrl -echo << END
connect /
show configuration
show database ${ORACLE_SID}
show database ${ORACLE_UNQNAME}
exit
END
. /opt/oracle/oracle_stageqa_env
dgmgrl -echo << END
connect /
show configuration
show database ${ORACLE_SID}
show database ${ORACLE_UNQNAME}
exit
END
. /opt/oracle/oracle_test_env
dgmgrl -echo << END
connect /
show configuration
show database ${ORACLE_SID}
show database ${ORACLE_UNQNAME}
exit
END
exit

Improved dg_show.sh using a function.

#!/bin/sh -e
check_dg()
{
  # the closing END must start in column 1 or the here-document never terminates
  dgmgrl -echo << END
  connect /
  show configuration
  show database ${ORACLE_SID}
  show database ${ORACLE_UNQNAME}
  exit
END
}
. /opt/oracle/oracle_qa_env
check_dg
. /opt/oracle/oracle_qa2_env
check_dg
. /opt/oracle/oracle_stageqa_env
check_dg
. /opt/oracle/oracle_test_env
check_dg
exit

October 17, 2017

DB Starts with SQLPlus not SRVCTL

Filed under: 11g,oracle — mdinh @ 1:25 am

The reason the DB could be started using SQL*Plus but not srvctl is that the database was configured incorrectly in srvctl.

$ srvctl start database -d DB01
PRCR-1079 : Failed to start resource ora.db01.db
CRS-5017: The resource action "ora.db01.db start" encountered the following error:
ORA-01078: failure in processing system parameters. 
For details refer to "(:CLSN00107:)" in "/u01/app/oracle/product/11.2.0/grid_2/log/host01/agent/ohasd/oraagent_oracle//oraagent_log".

CRS-2674: Start of 'ora.db01.db' on 'host01' failed

--- The Spfile registered with srvctl points to a non-existent pfile.
$ srvctl config database -d DB01
Database unique name: DB01
Database name:
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_2
Oracle user: oracle
Spfile: /oracle/product/11.2.0/dbhome_2/dbs/initDB01.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: DB01
Disk Groups: DATA,FRA
Services:

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DATA/db01/spfiledb01.ora

cat: /oracle/product/11.2.0/dbhome_2/dbs/initDB01.ora: No such file or directory

$ srvctl modify database -d DB01 -p +DATA/db01/spfiledb01.ora
$ srvctl config database -d DB01
Database unique name: DB01
Database name:
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_2
Oracle user: oracle
Spfile: +DATA/db01/spfiledb01.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: DB01
Disk Groups: DATA,FRA
Services:
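
With the registered spfile corrected, the original start command should now succeed; a quick check (output omitted):

$ srvctl start database -d DB01
$ srvctl status database -d DB01 -v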

October 9, 2017

No Guarantees with opatch -report or CheckConflict

Filed under: 11g,oracle — mdinh @ 8:13 pm

I have performed the following checks.

# $GRID_HOME/OPatch/opatch auto /media/swrepo/JUL2017PSU/26030799 -report -ocmrf /tmp/ocm.rsp
$ $GRID_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /media/swrepo/JUL2017PSU/26030799

Actual patching failed.

# $GRID_HOME/OPatch/opatch auto /media/swrepo/JUL2017PSU/26030799 -ocmrf /tmp/ocm.rsp
Executing /u01/app/oracle/product/11.2.0/grid/perl/bin/perl 
/u01/app/oracle/product/11.2.0/grid/OPatch/crs/patch11203.pl 
-patchdir /media/swrepo/JUL2017PSU -patchn 26030799 
-ocmrf /tmp/ocm.rsp -paramfile /u01/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params

This is the main log file: /u01/app/oracle/product/11.2.0/grid/cfgtoollogs/opatchauto2017-10-09_10-35-34.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/app/oracle/product/11.2.0/grid/cfgtoollogs/opatchauto2017-10-09_10-35-34.report.log

2017-10-09 10:35:34: Starting Oracle Restart Patch Setup
Using configuration parameter file: /u01/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params

Stopping RAC /u01/app/oracle/product/11.2.0/dbhome_1 ...
Stopped RAC /u01/app/oracle/product/11.2.0/dbhome_1 successfully

patch /media/swrepo/JUL2017PSU/26030799/25869727  apply successful for home  /u01/app/oracle/product/11.2.0/dbhome_1
patch /media/swrepo/JUL2017PSU/26030799/25920335/custom/server/25920335  apply successful for home  /u01/app/oracle/product/11.2.0/dbhome_1

Stopping CRS...

Stopped CRS successfully

Error : The opatch Applicable check failed.  The patch /media/swrepo/JUL2017PSU/26030799/25920335 is not applicable to /u01/app/oracle/product/11.2.0/grid
Error:Patch Applicable check failed for /u01/app/oracle/product/11.2.0/grid

Starting CRS...

ERROR: Prereq checkApplicable failed. Refer log file for more details.


opatch auto failed.
#

Really useful info – ERROR: Prereq checkApplicable failed. Refer log file for more details.

I digress.

After some digging, search for ZOP-46 in /u01/app/oracle/product/11.2.0/grid/cfgtoollogs/opatch:

$ grep -n "ZOP-46" opatch2017-10-09*.log
opatch2017-10-09_10-41-58AM_1.log:13:
[Oct 9, 2017 10:42:00 AM]    ZOP-46: 
The patch(es) are not applicable on the Oracle Home because some patch actions are not applicable. 
All required components, however, are installed.


$ head -25 opatch2017-10-09_10-41-58AM_1.log
[Oct 9, 2017 10:41:59 AM]    PREREQ session

[Oct 9, 2017 10:41:59 AM]    
OPatch invoked as follows: 'prereq CheckApplicable 
-ph /media/swrepo/JUL2017PSU/26030799/25920335 
-oh /u01/app/oracle/product/11.2.0/grid 
-invPtrLoc /u01/app/oracle/product/11.2.0/grid/oraInst.loc '

[Oct 9, 2017 10:41:59 AM]    OUI-67077:
                             Oracle Home       : /u01/app/oracle/product/11.2.0/grid
                             Central Inventory : /u01/app/oracle/oraInventory
                                from           : /u01/app/oracle/product/11.2.0/grid/oraInst.loc
                             OPatch version    : 11.2.0.3.6
                             OUI version       : 11.2.0.4.0
                             OUI location      : /u01/app/oracle/product/11.2.0/grid/oui
                             Log file location : /u01/app/oracle/product/11.2.0/grid/cfgtoollogs/opatch/opatch2017-10-09_10-41-58AM_1.log
[Oct 9, 2017 10:41:59 AM]    Patch history file: /u01/app/oracle/product/11.2.0/grid/cfgtoollogs/opatch/opatch_history.txt
[Oct 9, 2017 10:41:59 AM]    Invoking prereq "checkapplicable"

[Oct 9, 2017 10:42:00 AM]    
ZOP-46: The patch(es) are not applicable on the Oracle Home because some patch actions are not applicable. 
All required components, however, are installed.

[Oct 9, 2017 10:42:00 AM]    Patch 25920335:
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/bin/appvipcfg.pl" does not exists or is not readable
                             'oracle.crs, 11.2.0.4.0': Cannot copy file from 'appvipcfg.pl' to '/u01/app/oracle/product/11.2.0/grid/bin/appvipcfg.pl'
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/bin/oclumon.bin" does not exists or is not readable
                             'oracle.crs, 11.2.0.4.0': Cannot copy file from 'oclumon.bin' to '/u01/app/oracle/product/11.2.0/grid/bin/oclumon.bin'
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/bin/ologgerd" does not exists or is not readable
                             'oracle.crs, 11.2.0.4.0': Cannot copy file from 'ologgerd' to '/u01/app/oracle/product/11.2.0/grid/bin/ologgerd'
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/bin/osysmond.bin" does not exists or is not readable
                             'oracle.crs, 11.2.0.4.0': Cannot copy file from 'osysmond.bin' to '/u01/app/oracle/product/11.2.0/grid/bin/osysmond.bin'
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/crs/demo/coldfailover/act_db.pl" does not exists or is not readable
                             'oracle.crs, 11.2.0.4.0': Cannot copy file from 'act_db.pl' to '/u01/app/oracle/product/11.2.0/grid/crs/demo/coldfailover/act_db.pl'
                             Copy Action: Source File "/media/swrepo/JUL2017PSU/26030799/25920335/files/crs/demo/coldfailover/act_listener.pl" does not exists or is not readable
$ ls -l /media/swrepo/JUL2017PSU/26030799/25920335/files/bin/appvipcfg.pl
-rwxr-x--- 1 root root 9051 Jun 27 07:40 /media/swrepo/JUL2017PSU/26030799/25920335/files/bin/appvipcfg.pl

Please don’t ask me why.

Solution:

# cd /media/
# chmod -R 777 swrepo/
# chown -R oracle:dba patches/
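
After fixing the permissions, one could re-run the prereq that failed inside opatch auto (CheckApplicable against the grid home, as seen in the log above) to confirm before the next patching attempt; note it can still report failures if the grid home is locked (item A in the note below):

$ $GRID_HOME/OPatch/opatch prereq CheckApplicable -ph /media/swrepo/JUL2017PSU/26030799/25920335 -oh /u01/app/oracle/product/11.2.0/grid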

opatch report “ERROR: Prereq checkApplicable failed.” when Applying Grid Infrastructure patch (Doc ID 1417268.1)

	A. Expected behaviour if GRID_HOME has not been unlocked
 	B. Bug 13575478
 	C. The patch is stored in a shared NFS location and there is a permission issue accessing the patch
 	D. The patch is not unzipped as grid user, often it is unzipped as root user
 	E. The patch is unzipped inside GRID_HOME

In summary, trust but verify!

August 4, 2017

Windows Datapump Export

Filed under: 11g,oracle,Windows — mdinh @ 3:45 am
Tags:

The purpose of the script is to perform a full database export, keeping 3 export copies.
If the export is successful, fullexp*.dmp is renamed with a _1 suffix added to the filename.
If the export is unsuccessful, the script exits, skipping the rename operations.

Note: there should never be a .dmp file without a number suffix unless the export was unsuccessful.

In hindsight, directoryName should be derived from a variable (e.g. SET directoryName=D:\%ORACLE_SID%\export) rather than hardcoded.

SET ORACLE_SID=DB01
SET directoryName=D:\DB01\export

expdp '/ as sysdba' full=y directory=DATA_PUMP_DIR dumpfile=fullexp_%ORACLE_SID%_%COMPUTERNAME%.dmp logfile=fullexp_%ORACLE_SID%_%COMPUTERNAME%.log flashback_time=SYSTIMESTAMP REUSE_DUMPFILES=YES
IF %ERRORLEVEL% NEQ 0 GOTO ERROR

IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_3.dmp" (DEL "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_3.*")

IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.dmp" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.dmp" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_3.dmp")
IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.log" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.log" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_3.log")

IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.dmp" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.dmp" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.dmp")
IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.log" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.log" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_2.log")

IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%.dmp" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%.dmp" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.dmp")
IF EXIST "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%.log" (RENAME "%directoryName%\fullexp_%ORACLE_SID%_%COMPUTERNAME%.log" "fullexp_%ORACLE_SID%_%COMPUTERNAME%_1.log")

EXIT 0

:ERROR
EXIT 1

Results after 4 runs.

08/03/2017  07:53 PM     2,680,008,704 fullexp_DB01_CMWPHV1_1.dmp
08/03/2017  07:53 PM           161,707 fullexp_DB01_CMWPHV1_1.log
08/03/2017  07:46 PM     2,680,008,704 fullexp_DB01_CMWPHV1_2.dmp
08/03/2017  07:46 PM           161,707 fullexp_DB01_CMWPHV1_2.log
08/03/2017  07:37 PM     2,680,008,704 fullexp_DB01_CMWPHV1_3.dmp
08/03/2017  07:37 PM           161,707 fullexp_DB01_CMWPHV1_3.log

April 28, 2017

Bug 18411339 – Low performance V$ARCHIVE_GAP (11.2.0.4) fix 12.2.0.1

Filed under: 11g,12c,Dataguard — mdinh @ 12:31 am

Just came across a bug from 11.2.0.4 that was not fixed until the 12.2 base release. Seriously, Oracle?
In the test case below, it looks to have affected only 11.2.0.4 64-bit for AIX Version 7.1; I recall this was not an issue on Linux.

11.2.0.4.0
select * from v$archive_gap;
Elapsed: 00:01:48.93

12.1.0.2.0
select * from v$archive_gap;
Elapsed: 00:00:06.60

Workaround

select USERENV('Instance'), high.thread#, low.lsq, high.hsq
 from
  (select a.thread#, rcvsq, min(a.sequence#)-1 hsq
   from v$archived_log a,
        (select lh.thread#, lh.resetlogs_change#, max(lh.sequence#) rcvsq
           from v$log_history lh, v$database_incarnation di
          where lh.resetlogs_time = di.resetlogs_time
            and lh.resetlogs_change# = di.resetlogs_change#
            and di.status = 'CURRENT'
            and lh.thread# is not null
            and lh.resetlogs_change# is not null
            and lh.resetlogs_time is not null
         group by lh.thread#, lh.resetlogs_change#
        ) b
   where a.thread# = b.thread#
     and a.resetlogs_change# = b.resetlogs_change#
     and a.sequence# > rcvsq
   group by a.thread#, rcvsq) high,
 (select srl_lsq.thread#, nvl(lh_lsq.lsq, srl_lsq.lsq) lsq
   from
     (select thread#, min(sequence#)+1 lsq
      from
        v$log_history lh, x$kccfe fe, v$database_incarnation di
      where to_number(fe.fecps) <= lh.next_change# and to_number(fe.fecps) >= lh.first_change#
        and fe.fedup!=0 and bitand(fe.festa, 12) = 12
        and di.resetlogs_time = lh.resetlogs_time
        and lh.resetlogs_change# = di.resetlogs_change#
        and di.status = 'CURRENT'
      group by thread#) lh_lsq,
     (select thread#, max(sequence#)+1 lsq
      from
        v$log_history
      where (select min( to_number(fe.fecps))
             from x$kccfe fe
             where fe.fedup!=0 and bitand(fe.festa, 12) = 12)
      >= next_change#
      group by thread#) srl_lsq
   where srl_lsq.thread# = lh_lsq.thread#(+)
  ) low
 where low.thread# = high.thread#
 and lsq <= hsq and hsq > rcvsq;