Thinking Out Loud

March 28, 2020

Silent Install 11.2.0.4 DB Software With GI 18c On OEL 7.7

Filed under: 11g,18c,Grid Infrastructure,OEL7 — mdinh @ 8:45 pm

Just some notes:

One good thing about a GUI install is that it allows one to fix any issues and retry; not so much with a silent install.

================================================================================
Requirements for Installing Oracle 11.2.0.4 RDBMS on OL7 or RHEL7 64-bit (x86-64) (Doc ID 1962100.1)	

PRVF-4037 : CRS is not installed on any of the nodes (Doc ID 1316815.1)	

Installation of Oracle 11.2.0.4 Database Software on OL7 fails with 'Error in invoking target 'agent nmhs' of makefile ' & 
"undefined reference to symbol 'B_DestroyKeyObject'" error (Doc ID 1965691.1)	
================================================================================


================================================================================
### First install attempt without -ignorePrereq
================================================================================

$ ./runInstaller -ignorePrereq

Note that the above command does not perform any pre-requisite checks. 
Hence, ensure that all the software requirements documented in the install guide are fulfilled before executing the installer using the above option.
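
If -ignorePrereq is used, the checks have to be covered some other way; one option is cluvfy from the GI home (a minimal sketch, adjust the node names to your cluster):

cluvfy stage -pre dbinst -n ol7-183-rac1,ol7-183-rac2 -verbose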

================================================================================

[oracle@ol7-183-rac1 ~]$ ./install_db_software.sh

+ /u01/app/oracle/software/database/runInstaller -force -silent -waitforcompletion
-responseFile /u01/app/oracle/software/database/response/db_install.rsp 
oracle.install.option=INSTALL_DB_SWONLY 
ORACLE_HOSTNAME=ol7-183-rac1.localdomain 
UNIX_GROUP_NAME=oinstall 
INVENTORY_LOCATION=/u01/app/oraInventory 
SELECTED_LANGUAGES=en ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1 
ORACLE_BASE=/u01/app/oracle 
oracle.install.db.InstallEdition=EE 
oracle.install.db.EEOptionsSelection=false 
oracle.install.db.DBA_GROUP=dba 
oracle.install.db.OPER_GROUP=oper 
oracle.install.db.CLUSTER_NODES=ol7-183-rac1,ol7-183-rac2 
oracle.installer.autoupdates.option=SKIP_UPDATES 
oracle.install.db.isRACOneInstall=false 
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false 
DECLINE_SECURITY_UPDATES=true

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 25005 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 17391 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-03-26_04-15-06PM. Please wait ...

[FATAL] [INS-13013] Target environment do not meet some mandatory requirements.
   CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log. 
   Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
[oracle@ol7-183-rac1 ~]$


================================================================================
### Review types of errors
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -e '[[:upper:]]: ' /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log |cut -d ":" -f1 |sort -u
   ACTION
   CAUSE
INFO
SEVERE
WARNING
[oracle@ol7-183-rac1 ~]$
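
The same grep with a count per category gives a feel for how noisy each message type is:

grep -e '[[:upper:]]: ' /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log | cut -d ":" -f1 | sort | uniq -c | sort -rn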


================================================================================
### Review List of failed Tasks
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -A100 "List of failed Tasks" /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
INFO: ------------------List of failed Tasks------------------
INFO: *********************************************
INFO: Package: pdksh-5.2.14: This is a prerequisite condition to test whether the package "pdksh-5.2.14" is available on the system.
INFO: Severity:IGNORABLE
INFO: OverallStatus:VERIFICATION_FAILED
INFO: *********************************************
INFO: CRS Integrity: This test checks the integrity of Oracle Clusterware stack across the cluster nodes.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Cluster Manager Integrity: This test checks the integrity of cluster manager across the cluster nodes.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Node Application Existence: This test checks the existence of Node Applications on the system.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Clock Synchronization: This test checks the Oracle Cluster Time Synchronization Services across the cluster nodes.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Database Clusterware Version Compatibility: This test ensures that the Database version is compatible with the CRS version.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: -----------------End of failed Tasks List----------------
INFO: Adding ExitStatus PREREQUISITES_NOT_MET to the exit status set
SEVERE: [FATAL] [INS-13013] Target environment do not meet some mandatory requirements.
   CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
INFO: Advice is ABORT
INFO: Adding ExitStatus INVALID_USER_INPUT to the exit status set
INFO: Completed validating state {performChecks}
INFO: Terminating all background operations
INFO: Terminated all background operations
INFO: Finding the most appropriate exit status for the current application
INFO: Exit Status is -3
INFO: Shutdown Oracle Database 11g Release 2 Installer
[oracle@ol7-183-rac1 ~]$


================================================================================
### Search for "Error Message"
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -i 'error message' /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
INFO: Error Message:PRVF-7532 : Package "pdksh" is missing on node "ol7-183-rac2"
INFO: Error Message:PRVF-7532 : Package "pdksh" is missing on node "ol7-183-rac1"
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
[oracle@ol7-183-rac1 ~]$
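
The pdksh failure is marked IGNORABLE in the failed-task list above; on OL7, ksh ships in place of pdksh (see the requirements note referenced at the top). A quick sanity check that ksh is present on both nodes (a sketch, assuming passwordless ssh as oracle):

for node in ol7-183-rac1 ol7-183-rac2; do ssh $node 'hostname; rpm -q ksh'; done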


================================================================================
PRVF-4037 : CRS is not installed on any of the nodes (Doc ID 1316815.1)	
The bug is fixed in 11.2.0.3; the workaround is to update the GI home in the inventory with the CRS="true" flag (applied below with runInstaller -updateNodeList).
================================================================================


================================================================================
### Check inventory for GI RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2020, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="1" CRS="true"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>


================================================================================
### UPDATE inventory for GI RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ . oraenv <<< +ASM1
ORACLE_SID = [cdbrac1] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-183-rac1 ContentsXML]$ export GRID_HOME=$ORACLE_HOME

[oracle@ol7-183-rac1 ContentsXML]$ $GRID_HOME/oui/bin/runInstaller -silent -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={ol7-183-rac1,ol7-183-rac2}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 17391 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.


================================================================================
### VERIFY inventory for GI RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2020, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="ol7-183-rac1"/>
      <NODE NAME="ol7-183-rac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@ol7-183-rac1 ContentsXML]$


================================================================================
### Retry Install
================================================================================

[oracle@ol7-183-rac1 ~]$ cat install_db_software.sh
#!/bin/sh -x
/u01/app/oracle/software/database/runInstaller -force \
-silent -waitforcompletion -ignorePrereq \
-responseFile /u01/app/oracle/software/database/response/db_install.rsp \
oracle.install.option=INSTALL_DB_SWONLY \
ORACLE_HOSTNAME=ol7-183-rac1.localdomain \
UNIX_GROUP_NAME=oinstall \
INVENTORY_LOCATION=/u01/app/oraInventory \
SELECTED_LANGUAGES=en \
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1 \
ORACLE_BASE=/u01/app/oracle \
oracle.install.db.InstallEdition=EE \
oracle.install.db.EEOptionsSelection=false \
oracle.install.db.DBA_GROUP=dba \
oracle.install.db.OPER_GROUP=oper \
oracle.install.db.CLUSTER_NODES=ol7-183-rac1,ol7-183-rac2 \
oracle.installer.autoupdates.option=SKIP_UPDATES \
oracle.install.db.isRACOneInstall=false \
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
DECLINE_SECURITY_UPDATES=true
[oracle@ol7-183-rac1 ~]$


[oracle@ol7-183-rac1 ~]$ ./install_db_software.sh
+ /u01/app/oracle/software/database/runInstaller -force -silent -waitforcompletion -ignorePrereq 
-responseFile /u01/app/oracle/software/database/response/db_install.rsp 
oracle.install.option=INSTALL_DB_SWONLY 
ORACLE_HOSTNAME=ol7-183-rac1.localdomain 
UNIX_GROUP_NAME=oinstall 
INVENTORY_LOCATION=/u01/app/oraInventory 
SELECTED_LANGUAGES=en 
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1 
ORACLE_BASE=/u01/app/oracle 
oracle.install.db.InstallEdition=EE 
oracle.install.db.EEOptionsSelection=false 
oracle.install.db.DBA_GROUP=dba 
oracle.install.db.OPER_GROUP=oper 
oracle.install.db.CLUSTER_NODES=ol7-183-rac1,ol7-183-rac2 
oracle.installer.autoupdates.option=SKIP_UPDATES 
oracle.install.db.isRACOneInstall=false 
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false 
DECLINE_SECURITY_UPDATES=true

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 24578 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 17391 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-03-26_05-17-28PM. Please wait ...

You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2020-03-26_05-17-28PM.log

The installation of Oracle Database 11g was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2020-03-26_05-17-28PM.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh

Execute /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh on the following nodes:
[ol7-183-rac1, ol7-183-rac2]

Successfully Setup Software.
[oracle@ol7-183-rac1 ~]$


[root@ol7-183-rac1 ~]# /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh
Check /u01/app/oracle/product/11.2.0.4/dbhome_1/install/root_ol7-183-rac1.localdomain_2020-03-26_17-44-13.log for the output of root script
[root@ol7-183-rac1 ~]#


[root@ol7-183-rac2 ~]# /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh
Check /u01/app/oracle/product/11.2.0.4/dbhome_1/install/root_ol7-183-rac2.localdomain_2020-03-26_17-44-55.log for the output of root script
[root@ol7-183-rac2 ~]#


================================================================================
### FROM silentInstall*.log - Known Issues - (Doc ID 1965691.1)	
================================================================================

[oracle@ol7-183-rac1 ~]$ cat /u01/app/oraInventory/logs/silentInstall2020-03-26_05-17-28PM.log
silentInstall2020-03-26_05-17-28PM.log
sNativeVolName:/u01/app/oracle/product/11.2.0.4/dbhome_1/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
sNativeVolName:/tmp/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
Error in invoking target 'agent nmhs' of makefile '/u01/app/oracle/product/11.2.0.4/dbhome_1/sysman/lib/ins_emagent.mk'. See '/u01/app/oraInventory/logs/installActions2020-03-26_05-17-28PM.log' for details.
sNativeVolName:/u01/app/oracle/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
sNativeVolName:/u01/app/oraInventory/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
The installation of Oracle Database 11g was successful.
[oracle@ol7-183-rac1 ~]$
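
The 'agent nmhs' failure is the known issue from Doc ID 1965691.1: the EM agent link needs -lnnz11 added to the $(MK_EMAGENT_NMECTL) line in ins_emagent.mk. A minimal sketch of the fix (verify against the note before applying; it only affects the agent relink):

DB_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
# append -lnnz11 to the $(MK_EMAGENT_NMECTL) line (one-time edit, keeps a .bak copy)
sed -i.bak '/\$(MK_EMAGENT_NMECTL)[[:space:]]*$/s/$/ -lnnz11/' $DB_HOME/sysman/lib/ins_emagent.mk
# re-run the failed target
cd $DB_HOME/sysman/lib && make -f ins_emagent.mk agent nmhs ORACLE_HOME=$DB_HOME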


================================================================================
### Check installActions*.log
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -e '[[:upper:]]: ' /u01/app/oraInventory/logs/installActions2020-03-26_05-17-28PM.log |cut -d ":" -f1 |sort -u
INFO
WARNING
[oracle@ol7-183-rac1 ~]$


================================================================================
### Check inventory for DB RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="ol7-183-rac1"/>
      <NODE NAME="ol7-183-rac2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0.4/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="ol7-183-rac1"/>
      <NODE NAME="ol7-183-rac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@ol7-183-rac1 ContentsXML]$


================================================================================
### cluvfy comp healthcheck
================================================================================

[oracle@ol7-183-rac1 cvu]$ . oraenv <<< +ASM1
ORACLE_SID = [cdbrac1] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-183-rac1 ~]$ cluvfy comp healthcheck

Verification of Health Check was unsuccessful.
Checks did not pass for the following nodes:
        ol7-183-rac2,ol7-183-rac1


Failures were encountered during execution of CVU verification request "Health Check".

Verifying Physical Memory ...FAILED
ol7-183-rac2: PRVF-7530 : Sufficient physical memory is not available on node
              "ol7-183-rac2" [Required physical memory = 8GB (8388608.0KB)]

ol7-183-rac1: PRVF-7530 : Sufficient physical memory is not available on node
              "ol7-183-rac1" [Required physical memory = 8GB (8388608.0KB)]

Verifying Ethernet Jumbo Frames ...FAILED
ol7-183-rac2: PRVE-0293 : Jumbo Frames are not configured for interconnects
              "eth2" on node "ol7-183-rac2.localdomain". [Expected="eth2=9000";
              Found="eth2=1500"]

ol7-183-rac1: PRVE-0293 : Jumbo Frames are not configured for interconnects
              "eth2" on node "ol7-183-rac1.localdomain". [Expected="eth2=9000";
              Found="eth2=1500"]


CVU operation performed:      Health Check
Date:                         Mar 26, 2020 6:07:08 PM
CVU home:                     /u01/app/18.0.0/grid/
User:                         oracle
[oracle@ol7-183-rac1 cvu]$

March 4, 2020

Mining gridSetupActions Log

Filed under: 19c,Grid Infrastructure,RAC,upgrade — mdinh @ 4:22 am

After completing a GI upgrade, what’s the most efficient way to mine the results?

Upgrade GI to 19.6: typical information provided at the terminal

[oracle@ol7-122-rac1 ~]$ /u01/app/19.6.0.0/grid/gridSetup.sh -applyRU /u01/app/oracle/patch/30501910

Preparing the home to patch...

Applying the patch /u01/app/oracle/patch/30501910...
Successfully applied the patch.

The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2020-03-04_00-24-53AM/installerPatchActions_2020-03-04_00-24-53AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.6.0.0/grid/install/response/grid_2020-03-04_00-24-53AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2020-03-04_00-24-53AM/gridSetupActions2020-03-04_00-24-53AM.log

[oracle@ol7-122-rac1 ~]$

Example response file from 12.2 install:

[oracle@ol7-122-rac1 response]$ pwd
/u01/app/12.2.0.1/grid/install/response

[oracle@ol7-122-rac1 response]$ sdiff -iEZbWBs -w 150 gridsetup.rsp grid_*.rsp
INVENTORY_LOCATION=                                                       |     INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |     oracle.install.option=CRS_CONFIG
ORACLE_BASE=                                                              |     ORACLE_BASE=/u01/app/oracle
oracle.install.asm.OSDBA=                                                 |     oracle.install.asm.OSDBA=dba
oracle.install.asm.OSASM=                                                 |     oracle.install.asm.OSASM=dba
oracle.install.crs.config.gpnp.scanName=                                  |     oracle.install.crs.config.gpnp.scanName=ol7-122-scan
oracle.install.crs.config.gpnp.scanPort=                                  |     oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterConfiguration=                           |     oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |     oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=                                    |     oracle.install.crs.config.clusterName=ol7-122-cluster
oracle.install.crs.config.gpnp.configureGNS=                              |     oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |     oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=                                 |     oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=                                   |     oracle.install.crs.config.clusterNodes=ol7-122-rac1.localdomain:ol7-12
oracle.install.crs.config.networkInterfaceList=                           |     oracle.install.crs.config.networkInterfaceList=eth1:192.168.56.0:1,eth
oracle.install.asm.configureGIMRDataDG=                                   |     oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.useIPMI=                                        |     oracle.install.crs.config.useIPMI=false
oracle.install.asm.storageOption=                                         |     oracle.install.asm.storageOption=ASM
oracle.install.asmOnNAS.configureGIMRDataDG=                              |     oracle.install.asmOnNAS.configureGIMRDataDG=false
oracle.install.asm.diskGroup.name=                                        |     oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.redundancy=                                  |     oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.AUSize=                                      |     oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.disksWithFailureGroupNames=                  |     oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/oracleasm
oracle.install.asm.diskGroup.disks=                                       |     oracle.install.asm.diskGroup.disks=/dev/oracleasm/asm-disk3,/dev/oracl
oracle.install.asm.diskGroup.diskDiscoveryString=                         |     oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/*
oracle.install.asm.gimrDG.AUSize=                                         |     oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=                                          |     oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |     oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |     oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |     oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=                                            |     oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=                          |     oracle.install.crs.rootconfig.executeRootScript=false
[oracle@ol7-122-rac1 response]$
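
To simply list the populated parameters from the generated response file, without a side-by-side compare, a small sketch:

grep -vE '^[[:space:]]*(#|$)' grid_*.rsp | grep -E '=.+'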

Review response file: compare original response file versus the one used for upgrade (grid_2020-03-04_00-24-53AM.rsp)

[oracle@ol7-122-rac1 response]$ pwd
/u01/app/19.6.0.0/grid/install/response

[oracle@ol7-122-rac1 response]$ ls -l
total 76
-rw-r--r--. 1 oracle oinstall 36450 Mar  4 00:38 grid_2020-03-04_00-24-53AM.rsp
-rw-r-----. 1 oracle oinstall 36221 Jan 19  2019 gridsetup.rsp
-rw-r-----. 1 oracle oinstall  1541 May 21  2016 sample.ccf

[oracle@ol7-122-rac1 response]$ sdiff -iEZbWBs -w 150 gridsetup.rsp grid_*.rsp
INVENTORY_LOCATION=                                                       |     INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |     oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |     ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |     oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |     oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |     oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=                                    |     oracle.install.crs.config.clusterName=ol7-122-cluster
oracle.install.crs.config.gpnp.configureGNS=                              |     oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |     oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=                                 |     oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=                                   |     oracle.install.crs.config.clusterNodes=ol7-122-rac2:,ol7-122-rac1:
oracle.install.crs.configureGIMR=                                         |     oracle.install.crs.configureGIMR=true
oracle.install.asm.configureGIMRDataDG=                                   |     oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=                                  |     oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.useIPMI=                                        |     oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=                                        |     oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.AUSize=                                      |     oracle.install.asm.diskGroup.AUSize=0
oracle.install.asm.gimrDG.AUSize=                                         |     oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=                                          |     oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |     oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |     oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |     oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=                                            |     oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=                          |     oracle.install.crs.rootconfig.executeRootScript=false
[oracle@ol7-122-rac1 response]$

Review log directory:

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ pwd
/u01/app/oraInventory/logs/GridSetupActions2020-03-04_00-24-53AM

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ ls -alrt
total 17988
-rw-r-----.  1 oracle oinstall   11578 Mar  4 00:31 installerPatchActions_2020-03-04_00-24-53AM.log
-rw-r-----.  1 oracle oinstall       0 Mar  4 00:31 gridSetupActions2020-03-04_00-24-53AM.err
drwxrwx---.  3 oracle oinstall      21 Mar  4 00:31 temp_ob
-rw-r-----.  1 oracle oinstall       0 Mar  4 00:38 oraInstall2020-03-04_00-24-53AM.err
-rw-r-----.  1 oracle oinstall     157 Mar  4 00:38 oraInstall2020-03-04_00-24-53AM.out
-rw-r-----.  1 oracle oinstall 9728749 Mar  4 00:39 gridSetupActions2020-03-04_00-24-53AM.out
-rw-r-----.  1 oracle oinstall       0 Mar  4 00:44 oraInstall2020-03-04_00-24-53AM.err.ol7-122-rac2
-rw-r-----.  1 oracle oinstall     142 Mar  4 00:44 oraInstall2020-03-04_00-24-53AM.out.ol7-122-rac2
-rw-r-----.  1 oracle oinstall   29328 Mar  4 02:05 time2020-03-04_00-24-53AM.log
-rw-r-----.  1 oracle oinstall 8624226 Mar  4 02:05 gridSetupActions2020-03-04_00-24-53AM.log
drwxrwx---. 12 oracle oinstall    4096 Mar  4 02:18 ..
drwxrwx---.  3 oracle oinstall    4096 Mar  4 03:20 .

Review .err files: 0 bytes is good

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ ls -l *.err
-rw-r-----. 1 oracle oinstall 0 Mar  4 00:31 gridSetupActions2020-03-04_00-24-53AM.err
-rw-r-----. 1 oracle oinstall 0 Mar  4 00:38 oraInstall2020-03-04_00-24-53AM.err
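
A one-liner to flag any non-empty .err files under the inventory logs (a sketch):

find /u01/app/oraInventory/logs -name '*.err' ! -empty -ls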

Review grid action: for verification purposes, grep the logs to compare when the grid was configured versus when it was upgraded.

[oracle@ol7-122-rac1 GridSetupActions2020-03-03_01-26-02AM]$ grep -i getInstallOption gridSetupActions*.log
INFO:  [Mar 3, 2020 1:26:05 AM] getInstallOption: CRS_CONFIG
[oracle@ol7-122-rac1 GridSetupActions2020-03-03_01-26-02AM]$

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ grep -i getInstallOption gridSetupActions*.log
INFO:  [Mar 4, 2020 12:32:07 AM] getInstallOption: UPGRADE
[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$

Check for distinct keywords:

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ grep -e '[[:upper:]]: ' gridSetupActions*.log | cut -d ":" -f1 | sort -u
   ACTION
          APPLICATION_ERROR
   CAUSE
INFO
Output
TaskUsersWithSameID
WARNING

Check APPLICATION_ERROR:

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ grep -B3 -A1 APPLICATION_ERROR gridSetupActions*.log
INFO:  [Mar 4, 2020 12:35:27 AM] INFO: [Task.perform:873]
TaskCheckRPMPackageManager:RPM Package Manager database[TASKCHECKRPMPACKAGEMANAGER]:TASK_SUMMARY:FAILED:INFORMATION:INFORMATION:Total time taken []
          ERRORMSG(GLOBAL): PRVG-11250 : The check "RPM Package Manager database" was not performed because it needs 'root' user privileges.
          APPLICATION_ERROR: NodeResultsUnavailableException thrown when hasNodeResults() returns true
INFO:  [Mar 4, 2020 12:35:27 AM] INFO: [Task.perform:799]

Did you notice that I used a wildcard for the search?

It does not matter, since the logs for each task will typically be in different directories.

This is one thing I noticed Oracle did correctly, as it makes it much easier to reuse the same commands in any environment.
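
That also makes it easy to wrap the same checks in a small loop over every setup directory (a sketch, assuming the default oraInventory log location):

for d in /u01/app/oraInventory/logs/GridSetupActions*; do
  echo "== $d"
  grep -i getInstallOption "$d"/gridSetupActions*.log
done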

March 1, 2020

Upgrade Grid 12.2 to 19.6 Using Gold Image

Filed under: 19c,Grid Infrastructure,upgrade — mdinh @ 10:00 pm

Quick and dirty OPatch Update for All nodes:

[oracle@ol7-122-rac1 JAN2019]$ echo $ORACLE_HOME
/u01/app/12.2.0.1/grid
[oracle@ol7-122-rac1 JAN2019]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.6

OPatch succeeded.
[oracle@ol7-122-rac1 JAN2019]$ rm -rf $ORACLE_HOME/OPatch/*
[oracle@ol7-122-rac1 JAN2019]$ unzip -qo p6880880_122010_Linux-x86-64.zip -d $ORACLE_HOME
[oracle@ol7-122-rac1 JAN2019]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.19

OPatch succeeded.
[oracle@ol7-122-rac1 JAN2019]$

------------------------------

[oracle@ol7-122-rac2 JAN2019]$ echo $ORACLE_HOME
/u01/app/12.2.0.1/grid
[oracle@ol7-122-rac2 JAN2019]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.6

OPatch succeeded.
[oracle@ol7-122-rac2 JAN2019]$ rm -rf $ORACLE_HOME/OPatch/*
[oracle@ol7-122-rac2 JAN2019]$ unzip -qo p6880880_122010_Linux-x86-64.zip -d $ORACLE_HOME
[oracle@ol7-122-rac2 JAN2019]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.19

OPatch succeeded.
[oracle@ol7-122-rac2 JAN2019]$
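
The same OPatch refresh can be driven from one node if passwordless ssh is set up for oracle (a sketch; the zip path is a placeholder):

for node in ol7-122-rac1 ol7-122-rac2; do
  ssh $node "rm -rf /u01/app/12.2.0.1/grid/OPatch/* && unzip -qo /path/to/p6880880_122010_Linux-x86-64.zip -d /u01/app/12.2.0.1/grid && /u01/app/12.2.0.1/grid/OPatch/opatch version"
done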

Create Grid 19.6 directory for All nodes:

[root@ol7-122-rac1 ~]# mkdir -p /u01/app/19.6.0.0/grid
[root@ol7-122-rac1 ~]# chown oracle:oinstall /u01/app/19.6.0.0/grid
[root@ol7-122-rac1 ~]# chmod 775 /u01/app/19.6.0.0/grid

------------------------------

[root@ol7-122-rac2 ~]# mkdir -p /u01/app/19.6.0.0/grid
[root@ol7-122-rac2 ~]# chown oracle:oinstall /u01/app/19.6.0.0/grid
[root@ol7-122-rac2 ~]# chmod 775 /u01/app/19.6.0.0/grid

Verify required Grid 12.2 patch for All nodes:

[oracle@ol7-122-rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches -oh /u01/app/12.2.0.1/grid
28553832;OCW Interim patch for 28553832

OPatch succeeded.
[oracle@ol7-122-rac1 ~]$

------------------------------

[oracle@ol7-122-rac2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches -oh /u01/app/12.2.0.1/grid
28553832;OCW Interim patch for 28553832

OPatch succeeded.
[oracle@ol7-122-rac2 ~]$

Unzip Grid 19.6 Gold Image for First node:

[oracle@ol7-122-rac1 ~]$ time unzip -qo /vagrant_software/LINUX.X64_19600_grid_home.zip -d /u01/app/19.6.0.0/grid; echo $?

real    4m56.824s
user    0m24.313s
sys     0m53.903s
0

[oracle@ol7-122-rac1 ~]$ ls /u01/app/19.6.0.0/grid
acfs        cha          dmu            javavm                          ologgerd       plsql          root.sh.old.3   utl
acfsccm     client       env.ora        jdbc                            OPatch         precomp        rootupgrade.sh  welcome.html
acfsccreg   clone        evm            jdk                             opatchautocfg  QOpatch        runcluvfy.sh    wlm
acfscm      crs          gipc           jlib                            opmn           qos            sdk             wwg
acfsiob     css          gnsd           ldap                            oracore        racg           slax            xag
acfsrd      ctss         gpnp           lib                             ord            rdbms          sqlpatch        xdk
acfsrm      cv           gridSetup.sh   LINUX.X64_193000_grid_home.zip  ords           relnotes       sqlplus
addnode     dbjava       has            md                              oss            rhp            srvm
advmccb     dbs          hs             mdns                            osysmond       root.sh        suptools
assistants  deinstall    install        network                         oui            root.sh.old    tomcat
bin         demo         instantclient  nls                             owm            root.sh.old.1  ucp
cdp         diagnostics  inventory      ohasd                           perl           root.sh.old.2  usm

[oracle@ol7-122-rac1 ~]$ du -sh /u01/app/19.6.0.0/grid
9.4G    /u01/app/19.6.0.0/grid
[oracle@ol7-122-rac1 ~]$

[root@ol7-122-rac1 ~]# /u01/app/19.6.0.0/grid/rootupgrade.sh

Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.6.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.6.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/ol7-122-rac1/crsconfig/rootcrs_ol7-122-rac1_2020-03-01_05-05-31PM.log
2020/03/01 17:05:49 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2020/03/01 17:05:49 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/03/01 17:05:49 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2020/03/01 17:05:54 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2020/03/01 17:05:54 CLSRSC-464: Starting retrieval of the cluster configuration data
2020/03/01 17:09:07 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2020/03/01 17:09:36 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2020/03/01 17:11:40 CLSRSC-693: CRS entities validation completed successfully.
2020/03/01 17:11:44 CLSRSC-515: Starting OCR manual backup.
2020/03/01 17:11:51 CLSRSC-516: OCR manual backup successful.
2020/03/01 17:11:58 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2020/03/01 17:11:58 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2020/03/01 17:11:58 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2020/03/01 17:12:04 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2020/03/01 17:12:04 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2020/03/01 17:12:05 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2020/03/01 17:12:08 CLSRSC-363: User ignored prerequisites during installation
2020/03/01 17:12:17 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2020/03/01 17:12:17 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2020/03/01 17:17:16 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2020/03/01 17:17:16 CLSRSC-482: Running command: '/u01/app/12.2.0.1/grid/bin/crsctl start rollingupgrade 19.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2020/03/01 17:17:20 CLSRSC-482: Running command: '/u01/app/19.6.0.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.2.0.1/grid -oldCRSVersion 12.2.0.1.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2020/03/01 17:18:22 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2020/03/01 17:18:26 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2020/03/01 17:19:10 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2020/03/01 17:19:12 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2020/03/01 17:19:13 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2020/03/01 17:19:25 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2020/03/01 17:19:25 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2020/03/01 17:19:32 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2020/03/01 17:19:38 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2020/03/01 17:19:38 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2020/03/01 17:21:09 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2020/03/01 17:23:09 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2020/03/01 17:23:15 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2020/03/01 17:26:57 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2020/03/01 17:27:15 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
2020/03/01 17:27:18 CLSRSC-474: Initiating upgrade of resource types
2020/03/01 17:33:51 CLSRSC-475: Upgrade of resource types successfully initiated.
2020/03/01 17:34:01 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2020/03/01 17:34:08 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@ol7-122-rac1 ~]#

[root@ol7-122-rac2 ~]# /u01/app/19.6.0.0/grid/rootupgrade.sh

Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.6.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.6.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/ol7-122-rac2/crsconfig/rootcrs_ol7-122-rac2_2020-03-01_05-39-49PM.log
2020/03/01 17:39:57 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2020/03/01 17:39:57 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/03/01 17:39:57 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2020/03/01 17:39:58 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2020/03/01 17:39:58 CLSRSC-464: Starting retrieval of the cluster configuration data
2020/03/01 17:40:12 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2020/03/01 17:40:12 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2020/03/01 17:40:12 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2020/03/01 17:40:13 CLSRSC-363: User ignored prerequisites during installation
2020/03/01 17:40:14 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2020/03/01 17:40:14 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.

ASM configuration upgraded in local node successfully.

2020/03/01 17:41:21 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2020/03/01 17:43:07 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2020/03/01 17:47:30 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2020/03/01 17:47:32 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2020/03/01 17:47:32 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2020/03/01 17:47:40 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2020/03/01 17:47:40 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2020/03/01 17:47:42 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2020/03/01 17:47:43 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2020/03/01 17:47:43 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2020/03/01 17:49:01 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2020/03/01 17:50:42 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2020/03/01 17:50:44 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2020/03/01 17:51:35 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 19 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2020/03/01 17:52:29 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
Start upgrade invoked..
2020/03/01 17:52:33 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2020/03/01 17:52:33 CLSRSC-482: Running command: '/u01/app/19.6.0.0/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Started to upgrade Oracle ACFS.
Oracle ACFS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 19.0.0.0.0.
2020/03/01 17:53:42 CLSRSC-479: Successfully set Oracle Clusterware active version
2020/03/01 17:53:42 CLSRSC-476: Finishing upgrade of resource types
2020/03/01 17:53:49 CLSRSC-477: Successfully completed upgrade of resource types
2020/03/01 17:57:54 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
Successfully updated XAG resources.
2020/03/01 17:58:37 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@ol7-122-rac2 ~]#

Check $ORACLE_HOME/.patch_storage and $ORACLE_HOME/log for the new GI 19.6:

[oracle@ol7-122-rac2 ~]$ ls -l $ORACLE_HOME/.patch_storage $ORACLE_HOME/log
ls: cannot access /u01/app/19.6.0.0/grid/.patch_storage: No such file or directory
/u01/app/19.6.0.0/grid/log:
total 4
drwxr-x---.  4 oracle oinstall   57 Mar  1 17:51 diag
drwxr-xr-t. 20 root   oinstall 4096 Mar  1 17:39 ol7-122-rac2
[oracle@ol7-122-rac2 ~]$

Is Peace Of Mind Better Than Best Practice

Filed under: 19c,Grid Infrastructure,upgrade — mdinh @ 3:18 am

There’s a discussion on Twitter about a nasty bug with GI upgrade to 19.6.

It’s unclear whether gridSetup.sh -applyRU is being used, which is what leads to the bug.

Truthfully, I like the concept of gridSetup.sh -applyRU; however, I am often reminded of a manager who used to coach me: “Slow and steady wins the race.”

With that being said, I suggested that it may be better and simpler to complete the upgrade first and then patch, versus upgrading and patching at the same time.

Then I am asked, “So the best practice should be to install the base one first and patch after?”

What’s the price of Peace Of Mind?

Out of curiosity, I was able to upgrade GI from 12.2 to 19.6 by upgrading first and then patching.

I am not going to explain the process, but here are the relevant terminal outputs. gridSetup.sh was run using the GUI – I was lazy.
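
At a high level, upgrade-first-then-patch looks roughly like this (a sketch only, not the exact commands used here; verify against the RU README):

# upgrade to the 19.3 base release (GUI or silent), running rootupgrade.sh on each node when prompted
/u01/app/19.3.0.0/grid/gridSetup.sh
# after the upgrade completes, apply the RU rolling with opatchauto as root on each node
/u01/app/19.3.0.0/grid/OPatch/opatchauto apply /u01/app/oracle/patch/30501910 -oh /u01/app/19.3.0.0/grid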

==================================================

[oracle@ol7-122-rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches -oh /u01/app/12.2.0.1/grid
28553832;OCW Interim patch for 28553832

OPatch succeeded.
[oracle@ol7-122-rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches -oh /u01/app/oracle/product/12.2.0.1/dbhome_1
28553832;OCW Interim patch for 28553832

OPatch succeeded.
[oracle@ol7-122-rac1 ~]$

--------------------------------------------------

[oracle@ol7-122-rac2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches -oh /u01/app/12.2.0.1/grid
28553832;OCW Interim patch for 28553832

OPatch succeeded.
[oracle@ol7-122-rac2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches -oh /u01/app/oracle/product/12.2.0.1/dbhome_1
28553832;OCW Interim patch for 28553832

OPatch succeeded.
[oracle@ol7-122-rac2 ~]$

==================================================

[oracle@ol7-122-rac1 ~]$ /u01/app/19.3.0.0/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling \
> -src_crshome /u01/app/12.2.0.1/grid -dest_crshome /u01/app/19.3.0.0/grid \
> -dest_version 19.0.0.0.0 -fixup -verbose

Pre-check for cluster services setup was unsuccessful.
Checks did not pass for the following nodes:
        ol7-122-rac2,ol7-122-rac1


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Physical Memory ...FAILED
ol7-122-rac2: PRVF-7530 : Sufficient physical memory is not available on node
              "ol7-122-rac2" [Required physical memory = 8GB (8388608.0KB)]

ol7-122-rac1: PRVF-7530 : Sufficient physical memory is not available on node
              "ol7-122-rac1" [Required physical memory = 8GB (8388608.0KB)]

Verifying ACFS Driver Checks ...FAILED
PRVG-6096 : Oracle ACFS driver is not supported on the current operating system
version for Oracle Clusterware release version "19.0.0.0.0".

Verifying RPM Package Manager database ...INFORMATION
PRVG-11250 : The check "RPM Package Manager database" was not performed because
it needs 'root' user privileges.


CVU operation performed:      stage -pre crsinst
Date:                         Feb 29, 2020 5:49:54 PM
CVU home:                     /u01/app/19.3.0.0/grid/
User:                         oracle
[oracle@ol7-122-rac1 ~]$

==================================================

[root@ol7-122-rac1 ~]# /u01/app/19.3.0.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.3.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option

Using configuration parameter file: /u01/app/19.3.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/ol7-122-rac1/crsconfig/rootcrs_ol7-122-rac1_2020-02-29_06-32-37PM.log
2020/02/29 18:33:02 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2020/02/29 18:33:02 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/02/29 18:33:02 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2020/02/29 18:33:09 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2020/02/29 18:33:09 CLSRSC-464: Starting retrieval of the cluster configuration data
2020/02/29 18:33:21 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2020/02/29 18:35:26 CLSRSC-693: CRS entities validation completed successfully.
2020/02/29 18:35:33 CLSRSC-515: Starting OCR manual backup.
2020/02/29 18:35:46 CLSRSC-516: OCR manual backup successful.
2020/02/29 18:36:30 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2020/02/29 18:39:40 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2020/02/29 18:39:41 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2020/02/29 18:39:42 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2020/02/29 18:40:03 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2020/02/29 18:40:04 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2020/02/29 18:40:07 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2020/02/29 18:40:14 CLSRSC-363: User ignored prerequisites during installation
2020/02/29 18:40:33 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2020/02/29 18:40:33 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2020/02/29 18:46:04 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2020/02/29 18:46:04 CLSRSC-482: Running command: '/u01/app/12.2.0.1/grid/bin/crsctl start rollingupgrade 19.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2020/02/29 18:46:10 CLSRSC-482: Running command: '/u01/app/19.3.0.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.2.0.1/grid -oldCRSVersion 12.2.0.1.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2020/02/29 18:46:19 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2020/02/29 18:46:29 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2020/02/29 18:47:15 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2020/02/29 18:47:18 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2020/02/29 18:47:21 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2020/02/29 18:47:32 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2020/02/29 18:47:32 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2020/02/29 18:47:42 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2020/02/29 18:47:51 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2020/02/29 18:47:51 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2020/02/29 18:48:31 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2020/02/29 18:48:46 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2020/02/29 18:48:54 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2020/02/29 18:50:44 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2020/02/29 18:51:33 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
2020/02/29 18:51:38 CLSRSC-474: Initiating upgrade of resource types
2020/02/29 18:52:58 CLSRSC-475: Upgrade of resource types successfully initiated.
2020/02/29 18:53:13 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2020/02/29 18:53:22 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@ol7-122-rac1 ~]#

--------------------------------------------------

[root@ol7-122-rac2 ~]# /u01/app/19.3.0.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.3.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.3.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/ol7-122-rac2/crsconfig/rootcrs_ol7-122-rac2_2020-02-29_06-57-24PM.log
2020/02/29 18:57:34 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2020/02/29 18:57:34 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/02/29 18:57:34 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2020/02/29 18:57:36 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2020/02/29 18:57:36 CLSRSC-464: Starting retrieval of the cluster configuration data
2020/02/29 18:57:49 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2020/02/29 18:57:49 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2020/02/29 18:57:49 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2020/02/29 18:57:51 CLSRSC-363: User ignored prerequisites during installation
2020/02/29 18:57:52 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2020/02/29 18:57:53 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.

ASM configuration upgraded in local node successfully.

2020/02/29 18:58:01 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2020/02/29 18:58:32 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2020/02/29 18:58:34 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2020/02/29 18:58:38 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2020/02/29 18:58:42 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2020/02/29 18:58:43 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2020/02/29 18:58:44 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2020/02/29 18:58:46 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2020/02/29 18:58:46 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2020/02/29 18:59:12 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2020/02/29 18:59:20 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2020/02/29 18:59:21 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2020/02/29 19:00:42 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2020/02/29 19:01:12 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 19 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2020/02/29 19:01:45 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
Start upgrade invoked..
2020/02/29 19:01:53 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2020/02/29 19:01:53 CLSRSC-482: Running command: '/u01/app/19.3.0.0/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Started to upgrade Oracle ACFS.
Oracle ACFS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 19.0.0.0.0.
2020/02/29 19:03:04 CLSRSC-479: Successfully set Oracle Clusterware active version
2020/02/29 19:03:05 CLSRSC-476: Finishing upgrade of resource types
2020/02/29 19:03:17 CLSRSC-477: Successfully completed upgrade of resource types
2020/02/29 19:03:50 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
Successfully updated XAG resources.
2020/02/29 19:04:19 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@ol7-122-rac2 ~]#

==================================================

[oracle@ol7-122-rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches -oh /u01/app/19.3.0.0/grid
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
[oracle@ol7-122-rac1 ~]$

--------------------------------------------------

[oracle@ol7-122-rac2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches -oh /u01/app/19.3.0.0/grid
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
[oracle@ol7-122-rac2 ~]$

==================================================

[oracle@ol7-122-rac1 ~]$ cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         Feb 29, 2020 7:39:23 PM
CVU home:                     /u01/app/19.3.0.0/grid/
User:                         oracle
[oracle@ol7-122-rac1 ~]$

==================================================

[oracle@ol7-122-rac1 ~]$ crsctl check cluster -all
**************************************************************
ol7-122-rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
ol7-122-rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@ol7-122-rac1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [ol7-122-rac1] is [19.0.0.0.0]

[oracle@ol7-122-rac1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node ol7-122-rac1 is [724960844].

[oracle@ol7-122-rac1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [19.0.0.0.0]

[oracle@ol7-122-rac1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [724960844] and the complete list of patches [29401763 29517242 29517247 29585399 ] have been applied on the local node. The release patch string is [19.3.0.0.0].

[oracle@ol7-122-rac1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [724960844].
[oracle@ol7-122-rac1 ~]$

--------------------------------------------------

[oracle@ol7-122-rac2 ~]$ crsctl check cluster -all
**************************************************************
ol7-122-rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
ol7-122-rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@ol7-122-rac2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [ol7-122-rac2] is [19.0.0.0.0]

[oracle@ol7-122-rac2 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node ol7-122-rac2 is [724960844].

[oracle@ol7-122-rac2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [19.0.0.0.0]

[oracle@ol7-122-rac2 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [724960844] and the complete list of patches [29401763 29517242 29517247 29585399 ] have been applied on the local node. The release patch string is [19.3.0.0.0].
[oracle@ol7-122-rac2 ~]$

==================================================

[oracle@ol7-122-rac1 ~]$ echo $ORACLE_HOME
/u01/app/19.3.0.0/grid
[oracle@ol7-122-rac1 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
[oracle@ol7-122-rac1 ~]$ rm -rf $ORACLE_HOME/OPatch/*
[oracle@ol7-122-rac1 ~]$ unzip -qo /u01/app/oracle/patch/p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
[oracle@ol7-122-rac1 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.19

OPatch succeeded.
[oracle@ol7-122-rac1 ~]$

--------------------------------------------------

[oracle@ol7-122-rac2 ~]$ echo $ORACLE_HOME
/u01/app/19.3.0.0/grid
[oracle@ol7-122-rac2 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
[oracle@ol7-122-rac2 ~]$ rm -rf $ORACLE_HOME/OPatch/*
[oracle@ol7-122-rac2 ~]$ unzip -qo /u01/app/oracle/patch/p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
[oracle@ol7-122-rac2 ~]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.19

OPatch succeeded.
[oracle@ol7-122-rac2 ~]$

==================================================

[root@ol7-122-rac1 ~]# $ORACLE_HOME/OPatch/opatchauto apply /u01/app/oracle/patch/30501910

OPatchauto session is initiated at Sat Feb 29 20:04:21 2020

System initialization log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-29_08-04-24PM.log.

Session log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-02-29_08-04-50PM.log
The id for this session is TMIS

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0.0/grid
Patch applicability verified successfully on home /u01/app/19.3.0.0/grid


Bringing down CRS service on home /u01/app/19.3.0.0/grid
Prepatch operation log file location: /u01/app/oracle/crsdata/ol7-122-rac1/crsconfig/crspatch_ol7-122-rac1_2020-02-29_05-04-37PM.log
CRS service brought down successfully on home /u01/app/19.3.0.0/grid


Start applying binary patch on home /u01/app/19.3.0.0/grid
Binary patch applied successfully on home /u01/app/19.3.0.0/grid


Starting CRS service on home /u01/app/19.3.0.0/grid
Postpatch operation log file location: /u01/app/oracle/crsdata/ol7-122-rac1/crsconfig/crspatch_ol7-122-rac1_2020-02-29_05-04-37PM.log
CRS service started successfully on home /u01/app/19.3.0.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:ol7-122-rac1
CRS Home:/u01/app/19.3.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/app/oracle/patch/30501910/30489227
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-29_20-09-33PM_1.log

Patch: /u01/app/oracle/patch/30501910/30489632
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-29_20-09-33PM_1.log

Patch: /u01/app/oracle/patch/30501910/30557433
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-29_20-09-33PM_1.log

Patch: /u01/app/oracle/patch/30501910/30655595
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-29_20-09-33PM_1.log



Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/product/12.2.0.1/dbhome_1



OPatchauto session completed at Sat Feb 29 20:22:52 2020
Time taken to complete the session 18 minutes, 31 seconds
[root@ol7-122-rac1 ~]#

--------------------------------------------------

[root@ol7-122-rac2 ~]# $ORACLE_HOME/OPatch/opatchauto apply /u01/app/oracle/patch/30501910

OPatchauto session is initiated at Sat Feb 29 20:24:25 2020

System initialization log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-29_08-24-28PM.log.

Session log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-02-29_08-24-53PM.log
The id for this session is X5R1

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0.0/grid
Patch applicability verified successfully on home /u01/app/19.3.0.0/grid


Bringing down CRS service on home /u01/app/19.3.0.0/grid
Prepatch operation log file location: /u01/app/oracle/crsdata/ol7-122-rac2/crsconfig/crspatch_ol7-122-rac2_2020-02-29_05-32-25PM.log
CRS service brought down successfully on home /u01/app/19.3.0.0/grid


Start applying binary patch on home /u01/app/19.3.0.0/grid
Binary patch applied successfully on home /u01/app/19.3.0.0/grid


Starting CRS service on home /u01/app/19.3.0.0/grid

Postpatch operation log file location: /u01/app/oracle/crsdata/ol7-122-rac2/crsconfig/crspatch_ol7-122-rac2_2020-02-29_05-32-25PM.log
CRS service started successfully on home /u01/app/19.3.0.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:ol7-122-rac2
CRS Home:/u01/app/19.3.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/app/oracle/patch/30501910/30489227
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-29_20-30-45PM_1.log

Patch: /u01/app/oracle/patch/30501910/30489632
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-29_20-30-45PM_1.log

Patch: /u01/app/oracle/patch/30501910/30557433
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-29_20-30-45PM_1.log

Patch: /u01/app/oracle/patch/30501910/30655595
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-29_20-30-45PM_1.log



Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/product/12.2.0.1/dbhome_1



OPatchauto session completed at Sat Feb 29 20:54:46 2020
Time taken to complete the session 30 minutes, 21 seconds
[root@ol7-122-rac2 ~]#

==================================================

[oracle@ol7-122-rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches -oh /u01/app/19.3.0.0/grid
30655595;TOMCAT RELEASE UPDATE 19.0.0.0.0 (30655595)
30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489632;ACFS RELEASE UPDATE 19.6.0.0.0 (30489632)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)

OPatch succeeded.
[oracle@ol7-122-rac1 ~]$

--------------------------------------------------

[oracle@ol7-122-rac2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches -oh /u01/app/19.3.0.0/grid
30655595;TOMCAT RELEASE UPDATE 19.0.0.0.0 (30655595)
30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489632;ACFS RELEASE UPDATE 19.6.0.0.0 (30489632)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)

OPatch succeeded.
[oracle@ol7-122-rac2 ~]$

==================================================

[oracle@ol7-122-rac1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2701864972] and the complete list of patches [30489227 30489632 30557433 30655595 ] have been applied on the local node. The release patch string is [19.6.0.0.0].
[oracle@ol7-122-rac1 ~]$

--------------------------------------------------

[oracle@ol7-122-rac2 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2701864972] and the complete list of patches [30489227 30489632 30557433 30655595 ] have been applied on the local node. The release patch string is [19.6.0.0.0].
[oracle@ol7-122-rac2 ~]$

August 6, 2019

19c Grid Dry-Run Upgrade

Filed under: 19c,awk_sed_grep,Grid Infrastructure,upgrade — mdinh @ 12:42 pm

First test using GUI.

[oracle@racnode-dc2-1 grid]$ /u01/app/19.3.0.0/grid/gridSetup.sh -dryRunForUpgrade
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_00-20-31AM/gridSetupActions2019-08-06_00-20-31AM.log
[oracle@racnode-dc2-1 grid]$

Create dryRunForUpgradegrid.rsp from grid_2019-08-06_00-20-31AM.rsp (generated by the GUI test above):

[oracle@racnode-dc2-1 grid]$ grep -v "^#" /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp | grep -v "=$" | awk 'NF' > /home/oracle/dryRunForUpgradegrid.rsp

[oracle@racnode-dc2-1 ~]$ cat /home/oracle/dryRunForUpgradegrid.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=vbox-rac-dc2
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=racnode-dc2-1:,racnode-dc2-2:
oracle.install.crs.configureGIMR=true
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=CRS
oracle.install.asm.diskGroup.AUSize=0
oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=false
[oracle@racnode-dc2-1 ~]$

Create the grid home directory on all nodes:

[root@racnode-dc2-1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54318(asmdba),54322(dba),54323(backupdba),54324(oper),54325(dgdba),54326(kmdba)

[root@racnode-dc2-1 ~]# mkdir -p /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chown oracle:oinstall /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chmod 775 /u01/app/19.3.0.0/grid

[root@racnode-dc2-1 ~]# ll /u01/app/19.3.0.0/
total 4
drwxrwxr-x 2 oracle oinstall 4096 Aug  6 02:07 grid
[root@racnode-dc2-1 ~]#
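
The same steps would be repeated on node2; a minimal sketch for node2 (racnode-dc2-2, as it appears later in the post):

[root@racnode-dc2-2 ~]# mkdir -p /u01/app/19.3.0.0/grid
[root@racnode-dc2-2 ~]# chown oracle:oinstall /u01/app/19.3.0.0/grid
[root@racnode-dc2-2 ~]# chmod 775 /u01/app/19.3.0.0/grid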

Extract grid software for node1 ONLY:

[oracle@racnode-dc2-1 ~]$ unzip -qo /media/swrepo/LINUX.X64_193000_grid_home.zip -d /u01/app/19.3.0.0/grid/

[oracle@racnode-dc2-1 ~]$ ls /u01/app/19.3.0.0/grid/
addnode     clone  dbjava     diagnostics  gpnp          install        jdbc  lib      OPatch   ords  perl     qos       rhp            rootupgrade.sh  sqlpatch  tomcat  welcome.html  xdk
assistants  crs    dbs        dmu          gridSetup.sh  instantclient  jdk   md       opmn     oss   plsql    racg      root.sh        runcluvfy.sh    sqlplus   ucp     wlm
bin         css    deinstall  env.ora      has           inventory      jlib  network  oracore  oui   precomp  rdbms     root.sh.old    sdk             srvm      usm     wwg
cha         cv     demo       evm          hs            javavm         ldap  nls      ord      owm   QOpatch  relnotes  root.sh.old.1  slax            suptools  utl     xag

[oracle@racnode-dc2-1 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.0G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-1 ~]$

Run gridSetup.sh -silent -dryRunForUpgrade:

[oracle@racnode-dc2-1 ~]$ env|grep -i ora
USER=oracle
MAIL=/var/spool/mail/oracle
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin
PWD=/home/oracle
HOME=/home/oracle
LOGNAME=oracle

[oracle@racnode-dc2-1 ~]$ date
Tue Aug  6 02:35:47 CEST 2019

[oracle@racnode-dc2-1 ~]$ /u01/app/19.3.0.0/grid/gridSetup.sh -silent -dryRunForUpgrade -responseFile /home/oracle/dryRunForUpgradegrid.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_02-35-52AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log


As a root user, execute the following script(s):
        1. /u01/app/19.3.0.0/grid/rootupgrade.sh

Execute /u01/app/19.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc2-1]

Run the script on the local node.

Successfully Setup Software with warning(s).
[oracle@racnode-dc2-1 ~]$

Run rootupgrade.sh for node1 ONLY and review log:

[root@racnode-dc2-1 ~]# /u01/app/19.3.0.0/grid/rootupgrade.sh
Check /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log for the output of root script

[root@racnode-dc2-1 ~]# cat /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.3.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Performing Dry run of the Grid Infrastructure upgrade.
Using configuration parameter file: /u01/app/19.3.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/racnode-dc2-1/crsconfig/rootcrs_racnode-dc2-1_2019-08-06_02-45-31AM.log
2019/08/06 02:45:44 CLSRSC-464: Starting retrieval of the cluster configuration data
2019/08/06 02:45:52 CLSRSC-729: Checking whether CRS entities are ready for upgrade, cluster upgrade will not be attempted now. This operation may take a few minutes.
2019/08/06 02:47:56 CLSRSC-693: CRS entities validation completed successfully.
[root@racnode-dc2-1 ~]#

Check grid home for node2:

[oracle@racnode-dc2-2 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.6G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-2 ~]$

Check oraInventory for ALL nodes:

[oracle@racnode-dc2-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.7.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.2.0.1/grid" TYPE="O" IDX="1" CRS="true"/>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.2.0.1/db1" TYPE="O" IDX="2"/>
==========================================================================================
<HOME NAME="OraGI19Home1" LOC="/u01/app/19.3.0.0/grid" TYPE="O" IDX="3"/>
==========================================================================================
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc2-2 ~]$

Check crs activeversion: 12.2.0.1.0

[oracle@racnode-dc2-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc2-1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [927320293].
[oracle@racnode-dc2-1 ~]$
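
For reference, /media/patch/gi.env is a small helper sourced throughout these posts to set and report the grid environment. A hypothetical sketch, inferred only from its output above (the actual script is not published here):

# gi.env -- hypothetical sketch, not the author's script
set +x
export ORACLE_SID=$(ps -eo comm | awk -F'_pmon_' '/^asm_pmon_/{print $2}')
export ORAENV_ASK=NO
. oraenv                       # prints "The Oracle base has been set to ..."
export GRID_HOME=$ORACLE_HOME
echo "ORACLE_SID=$ORACLE_SID"
echo "ORACLE_BASE=$ORACLE_BASE"
echo "GRID_HOME=$GRID_HOME"
echo "ORACLE_HOME=$ORACLE_HOME"
if ps -ef | grep "[p]mon_${ORACLE_SID}" > /dev/null; then
  echo "Oracle Instance alive for sid \"$ORACLE_SID\""
else
  echo "Oracle Instance not alive for sid \"$ORACLE_SID\""
fi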

Check log location:

[oracle@racnode-dc2-1 ~]$ cd /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/

[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$ ls -alrt
total 17420
-rw-r-----  1 oracle oinstall     129 Aug  6 02:35 installerPatchActions_2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall       0 Aug  6 02:35 gridSetupActions2019-08-06_02-35-52AM.err
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:35 temp_ob
-rw-r-----  1 oracle oinstall       0 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.err
drwxrwx--- 17 oracle oinstall    4096 Aug  6 02:39 ..
-rw-r-----  1 oracle oinstall     157 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall       0 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.err.racnode-dc2-2
-rw-r-----  1 oracle oinstall     142 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.out.racnode-dc2-2
-rw-r-----  1 oracle oinstall 9341920 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall   13419 Aug  6 02:43 time2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall 8443087 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.log
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:56 .
[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$

After dryRunForUpgrade, detach the 19.3.0.0 grid home and remove the directory (19.3.0.0/grid) from all nodes:

export ORACLE_HOME=/u01/app/19.3.0.0/grid
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$ORACLE_HOME
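
Once detached, the directory itself can be deleted; a minimal sketch (run as root on each node, assuming nothing else lives under /u01/app/19.3.0.0):

[root@racnode-dc2-1 ~]# rm -rf /u01/app/19.3.0.0/grid
[root@racnode-dc2-2 ~]# rm -rf /u01/app/19.3.0.0/grid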

May 7, 2019

Remove GRID Home After Upgrade

Filed under: 12c,Grid Infrastructure,RAC — mdinh @ 9:53 pm

The environment started with a GRID 12.1.0.1 installation, was upgraded to 18.3.0.0, and then patched out-of-place (OOP) to 18.6.0.0.

As a result, there are three GRID homes, and the 12.1.0.1 home will be removed.

This demonstration covers the last node of the cluster; however, the actions performed are the same for all nodes.

Review the existing patches for the Grid and Database homes:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/lspatches.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$
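
Judging from the set -x trace above, /media/patch/lspatches.sh is roughly the following; a hypothetical reconstruction, not the author's published script:

#!/bin/sh -x
# lspatches.sh -- list OPatch version and installed patches for the Grid and Database homes (sketch)
. /media/patch/gi.env
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lspatches
. /media/patch/hawk.env
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lspatches
exit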

Notice that the GRID home is /u01/18.3.0.0/grid_2 because this was the name suggested by the OOP process.
Based on experience, it might be better to name the GRID home after the actual version, i.e. /u01/18.6.0.0/grid.

Verify cluster state is [NORMAL]:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/crs_Query.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2056778364].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2056778364] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 29301631 29301643 29302264 ] have been applied on the local node. The release patch string is [18.6.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2056778364].
+ exit
[oracle@racnode-dc1-1 ~]$
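
Similarly, /media/patch/crs_Query.sh appears from the trace to be roughly the following (hypothetical reconstruction):

#!/bin/sh -x
# crs_Query.sh -- report clusterware release/software versions and patch levels (sketch)
. /media/patch/gi.env
crsctl query crs releaseversion
crsctl query crs softwareversion
crsctl query crs softwarepatch
crsctl query crs releasepatch
crsctl query crs activeversion -f
exit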

Check Oracle Inventory:

[oracle@racnode-dc1-2 ~]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>

### GRID home (/u01/app/12.1.0.1/grid) to be removed.
========================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
========================================================================================

<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove the GRID home (/u01/app/12.1.0.1/grid) from the inventory. Use the -local flag to avoid any bug issues.

[oracle@racnode-dc1-2 ~]$ export ORACLE_HOME=/u01/app/12.1.0.1/grid
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16040 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
[oracle@racnode-dc1-2 ~]$

Verify GRID home was removed:

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>

### GRID home (/u01/app/12.1.0.1/grid) removed.
================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1" REMOVED="T"/>
================================================================================

</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove 12.1.0.1 directory:

[oracle@racnode-dc1-2 ~]$ sudo su -
Last login: Thu May  2 23:38:22 CEST 2019
[root@racnode-dc1-2 ~]# cd /u01/app/
[root@racnode-dc1-2 app]# ll
total 12
drwxr-xr-x  3 root   oinstall 4096 Apr 17 23:36 12.1.0.1
drwxrwxr-x 12 oracle oinstall 4096 Apr 30 18:05 oracle
drwxrwx---  5 oracle oinstall 4096 May  2 23:54 oraInventory
[root@racnode-dc1-2 app]# rm -rf 12.1.0.1/
[root@racnode-dc1-2 app]#

Check the cluster:

[root@racnode-dc1-2 app]# logout
[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racnode-dc1-2 ~]$

Later, /u01/18.3.0.0/grid will be removed, too, if there are no issues with the most recent patch.
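
A minimal sketch of how that later removal would look, following the same detach-then-delete pattern used above for 12.1.0.1 (hypothetical; not performed in this post):

[oracle@racnode-dc1-2 ~]$ export ORACLE_HOME=/u01/18.3.0.0/grid
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
[root@racnode-dc1-2 ~]# rm -rf /u01/18.3.0.0/grid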

May 5, 2019

What’s My Cluster Configuration

Filed under: 18c,Grid Infrastructure,RAC — mdinh @ 2:15 pm
[grid@ol7-183-node1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[grid@ol7-183-node1 ~]$ crsctl get cluster configuration
Name                : ol7-183-cluster
Configuration       : Cluster
Class               : Standalone Cluster
Type                : flex
The cluster is not extended.
--------------------------------------------------------------------------------
        MEMBER CLUSTER INFORMATION

      Name       Version        GUID                       Deployed Deconfigured
================================================================================
================================================================================

[grid@ol7-183-node1 ~]$ olsnodes -s -a -t
ol7-183-node1   Active  Hub     Unpinned
ol7-183-node2   Active  Hub     Unpinned

[grid@ol7-183-node1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [70732493] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28090564 28256701 ] have been applied on the local node. The release patch string is [18.3.0.0.0].

[grid@ol7-183-node1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [70732493].
[grid@ol7-183-node1 ~]$

May 3, 2019

GRID Out Of Place (OOP) Rollback Disaster

Filed under: 18c,Grid Infrastructure,RAC — mdinh @ 4:45 pm

Now I understand the hesitation to use new Oracle features, especially anything "auto".

It may just be simpler and less stressful to perform the task manually, keeping control and knowing exactly what is being executed and validated.

GRID Out Of Place (OOP) patching completed successfully for 18.6.0.0.0.

GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1

Here is an example of the inventory after patching.

+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

Running cluvfy was successful too.

[oracle@racnode-dc1-1 ~]$ cluvfy stage -post crsinst -n racnode-dc1-1,racnode-dc1-2 -verbose

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         Apr 30, 2019 8:17:49 PM
CVU home:                     /u01/18.3.0.0/grid_2/
User:                         oracle
[oracle@racnode-dc1-1 ~]$

GRID OOP Rollback Patching completed successfully for node1.

[root@racnode-dc1-1 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode-dc1-1 ~]#
[root@racnode-dc1-1 ~]# echo $GRID_HOME
/u01/18.3.0.0/grid_2
[root@racnode-dc1-1 ~]# $GRID_HOME/OPatch/opatchauto rollback -switch-clone -logLevel FINEST

OPatchauto session is initiated at Fri May  3 01:06:47 2019

System initialization log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchautodb/systemconfig2019-05-03_01-06-50AM.log.

Session log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/opatchauto2019-05-03_01-08-00AM.log
The id for this session is R47N

Update nodelist in the inventory for oracle home /u01/18.3.0.0/grid.
Update nodelist in the inventory is completed for oracle home /u01/18.3.0.0/grid.


Bringing down CRS service on home /u01/18.3.0.0/grid
CRS service brought down successfully on home /u01/18.3.0.0/grid


Starting CRS service on home /u01/18.3.0.0/grid
CRS service started successfully on home /u01/18.3.0.0/grid


Confirm that all resources have been started from home /u01/18.3.0.0/grid.
All resources have been started successfully from home /u01/18.3.0.0/grid.


OPatchAuto successful.

--------------------------------Summary--------------------------------
Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-1
Actual Home : /u01/18.3.0.0/grid_2
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1

OPatchauto session completed at Fri May  3 01:14:25 2019
Time taken to complete the session 7 minutes, 38 seconds

[root@racnode-dc1-1 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[root@racnode-dc1-1 ~]# /media/patch/findhomes.sh
   PID NAME                 ORACLE_HOME
 10486 asm_pmon_+asm1       /u01/18.3.0.0/grid/
 10833 apx_pmon_+apx1       /u01/18.3.0.0/grid/

[root@racnode-dc1-1 ~]# cat /etc/oratab
#Backup file is  /u01/app/oracle/12.1.0.1/db1/srvm/admin/oratab.bak.racnode-dc1-1 line added by Agent
#+ASM1:/u01/18.3.0.0/grid:N
hawk1:/u01/app/oracle/12.1.0.1/db1:N
hawk:/u01/app/oracle/12.1.0.1/db1:N             # line added by Agent
[root@racnode-dc1-1 ~]#
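
/media/patch/findhomes.sh maps running Oracle background processes to their homes. A minimal sketch of one way to do that (hypothetical; run as root, the author's actual script is not shown):

#!/bin/sh
# findhomes.sh -- show the ORACLE_HOME behind each running pmon process (sketch)
printf "%6s %-20s %s\n" "PID" "NAME" "ORACLE_HOME"
for pid in $(pgrep -f '_pmon_'); do
  name=$(ps -p "$pid" -o comm=)
  home=$(readlink "/proc/$pid/exe" | sed 's|bin/oracle$||')
  printf "%6s %-20s %s\n" "$pid" "$name" "$home"
done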

GRID OOP Rollback Patching completed successfully for node2.

[root@racnode-dc1-2 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode-dc1-2 ~]#
[root@racnode-dc1-2 ~]# echo $GRID_HOME
/u01/18.3.0.0/grid_2
[root@racnode-dc1-2 ~]# $GRID_HOME/OPatch/opatchauto rollback -switch-clone -logLevel FINEST

OPatchauto session is initiated at Fri May  3 01:21:39 2019

System initialization log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchautodb/systemconfig2019-05-03_01-21-41AM.log.

Session log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/opatchauto2019-05-03_01-22-46AM.log
The id for this session is 9RAT

Update nodelist in the inventory for oracle home /u01/18.3.0.0/grid.
Update nodelist in the inventory is completed for oracle home /u01/18.3.0.0/grid.


Bringing down CRS service on home /u01/18.3.0.0/grid
CRS service brought down successfully on home /u01/18.3.0.0/grid


Starting CRS service on home /u01/18.3.0.0/grid
CRS service started successfully on home /u01/18.3.0.0/grid


Confirm that all resources have been started from home /u01/18.3.0.0/grid.
All resources have been started successfully from home /u01/18.3.0.0/grid.


OPatchAuto successful.

--------------------------------Summary--------------------------------
Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-2
Actual Home : /u01/18.3.0.0/grid_2
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1


OPatchauto session completed at Fri May  3 01:40:51 2019
Time taken to complete the session 19 minutes, 12 seconds
[root@racnode-dc1-2 ~]#

GRID OOP Rollback completed successfully, back to 18.5.0.0.0.

GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1

Here is an example of the inventory after rollback.

+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

Validation shows the database is OFFLINE:

+ crsctl stat res -w '((TARGET != ONLINE) or (STATE != ONLINE)' -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.GHCHKPT.advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.helper
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            IDLE,STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE                               Instance Shutdown,STABLE
      2        ONLINE  OFFLINE                               Instance Shutdown,STABLE

Starting the database FAILED.

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance not alive for sid "hawk2"

[oracle@racnode-dc1-2 ~]$ srvctl status database -d $ORACLE_UNQNAME -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is not running on node racnode-dc1-2

[oracle@racnode-dc1-2 ~]$ srvctl start database -d $ORACLE_UNQNAME
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
[oracle@racnode-dc1-2 ~]$


[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance not alive for sid "hawk1"

[oracle@racnode-dc1-1 ~]$ srvctl start database -d hawk
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy
CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
[oracle@racnode-dc1-1 ~]$

Incorrect permissions on the oracle binary in the GRID home were the cause.
Changing permissions on $GRID_HOME/bin/oracle (chmod 6751 $GRID_HOME/bin/oracle) and then stopping and starting CRS resolved the failure.

[oracle@racnode-dc1-1 dbs]$ ls -lhrt $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle dba 314M Apr 20 16:06 /u01/app/oracle/12.1.0.1/db1/bin/oracle

[oracle@racnode-dc1-1 dbs]$ ls -lhrt /u01/18.3.0.0/grid/bin/oracle
-rwxr-x--x 1 oracle oinstall 396M Apr 20 19:21 /u01/18.3.0.0/grid/bin/oracle

[oracle@racnode-dc1-1 dbs]$ cd /u01/18.3.0.0/grid/bin/
[oracle@racnode-dc1-1 bin]$ chmod 6751 oracle
[oracle@racnode-dc1-1 bin]$ ls -lhrt /u01/18.3.0.0/grid/bin/oracle
-rwsr-s--x 1 oracle oinstall 396M Apr 20 19:21 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-1 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
[root@racnode-dc1-1 ~]# crsctl stop crs

====================================================================================================

[root@racnode-dc1-2 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM2"

[root@racnode-dc1-2 ~]# ls -lhrt $GRID_HOME/bin/oracle
-rwxr-x--x 1 oracle oinstall 396M Apr 21 01:44 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-2 ~]# chmod 6751 $GRID_HOME/bin/oracle
[root@racnode-dc1-2 ~]# ls -lhrt $GRID_HOME/bin/oracle
-rwsr-s--x 1 oracle oinstall 396M Apr 21 01:44 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-2 ~]# crsctl stop crs

====================================================================================================

[root@racnode-dc1-2 ~]# crsctl start crs
[root@racnode-dc1-1 ~]# crsctl start crs

Reference: RAC Database Can’t Start: ORA-01565, ORA-17503: ksfdopn:10 Failed to open file +DATA/BPBL/spfileBPBL.ora (Doc ID 2316088.1)

February 23, 2019

Sed’ing Through ora.cvu Hell

Filed under: 12c,awk_sed_grep,Grid Infrastructure — mdinh @ 12:02 pm

Don’t know why I always look for trouble.

The trouble found was that CHECK_RESULTS from ora.cvu.type reported many issues which look to be BUG related.

Here is the RAC environment from the VM.

[oracle@racnode-dc1-1 ~]$ cat /etc/system-release
Oracle Linux Server release 7.3
[oracle@racnode-dc1-1 ~]$

[oracle@racnode-dc1-1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [0] and no patches have been applied on the local node.

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [0].

[oracle@racnode-dc1-1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
racnode-dc1-2
racnode-dc1-1
PRVF-5415 : Check to see if NTP daemon or service is running failed
PRVF-7573 : Sufficient swap size is not available on node "racnode-dc1-2" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB (2097148.0KB)]
PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm
PRCW-1015 : Wallet hawk does not exist.
CLSW-9: The cluster wallet to be operated on does not exist. :[1015]
PRVF-7573 : Sufficient swap size is not available on node "racnode-dc1-1" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB (2097148.0KB)]
PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm
PRCW-1015 : Wallet hawk does not exist.
CLSW-9: The cluster wallet to be operated on does not exist. :[1015]
[oracle@racnode-dc1-1 ~]$

BUGS?

Linux OL7/RHEL7: PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm (Doc ID 2065603.1)

Bug 24696235 – cvu check results shows errors PRCW-1015 and CLSW-9 (Doc ID 24696235.8)

[root@racnode-dc1-1 ~]# ocrdump
[root@racnode-dc1-1 ~]# cat OCRDUMPFILE |grep -i SYSTEM.WALLET
[SYSTEM.WALLET]
[SYSTEM.WALLET.APPQOSADMIN]
[SYSTEM.WALLET.MGMTDB]
[root@racnode-dc1-1 ~]#

There is indeed no wallet for database hawk. But if the wallet is created, will it only result in another bug?

cluvfy:PRCQ-1000 : An error occurred while establishing connection to database with user name “DBSNMP” (Doc ID 2288958.1)

PRCQ-1000 : An error occurred while establishing connection to database with user name "DBSNMP" and connect descriptor:
ORA-01017: invalid username/password; logon denied

Cluster Verification Utility (CVU) Check Fails With NTP Configuration (Doc ID 2162408.1)

Some Good References:

Slimming Down Oracle RAC 12c’s Resource Footprint

Oracle Grid Infrastructure: change the interval for the Cluster Verification Utility (ora.cvu)

Small Notes on Clusterware resource ora.cvu
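
Related to the ora.cvu interval reference above, the check interval can be inspected and adjusted with srvctl; a hedged example (720 is minutes between runs; verify the exact syntax for your version):

[oracle@racnode-dc1-1 ~]$ srvctl config cvu
[oracle@racnode-dc1-1 ~]$ srvctl modify cvu -checkinterval 720
[oracle@racnode-dc1-1 ~]$ srvctl status cvu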

July 22, 2018

Cluster Resource To Check When Patching RAC DBFS OGG

Filed under: GoldenGate,Grid Infrastructure,RAC — mdinh @ 2:41 pm

crsctl stat res|grep -i type|sort -u

TYPE=app.appvipx.type
TYPE=local_resource
TYPE=ora.asm.type
TYPE=ora.cluster_vip_net1.type
TYPE=ora.cvu.type
TYPE=ora.database.type
TYPE=ora.diskgroup.type
TYPE=ora.listener.type
TYPE=ora.mgmtdb.type
TYPE=ora.mgmtlsnr.type
TYPE=ora.network.type
TYPE=ora.oc4j.type
TYPE=ora.ons.type
TYPE=ora.scan_listener.type
TYPE=ora.scan_vip.type
TYPE=xag.goldengate.type


crsctl stat res -p -w 'TYPE = ora.database.type' | egrep '^NAME|AUTO_START'

crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'

crsctl stat res -t -w 'TYPE = xag.goldengate.type' -- OGG Resource
crsctl stat res -t -w 'TYPE = app.appvipx.type'    -- OGG VIP
crsctl stat res -t -w 'TYPE = local_resource'      -- DBFS Mount
crsctl stat res -t -w 'TYPE = ora.database.type'   -- DB resource (including DBFS)

You might ask, why not use crsctl stat res -t?

For this specific environment, crsctl stat res -t produces 190 lines of output, and I needed to focus on what's important.
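
A minimal sketch of wrapping these focused checks into a single pre/post-patching helper, built only from the commands above (hypothetical script name; filter quoting balanced):

#!/bin/sh
# check_patch_res.sh -- focused resource checks before/after patching RAC DBFS OGG (sketch)
crsctl stat res -t -w 'TYPE = xag.goldengate.type'                 # OGG resource
crsctl stat res -t -w 'TYPE = app.appvipx.type'                    # OGG VIP
crsctl stat res -t -w 'TYPE = local_resource'                      # DBFS mount
crsctl stat res -t -w 'TYPE = ora.database.type'                   # DB resources (including DBFS)
crsctl stat res -t -w '(TARGET != ONLINE) or (STATE != ONLINE)'    # anything not fully ONLINE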
