Thinking Out Loud

April 22, 2019

Final Conclusion for 18c Cluster upgrade state is [NORMAL]

Filed under: 18c,RAC — mdinh @ 3:46 am

Finally, I have reached a point I can live with for the Grid 18c upgrade, because the process runs to completion without any errors or intervention.

Note that the ACFS volume is created in the CRS disk group, which may not be ideal for production.

The Rapid Home Provisioning Server is configured but is not running.

The outcome differs depending on whether the upgrade is performed via the GUI or in silent mode, as demonstrated in 18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL].

Rene Antunez also demonstrates another method in UPGRADE ORACLE GI FROM 12.1 TO 18.5 FAILS AND LEAVES CRS WITH STATUS OF UPGRADE FINAL.

While we both encountered the same error, “Upgrading RHP Repository failed”, we accomplished the same result via different courses of action.

The unexplained and unanswered question is: why is the RHP Repository being upgraded at all?

Ultimately, it is cluvfy that changes the cluster upgrade state, as shown in gridSetupActions2019-04-21_02-10-47AM.log:

INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Executing RHPUPGRADE

INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'

INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Executing CLUVFY
INFO: [Apr 21, 2019 2:46:34 AM] Command /u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all

INFO: [Apr 21, 2019 2:51:37 AM] Completed Plugin named: cvu
INFO: [Apr 21, 2019 2:51:38 AM] ConfigClient.saveSession method called
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'

INFO: [Apr 21, 2019 2:51:38 AM] Successfully executed the flow in SILENT mode
INFO: [Apr 21, 2019 2:51:39 AM] inventory location is/u01/app/oraInventory
INFO: [Apr 21, 2019 2:51:39 AM] Exit Status is 0
INFO: [Apr 21, 2019 2:51:39 AM] Shutdown Oracle Grid Infrastructure 18c Installer

I would suggest running the last step via the GUI, if feasible, rather than in silent mode, to see what is happening:

/u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp
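
Since this launches the GUI, the session needs a working X display first; a minimal prerequisite check (the display value below is an assumption, use whatever X or VNC session you actually have):

# verify an X display is reachable before launching the wizard
export DISPLAY=:0          # assumption: a local X or VNC display
xdpyinfo >/dev/null 2>&1 && echo "X display OK" || echo "no X display"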

So how did I get myself into this predicament? I followed blindly. I trusted but did not verify.

18.1.0.0 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.4 and later on Oracle Linux (Doc ID 2369422.1)

Step 2.1 - Understand how MGMTDB is handled during upgrade

****************************************************************************************************
Upgrading GI 18.1 does not require upgrading MGMTDB nor does it require installing a MGMTDB if it currently does not exist. 
It's the user's discretion to maintain and upgrade the MGMTDB for their application needs.
****************************************************************************************************

Note: MGMTDB is required when using Rapid Home Provisioning.
The Cluster Health Monitor functionality will not work without MGMTDB configured.
If you consider installing a MGMTDB later, it is configured to use 1G of SGA and 500 MB of PGA.
The MGMTDB SGA will not be allocated in hugepages (because its init.ora setting 'use_large_pages' is set to false).

The following parameters from (Doc ID 2369422.1) were the root cause of all the issues in my test cases.

Because MGMTDB is not required, it made sense to set the following, but doing so resulted in chaos.

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false

How To Setup a Rapid Home Provisioning (RHP) Server and Client (Doc ID 2097026.1)

Starting with Oracle Grid Infrastructure 18.1.0.0.0, when you install Oracle Grid Infrastructure, the Rapid Home Provisioning Server is configured, by default, in the local mode to support the local switch home capability. 

The Rapid Home Provisioning Server is configured by default, and there does not appear to be any documented or easily found option to skip the installation or bypass the default.

Note that RHPS is used interchangeably here for both Rapid Home Provisioning Server and Service.

gridsetup_upgrade.rsp is used for the upgrade; the pertinent info is shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false
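
Even with oracle.install.crs.configureRHPS=false, RHPS ends up configured by default, so it is worth verifying its state after the upgrade (a minimal check; the status command also appears later in this post):

# confirm RHP server configuration and whether it is actually running
srvctl config rhpserver
srvctl status rhpserver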

Here is what worked from end to end without any failure or intervention.
The response file was ***not*** modified between test cases.

/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
-applyRU /media/patch/Jan2019/28828717 \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp
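
For clarity, here is the complete silent flow used across these posts, condensed into one outline (all commands appear verbatim in these posts; the node order follows the installer's own instructions):

# 1) as the grid software owner: upgrade and apply the RU in one pass
/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
    -applyRU /media/patch/Jan2019/28828717 \
    -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp

# 2) as root: run rootupgrade.sh on the local node first, then on the remaining node
#    /u01/18.3.0.0/grid/rootupgrade.sh

# 3) back as the install user: complete the configuration
/u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -silent \
    -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp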

Here is what the environment looks like after the 18c GI upgrade.

Notice ACFS is configured for RHP.

[oracle@racnode-dc1-1 ~]$ /media/patch/crs_Query.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
+ exit
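
For reference, a minimal sketch of what a crs_Query.sh-style script could contain, reconstructed from the set -x trace above (the contents of gi.env are an assumption; it sets the grid environment and PATH):

#!/bin/bash
# source the grid environment (ORACLE_SID, ORACLE_HOME, PATH, etc.)
. /media/patch/gi.env
# report version and patch levels for the local node and the cluster
crsctl query crs releaseversion
crsctl query crs softwareversion
crsctl query crs softwarepatch
crsctl query crs releasepatch
crsctl query crs activeversion -f
exit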

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc1-1 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.CRS.GHCHKPT.advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.chad
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
ora.helper
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.proxy_advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-1            169.254.7.214 172.16
                                                             .9.10,STABLE
ora.asm
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc1-1            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-1            Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ srvctl config mgmtdb -all
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
  /u01/18.3.0.0/grid on node racnode-dc1-1
Oracle user: oracle
Spfile: +CRS/_MGMTDB/PARAMETERFILE/spfile.271.1006137461
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: GIMR_DSCREP_10
PDB service: GIMR_DSCREP_10
Cluster name: vbox-rac-dc1
Management database is enabled.
Management database is individually enabled on nodes:
Management database is individually disabled on nodes:
Database instance: -MGMTDB

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.crs.ghchkpt.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w 'TYPE = ora.acfs.type' -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61
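
The same volume can also be inspected from the ASM side (a sketch using the standard asmcmd volinfo command):

# show volume attributes (device, mount path, state) for GHCHKPT in CRS
asmcmd volinfo -G CRS GHCHKPT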

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init
NAME=ora.drivers.acfs
TYPE=ora.drivers.acfs.type
TARGET=ONLINE
STATE=ONLINE on racnode-dc1-1

[oracle@racnode-dc1-1 ~]$ mount|egrep -i 'asm|ghchkpt'
oracleasmfs on /dev/oracleasm type oracleasmfs (rw,relatime)

[oracle@racnode-dc1-1 ~]$ acfsutil version
acfsutil version: 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ acfsutil registry
Mount Object:
  Device: /dev/asm/ghchkpt-61
  Mount Point: /opt/oracle/rhp_images/chkbase
  Disk Group: CRS
  Volume: GHCHKPT
  Options: none
  Nodes: all
  Accelerator Volumes:

[oracle@racnode-dc1-1 ~]$ acfsutil info fs
acfsutil info fs: ACFS-03036: no mounted ACFS file systems

[oracle@racnode-dc1-1 ~]$ acfsutil info storage
Diskgroup      Consumer      Space     Size With Mirroring  Usable Free  %Free   Path
CRS                          59.99              59.99          34.95       58%
DATA                         99.99              99.99          94.76       94%
FRA                          59.99              59.99          59.43       99%
----
unit of measurement: GB

[root@racnode-dc1-1 ~]# srvctl start filesystem -device /dev/asm/ghchkpt-61
PRCA-1138 : failed to start one or more file system resources:
CRS-2501: Resource 'ora.crs.ghchkpt.acfs' is disabled
[root@racnode-dc1-1 ~]#
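
The start fails because the resource is disabled. If the ghchkpt file system were actually wanted, enabling the resource first should allow the start (a sketch only, run as root, and only relevant if RHP will be used):

# enable the disabled ACFS file system resource, then start it
srvctl enable filesystem -device /dev/asm/ghchkpt-61
srvctl start filesystem -device /dev/asm/ghchkpt-61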

[oracle@racnode-dc1-1 ~]$ asmcmd -V
asmcmd version 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    35784                0           35784                        Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304    102396    97036                0           97036                        N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    60856                0           60856                        N  FRA/

[oracle@racnode-dc1-1 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is not running

[oracle@racnode-dc1-1 ~]$ ps -ef|grep [p]mon
oracle    3571     1  0 02:40 ?        00:00:03 mdb_pmon_-MGMTDB
oracle   17109     1  0 Apr20 ?        00:00:04 asm_pmon_+ASM1
oracle   17531     1  0 Apr20 ?        00:00:06 ora_pmon_hawk1
[oracle@racnode-dc1-1 ~]$

Let me show you how convoluted this is.
In my case, it’s easy because only 2 actions were performed.
Can you tell which GridSetupAction was performed based on the directory name alone?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 18:59 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 18:56 GridSetupActions2019-04-21_02-10-47AM

This is how you can find out.

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 19:20 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 19:22 GridSetupActions2019-04-21_02-10-47AM

================================================================================
### gridSetup.sh -silent -skipPrereqs -applyRU
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ ll
total 13012
-rw-r----- 1 oracle oinstall   20562 Apr 20 19:09 AttachHome2019-04-20_06-51-48PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall       0 Apr 20 18:59 gridSetupActions2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall 7306374 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall 2374182 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall 3582408 Apr 20 18:59 installerPatchActions_2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall       0 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall       0 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall     157 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall      29 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.out.racnode-dc1-2
drwxrwx--- 2 oracle oinstall    4096 Apr 20 19:01 temp_ob
-rw-r----- 1 oracle oinstall   12467 Apr 20 19:09 time2019-04-20_06-51-48PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep ROOTSH_LOCATION gridSetupActions2019-04-20_06-51-48PM.log
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/rootupgrade.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep "Execute Root Scripts successful" time2019-04-20_06-51-48PM.log
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914

================================================================================
### gridSetup.sh -executeConfigTools -silent
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ ll
total 1116
-rw-r----- 1 oracle oinstall       0 Apr 21 02:10 gridSetupActions2019-04-21_02-10-47AM.err
-rw-r----- 1 oracle oinstall  122568 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall 1004378 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.out
-rw-r----- 1 oracle oinstall     129 Apr 21 02:10 installerPatchActions_2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall    3155 Apr 21 02:51 time2019-04-21_02-10-47AM.log

oracle@racnode-dc1-1:hawk1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep rhprepos *
gridSetupActions2019-04-21_02-10-47AM.log:INFO:  [Apr 21, 2019 2:45:37 AM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep executeSelectedTools gridSetupActions2019-04-21_02-10-47AM.log
INFO:  [Apr 21, 2019 2:11:37 AM] Entering ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate oAggregate=oracle.crs:oracle.crs:18.0.0.0.0:common
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate action assigned
INFO:  [Apr 21, 2019 2:51:38 AM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 21, 2019 2:51:38 AM] Exiting ConfigClient.executeSelectedToolsInAggregate method
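
To avoid grepping each directory by hand, a hypothetical helper that labels every GridSetupActions directory by the commands its session actually executed:

for d in /u01/app/oraInventory/logs/GridSetupActions*; do
  echo "== $d"
  # 'Executing RHPUPGRADE', 'Executing CLUVFY' and 'Command /u01/...' lines
  # identify what each session ran
  grep -h 'Executing \|Command /u01' "$d"/gridSetupActions*.log | sort -u
done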

It might be better to use the GUI if available, but be careful.

For OUI installations or the execution of critical scripts, it is recommended to use VNC to avoid problems in case the connection with the server is lost.

I was using X and the connection was lost during the upgrade. It was the kiss of death, with this being the last screen capture.

Rene’s quote:

After looking for information in MOS, there wasn’t much that could lead me on how to solve the issue, just a lot of bugs related to the RHP repository.

I was lucky enough to get on a call with a good friend (@_rickgonzalez ) who is the PM of the RHP and we were able to work through it. So below is what I was able to do to solve the issue.

Also, it was confirmed by them that this is a bug in the upgrade process of 18.X, so hopefully they will be fixing it soon.

I concur and conclude that the process for the GI 18c upgrade is overly complicated, convoluted, contradictory, and not clearly documented, all having to do with MGMTDB and the Rapid Home Provisioning (RHP) repository.

Unless you’re lucky or know someone, good luck with your upgrade.

Lastly, it would be greatly appreciated if you would share your upgrade experiences and/or results.

Did you use GUI or silent?


April 16, 2019

18c Upgrade: Failed gridSetup.sh -executeConfigTools: Cluster upgrade state is [UPGRADE FINAL]

Filed under: 18c,RAC — mdinh @ 9:53 pm

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

This is a multi-part series on the 18c upgrade; I suggest reading the above 2 posts first.

Commands for gridSetup.sh

+ /u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs -applyRU /media/patch/Jan2019/28828717 -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false
Preparing the home to patch...
Applying the patch /media/patch/Jan2019/28828717...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/installerPatchActions_2019-04-16_06-19-12AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/18.3.0.0/grid/install/response/grid_2019-04-16_06-19-12AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/gridSetupActions2019-04-16_06-19-12AM.log

As a root user, execute the following script(s):
        1. /u01/18.3.0.0/grid/rootupgrade.sh

Execute /u01/18.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc1-1, racnode-dc1-2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp [-silent]


+ exit
oracle@racnode-dc1-1::/home/oracle
$

Basically, the error provided is utterly useless.

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
oracle@racnode-dc1-1::/home/oracle

Check logs from directory /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ cd /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ ls -alrt
total 1072
-rw-r----- 1 oracle oinstall     130 Apr 16 12:59 installerPatchActions_2019-04-16_12-59-56PM.log
-rw-r----- 1 oracle oinstall       0 Apr 16 12:59 gridSetupActions2019-04-16_12-59-56PM.err
drwxrwx--- 8 oracle oinstall    4096 Apr 16 13:01 ..
-rw-r----- 1 oracle oinstall 1004378 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.out
-rw-r----- 1 oracle oinstall    2172 Apr 16 13:01 time2019-04-16_12-59-56PM.log ***
-rw-r----- 1 oracle oinstall   73047 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.log ***
drwxrwx--- 2 oracle oinstall    4096 Apr 16 13:01 .

Check time2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ cat time2019-04-16_12-59-56PM.log
 # Message # ElapsedTime # Current Time ( ms )
 # Starting step:INITIALIZE_ACTION of state:init #  0  # 1555412405106
 # Finished step:INITIALIZE_ACTION of state:init # 1 # 1555412405106
 # Starting step:EXECUTE of state:init #  0  # 1555412405108
 # Finished step:EXECUTE of state:init # 3 # 1555412405111
 # Starting step:VALIDATE of state:init #  0  # 1555412405113
 # Finished step:VALIDATE of state:init # 2 # 1555412405115
 # Starting step:TRANSITION of state:init #  0  # 1555412405115
 # Finished step:TRANSITION of state:init # 2 # 1555412405117
 # Starting step:EXECUTE of state:CRSConfigTools #  0  # 1555412405117
 # Finished step:EXECUTE of state:CRSConfigTools # 813 # 1555412405930
 # Starting step:VALIDATE of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:VALIDATE of state:CRSConfigTools # 0 # 1555412405930
 # Starting step:TRANSITION of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:TRANSITION of state:CRSConfigTools # 26591 # 1555412432521
 # Starting step:INITIALIZE_ACTION of state:setup #  0  # 1555412432521
 # Finished step:INITIALIZE_ACTION of state:setup # 0 # 1555412432521
 # Starting step:EXECUTE of state:setup #  0  # 1555412432522
 # Finished step:EXECUTE of state:setup # 6 # 1555412432528
 # Configuration in progress. #  0  # 1555412436788
 # Update Inventory in progress. #  0  # 1555412437768
 # Update Inventory successful. # 52612 # 1555412490380
 # Upgrading RHP Repository in progress. #  0  # 1555412490445

================================================================================
 # Upgrading RHP Repository failed. # 12668 # 1555412503112
================================================================================

 # Starting step:VALIDATE of state:setup #  0  # 1555412503215
 # Finished step:VALIDATE of state:setup # 15 # 1555412503230
 # Starting step:TRANSITION of state:setup #  0  # 1555412503230
 # Finished step:TRANSITION of state:setup # 0 # 1555412503230
 # Starting step:EXECUTE of state:finish #  0  # 1555412503230
 # Finished step:EXECUTE of state:finish # 6 # 1555412503236
 # Starting step:VALIDATE of state:finish #  0  # 1555412503237
 # Finished step:VALIDATE of state:finish # 1 # 1555412503238
 # Starting step:TRANSITION of state:finish #  0  # 1555412503238
 # Finished step:TRANSITION of state:finish # 0 # 1555412503238

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
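
As an aside, the last column in time*.log is epoch time in milliseconds, so the failure timestamp can be cross-checked (a quick sketch, assuming GNU date):

# 1555412503112 ms -> drop the milliseconds and convert
date -d @1555412503
# in this environment's timezone this should print: Tue Apr 16 13:01:43 CEST 2019,
# matching the 1:01:43 PM failure in the installer log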

Check gridSetupActions2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -B2 -A100 'Executing RHPUPGRADE' gridSetupActions2019-04-16_12-59-56PM.log
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn.handleProcess() entered.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: getting configAssistantParmas.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: checking secretArguments.
INFO:  [Apr 16, 2019 1:01:30 PM] No arguments to pass to stdin
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: starting read loop.
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 16, 2019 1:01:43 PM] Exiting ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus SUCCESS_MINUS_RECTOOL to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Calling event ConfigSessionEnding
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.endSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Configuration
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus FAILURE to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] All forked task are completed at state setup
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <setup>

================================================================================
WARNING:  [Apr 16, 2019 1:01:43 PM] [WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
================================================================================

INFO:  [Apr 16, 2019 1:01:43 PM] Advice is CONTINUE
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <setup>
INFO:  [Apr 16, 2019 1:01:43 PM] Verifying route success
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Executing action at state finish
INFO:  [Apr 16, 2019 1:01:43 PM] FinishAction Actions.execute called
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] Completed executing action at state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Moved to state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <finish>
WARNING:  [Apr 16, 2019 1:01:43 PM] Validation disabled for the state finish
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Terminating all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Terminated all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Successfully executed the flow in SILENT mode
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] inventory location is/u01/app/oraInventory
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application

================================================================================
INFO:  [Apr 16, 2019 1:01:43 PM] Exit Status is -1
INFO:  [Apr 16, 2019 1:01:43 PM] Shutdown Oracle Grid Infrastructure 18c Installer
INFO:  [Apr 16, 2019 1:01:43 PM] Unloading Setup Driver
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

The Exit Status of -1 is probably why the cluster upgrade state is [UPGRADE FINAL].

Why is the RHP Repository being upgraded when oracle_install_crs_ConfigureRHPS=false?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -i rhp *
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:04 PM] Setting value for the property:oracle_install_crs_ConfigureRHPS in the bean:CRSInstallSettings
gridSetupActions2019-04-16_12-59-56PM.log: oracle_install_crs_ConfigureRHPS                       false
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Created config job for rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Selecting job named 'Upgrading RHP Repository' for retry
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Started Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository in progress. #  0  # 1555412490445
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository failed. # 12668 # 1555412503112
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

gridsetup_upgrade.rsp is used for the upgrade; the pertinent info is shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

ora.cvu does not report any errors.

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=
oracle@racnode-dc1-1:+ASM1:/home/oracle
$

Running rhprepos upgradeSchema -fromversion 12.1.0.2.0 manually – FAILED.

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
PRCT-1474 : failed to run 'mgmtca' on node racnode-dc1-2.

oracle@racnode-dc1-1::/home/oracle
$ ps -ef|grep pmon
oracle    9722  4804  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle   10380     1  0 13:46 ?        00:00:01 asm_pmon_+ASM1
oracle   10974     1  0 13:46 ?        00:00:01 apx_pmon_+APX1
oracle   11218     1  0 13:47 ?        00:00:02 ora_pmon_hawk1
oracle@racnode-dc1-1::/home/oracle
$ ssh racnode-dc1-2
Last login: Tue Apr 16 18:44:30 2019

----------------------------------------
Welcome to racnode-dc1-2
OracleLinux 7.3 x86_64

FQDN: racnode-dc1-2.internal.lab
IP:   10.0.2.15

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    2
Memory:    5709 MB
Kernel:    4.1.12-61.1.18.el7uek.x86_64

----------------------------------------

oracle@racnode-dc1-2::/home/oracle
$ ps -ef|grep pmon
oracle    9219     1  0 13:44 ?        00:00:01 asm_pmon_+ASM2
oracle   10113     1  0 13:45 ?        00:00:01 apx_pmon_+APX2
oracle   10619     1  0 13:45 ?        00:00:01 ora_pmon_hawk2
oracle   13200 13178  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle@racnode-dc1-2::/home/oracle
$

Notice that no MGMTDB pmon process is running on either node, which explains why mgmtca failed. In conclusion, the silent upgrade process is poorly documented at best.

I am starting to wonder if the following parameters contributed to the issue:

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false

April 15, 2019

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

Filed under: 18c,RAC — mdinh @ 12:54 am

There have been a lot of discussions about Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]
and how cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade could have changed the cluster upgrade state to [NORMAL].

When running gridSetup.sh -executeConfigTools in silent mode, the next step, cluvfy, is not run.

[oracle@racnode-dc1-1 ~]$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-04-15_01-02-06AM

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
[oracle@racnode-dc1-1 ~]$
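
Since silent mode skips it, the cluvfy collection step can be run manually afterwards; this is the same command the installer itself runs when cluvfy does execute (see the installer log in the post above):

/u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all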

When running gridSetup.sh -executeConfigTools in the GUI, there is an option to ignore the failed 'Upgrading RHP Repository' step and continue to the next step, which runs cluvfy.

I don’t think cluvfy itself modifies the state of the cluster; rather, ora.cvu did, due to the existence of the following files.

[root@racnode-dc1-1 install]# pwd
/u01/app/oracle/crsdata/@global/cvu/baseline/install
[root@racnode-dc1-1 install]# ll
total 36000
-rw-r--r-- 1 oracle oinstall 35958465 Apr 14 06:05 grid_install_12.1.0.2.0.xml
-rw-r--r-- 1 oracle oinstall   901803 Apr 15 01:42 grid_install_18.0.0.0.0.zip
[root@racnode-dc1-1 install]# 

When checking RESULTS from ora.cvu, there are no errors.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-1 ~]$
[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=
[oracle@racnode-dc1-1 ~]$ 

Hell! What do I know? I am just a RAC novice and happy the cluster state is what it should be.

gridsetup_upgrade.rsp was used for the upgrade.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

April 13, 2019

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

Filed under: 18c,RAC — mdinh @ 11:13 pm

After upgrading and applying the RU for Grid 18c, the cluster upgrade state was not NORMAL.

The cluster upgrade state was [UPGRADE FINAL], which I had never seen before.

Searching Oracle Support was useless as I was only able to find the following states:

The cluster upgrade state is [NORMAL]
The cluster upgrade state is [FORCED]
The cluster upgrade state is [ROLLING PATCH]

The following checks were performed after upgrade:

[oracle@racnode-dc1-1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].

[oracle@racnode-dc1-1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].

[oracle@racnode-dc1-1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
[oracle@racnode-dc1-1 ~]$

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-1 ~]#


[oracle@racnode-dc1-2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]

[oracle@racnode-dc1-2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]

[oracle@racnode-dc1-2 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2532936542].

[oracle@racnode-dc1-2 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].

[oracle@racnode-dc1-2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

[root@racnode-dc1-2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-2 ~]#

Checked OCR per Grid Infrastructure Upgrade : The cluster upgrade state is [FORCED] (Doc ID 2482606.1).
I was desperate, and OCR was fine.

[root@racnode-dc1-1 ~]# olsnodes -c
vbox-rac-dc1

[root@racnode-dc1-1 ~]# olsnodes -t -a -s -n
racnode-dc1-1   1       Active  Hub     Unpinned
racnode-dc1-2   2       Active  Hub     Unpinned

[root@racnode-dc1-1 ~]# $GRID_HOME/bin/ocrdump /tmp/ocrdump.txt

[root@racnode-dc1-1 ~]# grep SYSTEM.version.hostnames /tmp/ocrdump.txt
[SYSTEM.version.hostnames]
[SYSTEM.version.hostnames.racnode-dc1-1]
[SYSTEM.version.hostnames.racnode-dc1-1.patchlevel]
[SYSTEM.version.hostnames.racnode-dc1-1.site]
[SYSTEM.version.hostnames.racnode-dc1-2]
[SYSTEM.version.hostnames.racnode-dc1-2.patchlevel]
[SYSTEM.version.hostnames.racnode-dc1-2.site]
[root@racnode-dc1-1 ~]#
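
For completeness, OCR integrity can also be checked directly with the standard tool (run as root):

# report OCR version, space usage and integrity check result
$GRID_HOME/bin/ocrcheck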

Thanks to my friend Vlatko J. https://twitter.com/jvlatko

Run cluvfy:

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ which cluvfy
/u01/18.3.0.0/grid/bin/cluvfy

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade

Baseline collected.
Collection report for this execution is saved in file "/u01/app/oracle/crsdata/@global/cvu/baseline/install/grid_install_18.0.0.0.0.zip".

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         Apr 13, 2019 11:05:58 PM
CVU home:                     /u01/18.3.0.0/grid/
User:                         oracle
oracle@racnode-dc1-1:+ASM1:/home/oracle
$

After running cluvfy, the cluster upgrade state is [NORMAL].

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-1 ~]#

[root@racnode-dc1-2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-2 ~]#

February 20, 2019

opatchauto is not that dumb

Filed under: 12c,PSU,RAC — mdinh @ 11:41 pm

I find it ironic that we want to automate yet fear automation.

Per the documentation ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1),
the following patch is required to implement ACFS:

p22810422_12102160419forACFS_Linux-x86-64.zip
Patch for Bug# 22810422
UEKR4 SUPPORT FOR ACFS(Patch 22810422)

I had inquired why opatchauto was not used to patch the entire system versus manually patching GI ONLY.

To patch GI home and all Oracle RAC database homes of the same version:
# opatchauto apply _UNZIPPED_PATCH_LOCATION_/22810422 -ocmrf _ocm response file_

OCM is not included in the OPatch binaries since OPatch version 12.2.0.1.5; therefore, -ocmrf is not needed.
The reason for GI-only patching is simply that we only need the patch to enable ACFS support.

The rationale makes sense, and typically ACFS is only applied to the GI home.
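
For what it's worth, if GI-only patching is the goal, opatchauto can also be pointed at a single home with -oh rather than patching manually (a sketch, assuming an opatchauto version that supports -oh; the paths are from this environment):

# as root; restrict opatchauto to the grid home only
export PATH=$PATH:/u01/app/12.1.0.1/grid/OPatch
opatchauto apply /sf_OracleSoftware/22810422 -oh /u01/app/12.1.0.1/grid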

Being curious, shouldn’t opatchauto know which homes the patch applies to?
Wouldn’t it be easier to execute opatchauto versus performing manual steps?

What do you think and which approach would you use?

Here are the results from applying patch 22810422.


[oracle@racnode-dc1-2 22810422]$ pwd
/sf_OracleSoftware/22810422

[oracle@racnode-dc1-2 22810422]$ sudo su -
Last login: Wed Feb 20 23:13:52 CET 2019 on pts/0

[root@racnode-dc1-2 ~]# . /media/patch/gi.env
ORACLE_SID = [root] ? The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"

[root@racnode-dc1-2 ~]# export PATH=$PATH:$GRID_HOME/OPatch

[root@racnode-dc1-2 ~]# opatchauto apply /sf_OracleSoftware/22810422 -analyze

OPatchauto session is initiated at Wed Feb 20 23:21:46 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-21-53PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-22-12PM.log
The id for this session is YBG6

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:racnode-dc1-2
RAC Home:/u01/app/oracle/12.1.0.1/db1


==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-2
CRS Home:/u01/app/12.1.0.1/grid


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-22-24PM_1.log



OPatchauto session completed at Wed Feb 20 23:24:53 2019
Time taken to complete the session 3 minutes, 7 seconds


[root@racnode-dc1-2 ~]# opatchauto apply /sf_OracleSoftware/22810422

OPatchauto session is initiated at Wed Feb 20 23:25:12 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-25-19PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-25-38PM.log
The id for this session is 3BYS

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1


Preparing to bring down database service on home /u01/app/oracle/12.1.0.1/db1
Successfully prepared home /u01/app/oracle/12.1.0.1/db1 to bring down database service


Bringing down CRS service on home /u01/app/12.1.0.1/grid
Prepatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-2_2019-02-20_11-28-00PM.log
CRS service brought down successfully on home /u01/app/12.1.0.1/grid


Start applying binary patch on home /u01/app/12.1.0.1/grid
Binary patch applied successfully on home /u01/app/12.1.0.1/grid


Starting CRS service on home /u01/app/12.1.0.1/grid
Postpatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-2_2019-02-20_11-36-59PM.log
CRS service started successfully on home /u01/app/12.1.0.1/grid


Preparing home /u01/app/oracle/12.1.0.1/db1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/12.1.0.1/db1 successfully after database service restarted

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-2
RAC Home:/u01/app/oracle/12.1.0.1/db1
Summary:

==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-2
CRS Home:/u01/app/12.1.0.1/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-30-39PM_1.log



OPatchauto session completed at Wed Feb 20 23:40:10 2019
Time taken to complete the session 14 minutes, 58 seconds
[root@racnode-dc1-2 ~]#

[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
ORACLE_SID = [hawk2] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
22810422;ACFS Interim patch for 22810422

OPatch succeeded.

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM2] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/12.1.0.1/db1".

OPatch succeeded.
[oracle@racnode-dc1-2 ~]$

====================================================================================================

### Checking resources while patching racnode-dc1-1
[oracle@racnode-dc1-2 ~]$ crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE                               STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  INTERMEDIATE racnode-dc1-2            FAILED OVER,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.asm
               ONLINE  ONLINE       racnode-dc1-2            Started,STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-2            169.254.178.60 172.1
                                                             6.9.11,STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  INTERMEDIATE racnode-dc1-2            FAILED OVER,STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-2 ~]$


====================================================================================================

[oracle@racnode-dc1-1 ~]$ sudo su -
Last login: Wed Feb 20 23:02:19 CET 2019 on pts/0

[root@racnode-dc1-1 ~]# . /media/patch/gi.env
ORACLE_SID = [root] ? The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"

[root@racnode-dc1-1 ~]# export PATH=$PATH:$GRID_HOME/OPatch

[root@racnode-dc1-1 ~]# opatchauto apply /sf_OracleSoftware/22810422 -analyze

OPatchauto session is initiated at Wed Feb 20 23:43:46 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-43-54PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-44-12PM.log
The id for this session is M9KF

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:racnode-dc1-1
RAC Home:/u01/app/oracle/12.1.0.1/db1


==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-1
CRS Home:/u01/app/12.1.0.1/grid


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-44-26PM_1.log



OPatchauto session completed at Wed Feb 20 23:46:31 2019
Time taken to complete the session 2 minutes, 45 seconds

[root@racnode-dc1-1 ~]# opatchauto apply /sf_OracleSoftware/22810422

OPatchauto session is initiated at Wed Feb 20 23:47:13 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-47-20PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-47-38PM.log
The id for this session is RHMR

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1


Preparing to bring down database service on home /u01/app/oracle/12.1.0.1/db1
Successfully prepared home /u01/app/oracle/12.1.0.1/db1 to bring down database service


Bringing down CRS service on home /u01/app/12.1.0.1/grid
Prepatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-1_2019-02-20_11-50-01PM.log
CRS service brought down successfully on home /u01/app/12.1.0.1/grid


Start applying binary patch on home /u01/app/12.1.0.1/grid
Binary patch applied successfully on home /u01/app/12.1.0.1/grid


Starting CRS service on home /u01/app/12.1.0.1/grid
Postpatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-1_2019-02-20_11-58-57PM.log
CRS service started successfully on home /u01/app/12.1.0.1/grid


Preparing home /u01/app/oracle/12.1.0.1/db1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/12.1.0.1/db1 successfully after database service restarted

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-1
RAC Home:/u01/app/oracle/12.1.0.1/db1
Summary:

==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-1
CRS Home:/u01/app/12.1.0.1/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-52-37PM_1.log



OPatchauto session completed at Thu Feb 21 00:01:15 2019
Time taken to complete the session 14 minutes, 3 seconds

[root@racnode-dc1-1 ~]# logout

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
ORACLE_SID = [hawk1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
22810422;ACFS Interim patch for 22810422

OPatch succeeded.

[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/12.1.0.1/db1".

OPatch succeeded.
[oracle@racnode-dc1-1 ~]$

August 25, 2018

How Backup Spfile to Pfile Saved My Arse

Filed under: ASM,RAC,Recovery — mdinh @ 4:43 pm

Typically, when I perform a backup review, I always suggest adding the following:

run {
allocate channel c1 device type disk;
SQL "alter database backup controlfile to trace as ''/tmp/ctl_@_trace.sql'' reuse resetlogs";
SQL "create pfile=''/tmp/init@.ora'' from spfile";
release channel c1;
}

Example:

RMAN> run {
allocate channel c1 device type disk;
2> 3> SQL "alter database backup controlfile to trace as ''/tmp/ctl_@_trace.sql'' reuse resetlogs";
4> SQL "create pfile=''/tmp/init@.ora'' from spfile";
5> release channel c1;
6> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=31 instance=hawk1 device type=DISK

sql statement: alter database backup controlfile to trace as ''/tmp/ctl_@_trace.sql'' reuse resetlogs

sql statement: create pfile=''/tmp/init@.ora'' from spfile

released channel: c1

RMAN> exit

[oracle@racnode-dc1-1 tmp]$ ll /tmp/init*
-rw-r--r-- 1 oracle dba 1978 Aug 25 21:11 /tmp/inithawk1.ora
[oracle@racnode-dc1-1 tmp]$ ll /tmp/ctl*
-rw-r--r-- 1 oracle dba 7318 Aug 25 21:11 /tmp/ctl_hawk1_trace.sql
[oracle@racnode-dc1-1 tmp]$ sysresv|tail -1
Oracle Instance alive for sid "hawk1"
[oracle@racnode-dc1-1 tmp]$

When on RAC, do not create the pfile in its default destination ($ORACLE_HOME/dbs on the local node), e.g. SQL "create pfile from spfile";
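
A safer habit is to always give an explicit path, as in the backup suggestion at the top of this post. A minimal sketch (Oracle expands @ in the file name to the ORACLE_SID):

sqlplus -s / as sysdba <<'EOF'
create pfile='/tmp/init@.ora' from spfile;
EOF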

ORIGINAL LOCATION:
====================================================================================================
$ asmcmd ls -l +DATA/SOXPA/PARAMETERFILE
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 17 10:00:00  Y    spfile.321.984394591
PARAMETERFILE  UNPROT  COARSE   AUG 17 10:00:00  N    spfilehawka.ora => +DATA/HAWKA/PARAMETERFILE/spfile.321.984394591

BIG OOPS: NOTHING THERE!
====================================================================================================
$ asmcmd ls -l +DATA/HAWKA/PARAMETERFILE

ERROR: Created wrong spfile.
====================================================================================================
$ asmcmd ls -l +DATA/WH02A/PARAMETERFILE
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 23 19:00:00  Y    spfile.380.984672023
PARAMETERFILE  UNPROT  COARSE   AUG 23 19:00:00  N    spfilewh02a.ora => +DATA/WH02A/PARAMETERFILE/spfile.380.984672023
PARAMETERFILE  UNPROT  COARSE   AUG 24 14:00:00  N    spfilehawka.ora => +DATA/WH01A/PARAMETERFILE/spfile.321.985008497

Create copy of pfile.
====================================================================================================
$ ll *.good
-rw-r--r--. 1 oracle oinstall 2207 Aug 24 14:57 inithawka4.ora.good

Check controlfile location for pfile.
====================================================================================================
$ cat inithawka4.ora.good 
*.control_files='+DATA/hawka/controlfile/current.272.984393927','+FRA/hawka/controlfile/current.307.984393927'#Restore Controlfile

Check database and create new spfile from pfile.
====================================================================================================
HOST04:(SYS@hawka4):PRIMARY> show parameter spfile;

NAME                           TYPE        VALUE
------------------------------ ----------- ----------------------------------------------------------------------------------------------------
spfile                         string      +DATA/hawka/parameterfile/spfilehawka.ora

HOST04:(SYS@hawka4):PRIMARY> show parameter control_file 

NAME                           TYPE        VALUE
------------------------------ ----------- ----------------------------------------------------------------------------------------------------
control_file_record_keep_time  integer     7
control_files                  string      +DATA/hawka/controlfile/current.272.984393927, +FRA/hawka/controlfile/current.307.984393927

HOST04:(SYS@hawka4):PRIMARY> create spfile='+DATA/HAWKA/PARAMETERFILE/spfilehawka4.ora' from pfile='/u01/app/oracle/db/11.2.0.4/dbs/inithawka4.ora.good';

File created.

HOST04:(SYS@hawka4):PRIMARY> exit

rmalias and mkalias for NEW SPFILE.
====================================================================================================
$ asmcmd ls -l +DATA/HAWKA/PARAMETERFILE
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  Y    spfile.1077.985015077
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  N    spfilehawka4.ora => +DATA/HAWKA/PARAMETERFILE/spfile.1077.985015077

$ asmcmd
ASMCMD> cd +DATA/HAWKA/PARAMETERFILE
ASMCMD> ls -lt
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  N    spfilehawka4.ora => +DATA/HAWKA/PARAMETERFILE/spfile.1077.985015077
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  Y    spfile.1077.985015077

ASMCMD> rmalias spfilehawka4.ora

ASMCMD> mkalias +DATA/HAWKA/PARAMETERFILE/spfile.1077.985015077 spfilehawka.ora

ASMCMD> ls -lt
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  N    spfilehawka.ora => +DATA/HAWKA/PARAMETERFILE/spfile.1077.985015077
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  Y    spfile.1077.985015077
ASMCMD> exit

$ asmcmd ls -l +DATA/HAWKA/PARAMETERFILE
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  Y    spfile.1077.985015077
PARAMETERFILE  UNPROT  COARSE   AUG 24 15:00:00  N    spfilehawka.ora => +DATA/HAWKA/PARAMETERFILE/spfile.1077.985015077
$ 

Verify pfile can be created from spfile.
====================================================================================================
HOST04:(SYS@hawka4):PRIMARY> create pfile='/tmp/init@.ora' from spfile;

File created.

HOST04:(SYS@hawka4):PRIMARY> exit
====================================================================================================
oracle@p2dbccx04:hawka4:/home/oracle
$ ll /tmp/inithawka4.ora
-rw-r--r--. 1 oracle asmadmin 2207 Aug 24 15:24 /tmp/inithawka4.ora

July 22, 2018

Cluster Resource To Check When Patching RAC DBFS OGG

Filed under: GoldenGate,Grid Infrastructure,RAC — mdinh @ 2:41 pm

crsctl stat res|grep -i type|sort -u

TYPE=app.appvipx.type
TYPE=local_resource
TYPE=ora.asm.type
TYPE=ora.cluster_vip_net1.type
TYPE=ora.cvu.type
TYPE=ora.database.type
TYPE=ora.diskgroup.type
TYPE=ora.listener.type
TYPE=ora.mgmtdb.type
TYPE=ora.mgmtlsnr.type
TYPE=ora.network.type
TYPE=ora.oc4j.type
TYPE=ora.ons.type
TYPE=ora.scan_listener.type
TYPE=ora.scan_vip.type
TYPE=xag.goldengate.type


crsctl stat res -p -w 'TYPE = ora.database.type' | egrep '^NAME|AUTO_START'

crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE))'

crsctl stat res -t -w 'TYPE = xag.goldengate.type' -- OGG Resource
crsctl stat res -t -w 'TYPE = app.appvipx.type'    -- OGG VIP
crsctl stat res -t -w 'TYPE = local_resource'      -- DBFS Mount
crsctl stat res -t -w 'TYPE = ora.database.type'   -- DB resource (including DBFS)

You might ask, why not use crsctl stat res -t?

For this specific environment, there are 190 lines of output, and I needed to focus on what's important.
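
To capture all of these checks in one pass before and after patching, a minimal wrapper along these lines should work; the resource types are the ones listed above, so adjust them for your environment:

#!/bin/sh
# Snapshot the OGG/DBFS-related cluster resources.
for t in xag.goldengate.type app.appvipx.type local_resource ora.database.type
do
echo "===== TYPE = $t ====="
crsctl stat res -t -w "TYPE = $t"
done
# Anything not fully online:
crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE))'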

July 20, 2018

Patching GoldenGate with DBFS

Filed under: GoldenGate,Grid Infrastructure,RAC — mdinh @ 11:41 pm

There seems to be no consistency as to which directories should be on DBFS when GoldenGate is implemented with RAC.

Here I will share my thoughts based on issues encountered.

oracle@test1:/opt/oracle/12.2.0/ggs01$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.170221 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_170123.1033_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Jan 23 2017 21:54:15
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.



GGSCI (test1) 1> create subdirs

Creating subdirectories under current directory /oracle/12.2.0/ggs01

Parameter files                /oracle/12.2.0/ggs01/dirprm: created
Report files                   /oracle/12.2.0/ggs01/dirrpt: created
Checkpoint files               /oracle/12.2.0/ggs01/dirchk: created
Process status files           /oracle/12.2.0/ggs01/dirpcs: created
SQL script files               /oracle/12.2.0/ggs01/dirsql: created
Database definitions files     /oracle/12.2.0/ggs01/dirdef: created
Extract data files             /oracle/12.2.0/ggs01/dirdat: created
Temporary files                /oracle/12.2.0/ggs01/dirtmp: created
Credential store files         /oracle/12.2.0/ggs01/dircrd: created
Masterkey wallet files         /oracle/12.2.0/ggs01/dirwlt: created
Dump files                     /oracle/12.2.0/ggs01/dirdmp: created


GGSCI (test1) 2> 


$ ls -ld dir*
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirchk -> /dbfs_client/ggs01/dirchk
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dircrd -> /dbfs_client/ggs01/dircrd
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirdat -> /dbfs_client/ggs01/dirdat
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirdef -> /dbfs_client/ggs01/dirdef
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirdmp -> /dbfs_client/ggs01/dirdmp
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirout -> /dbfs_client/ggs01/dirout
drwxr-x--- 2 ggsuser oinstall 4096 Mar 20  2017 dirpcs
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirprm -> /dbfs_client/ggs01/dirprm
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirrpt -> /dbfs_client/ggs01/dirrpt
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirsql -> /dbfs_client/ggs01/dirsql

GoldenGate maintains data that it swaps to disk in dirtmp.
With all the issues around DBFS, it might be better kept on local storage.
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirtmp -> /dbfs_client/ggs01/dirtmp

lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirwlt -> /dbfs_client/ggs01/dirwlt
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirwww -> /dbfs_client/ggs01/dirwww
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 BR -> /dbfs_client/ggs01/BR

Here are the errors from applying the GoldenGate patchset.

The errors were due to the stack being down after running opatchauto apply -norestart, which leaves DBFS offline for the instance.

These errors can be avoided if the directories are local, as they should be; a sketch of moving one back to local storage follows the error listing.

The following actions have failed:
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirout
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
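
Based on the failures above, here is a minimal sketch of moving such a directory back to local storage; paths are the ones from this post, and the GoldenGate processes should be stopped first:

cd /oracle/12.2.0/ggs01
rm dirout                # removes only the symlink; the DBFS content is untouched
mkdir dirout
# Optionally carry over existing files while DBFS is still mounted:
# cp -rp /dbfs_client/ggs01/dirout/. dirout/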

Use an Oracle RAC database as a baseline.
Are alert logs, trace files, etc. on a shared volume when the Oracle software is installed locally?

July 19, 2018

Playing With Service Relocation 12c

Filed under: 12c,RAC — mdinh @ 2:14 pm

With 12c, use the verbose option (-v) to display running services.
[oracle@racnode-dc1-1 rac_relocate]$ srvctl -V
srvctl version: 12.1.0.2.0

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status instance -d hawk -i hawk1 -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status instance -d hawk -i hawk2 -v
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

There is an option to provide a comma-delimited list of services to check their status.
Unfortunately, the same option is not available for relocation, which I fail to understand.
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status service -d hawk -s "p11,p12,p13,p14"
Service p11 is running on instance(s) hawk1
Service p12 is running on instance(s) hawk1
Service p13 is running on instance(s) hawk1
Service p14 is running on instance(s) hawk1

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status service -d hawk -s "p21,p22,p23,p24,p25"
Service p21 is running on instance(s) hawk2
Service p22 is running on instance(s) hawk2
Service p23 is running on instance(s) hawk2
Service p24 is running on instance(s) hawk2
Service p25 is running on instance(s) hawk2

Puzzled that status for services is able to use a delimited list whereas relocation is not.
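
Until such an option exists, the comma-delimited list can simply be fed to a loop, which is essentially what the scripts later in this post do. A one-liner sketch using the services from this demo:

for s in p11 p12 p13 p14
do
srvctl relocate service -d hawk -service $s -oldinst hawk1 -newinst hawk2
done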

I have blogged about new features for service failover: 12.1 Improved Service Failover

Another test shows that it’s working as it should be.

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 ~]$ srvctl stop instance -d hawk -instance hawk1 -failover

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$


[root@racnode-dc1-1 ~]# crsctl stop crs
[root@racnode-dc1-1 ~]# crsctl start crs


[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$

[oracle@racnode-dc1-1 ~]$ srvctl start database -d hawk

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$

However, the requirement is to relocate services rather than fail them over.

Here are scripts and demo for that.

The scripts will only work for a 2-node RAC where each service runs on one instance only.

[oracle@racnode-dc1-1 ~]$ srvctl config service -d hawk |egrep 'Service name|instances'
Service name: p11
Preferred instances: hawk1
Available instances: hawk2
Service name: p12
Preferred instances: hawk1
Available instances: hawk2
Service name: p13
Preferred instances: hawk1
Available instances: hawk2
Service name: p14
Preferred instances: hawk1
Available instances: hawk2
Service name: p21
Preferred instances: hawk2
Available instances: hawk1
Service name: p22
Preferred instances: hawk2
Available instances: hawk1
Service name: p23
Preferred instances: hawk2
Available instances: hawk1
Service name: p24
Preferred instances: hawk2
Available instances: hawk1
Service name: p25
Preferred instances: hawk2
Available instances: hawk1
[oracle@racnode-dc1-1 ~]$

DEMO:

[oracle@racnode-dc1-1 rac_relocate]$ ls *relocate*.sh
relocate_service.sh  validate_relocate_service.sh

[oracle@racnode-dc1-1 rac_relocate]$ ls *restore*.sh
restore_service_instance1.sh  restore_service_instance2.sh
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ SAVE SERVICES LOCATION AND PREVENT ACCIDENTAL OVERWRITE
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v > /tmp/service.org

[oracle@racnode-dc1-1 rac_relocate]$ chmod 400 /tmp/service.org; ll /tmp/service.org; cat /tmp/service.org
-r-------- 1 oracle oinstall 222 Jul 18 14:54 /tmp/service.org
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v > /tmp/service.org
-bash: /tmp/service.org: Permission denied
[oracle@racnode-dc1-1 rac_relocate]$

========================================================================
+++++++ RELOCATE SERVICES FROM INSTANCE 1 TO 2

Validate is similar to RMAN validate.
No relocation is performed; only the srvctl commands are printed for verification.
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ ./validate_relocate_service.sh
./validate_relocate_service.sh: line 4: 1: ---> USAGE: ./validate_relocate_service.sh -db_unique_name -oldinst# -newinst#

[oracle@racnode-dc1-1 rac_relocate]$ ./validate_relocate_service.sh hawk 1 2
+ OUTF=/tmp/service_1.conf
+ srvctl status instance -d hawk -instance hawk1 -v
+ ls -l /tmp/service_1.conf
-rw-r--r-- 1 oracle oinstall 109 Jul 18 14:59 /tmp/service_1.conf
+ cat /tmp/service_1.conf
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
+ set +x

**************************************
***** SERVICES THAT WILL BE RELOCATED:
**************************************
srvctl relocate service -d hawk -service p11 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p12 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p13 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p14 -oldinst hawk1 -newinst hawk2


[oracle@racnode-dc1-1 rac_relocate]$ ./relocate_service.sh hawk 1 2
-rw-r--r-- 1 oracle oinstall 109 Jul 18 15:00 /tmp/service_1.conf
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
+ srvctl relocate service -d hawk -service p11 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p12 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p13 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p14 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl status instance -d hawk -instance hawk1 -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
+ srvctl status instance -d hawk -instance hawk2 -v
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ set +x
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ RELOCATE SERVICES FROM INSTANCE 2 TO 1
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ ./relocate_service.sh hawk 2 1
-rw-r--r-- 1 oracle oinstall 129 Jul 18 15:02 /tmp/service_2.conf
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ srvctl relocate service -d hawk -service p11 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p12 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p13 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p14 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p21 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p22 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p23 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p24 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p25 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl status instance -d hawk -instance hawk2 -v
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.
+ srvctl status instance -d hawk -instance hawk1 -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ set +x
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ RESTORE SERVICES FOR INSTANCE
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ ./restore_service_instance2.sh
./restore_service_instance2.sh: line 4: 1: ---> USAGE: ./restore_service_instance2.sh -db_unique_name

[oracle@racnode-dc1-1 rac_relocate]$ ./restore_service_instance2.sh hawk
+ srvctl relocate service -d hawk -service p21 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p22 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p23 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p24 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p25 -oldinst hawk1 -newinst hawk2
+ set +x

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 rac_relocate]$

CODE:


========================================================================
+++++++ validate_relocate_service.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OLD=${2:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
NEW=${3:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
set -x
OUTF=/tmp/service_${OLD}.conf
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v > $OUTF
ls -l $OUTF;cat $OUTF
set +x
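# Field 11 of the status line is the comma-delimited service list; the second awk strips the trailing period.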
export svc=`tail -1 $OUTF | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
echo
echo "**************************************"
echo "***** SERVICES THAT WILL BE RELOCATED:"
echo "**************************************"
for s in ${svc}
do
echo "srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}${OLD} -newinst ${DB}${NEW}"
done
exit

========================================================================
+++++++ relocate_service.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OLD=${2:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
NEW=${3:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OUTF=/tmp/service_${OLD}.conf
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v > $OUTF
ls -l $OUTF;cat $OUTF
export svc=`tail -1 $OUTF | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}${OLD} -newinst ${DB}${NEW}
set +x
done
set -x
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v
srvctl status instance -d ${DB} -instance ${DB}${NEW} -v
set +x
exit

========================================================================
+++++++ restore_service_instance1.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name"}
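# head -1 of the saved /tmp/service.org is instance 1's original line; relocate its services back from instance 2.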
export svc=`head -1 /tmp/service.org | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}2 -newinst ${DB}1
set +x
done
exit

========================================================================
+++++++ restore_service_instance2.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name"}
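# tail -1 of the saved /tmp/service.org is instance 2's original line; relocate its services back from instance 1.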
export svc=`tail -1 /tmp/service.org | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}1 -newinst ${DB}2
set +x
done
exit

November 23, 2017

CRS-2674: Start of dbfs_mount failed

Filed under: 12c,GoldenGate,oracle,RAC — mdinh @ 1:04 am

$ crsctl start resource dbfs_mount
CRS-2672: Attempting to start 'dbfs_mount' on 'node2'
CRS-2672: Attempting to start 'dbfs_mount' on 'node1'
CRS-2674: Start of 'dbfs_mount' on 'node1' failed
CRS-2679: Attempting to clean 'dbfs_mount' on 'node1'
CRS-2674: Start of 'dbfs_mount' on 'node2' failed
CRS-2679: Attempting to clean 'dbfs_mount' on 'node2'
CRS-2681: Clean of 'dbfs_mount' on 'node1' succeeded
CRS-2681: Clean of 'dbfs_mount' on 'node2' succeeded
CRS-4000: Command Start failed, or completed with errors.

Check to make sure the DBFS_USER password is not expired.
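
A quick way to check, assuming the DBFS repository owner is a schema named DBFS_USER (substitute the actual owner in your environment):

sqlplus -s / as sysdba <<'EOF'
select username, account_status, expiry_date
from dba_users
where username = 'DBFS_USER';
EOF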
