Thinking Out Loud

April 22, 2019

Final Conclusion for 18c Cluster upgrade state is [NORMAL]

Filed under: 18c,RAC — mdinh @ 3:46 am

Finally, I have reached a point that I can live with for the Grid 18c upgrade, because the process runs to completion without any errors or intervention.

Note that the ACFS volume is created in the CRS disk group, which may not be ideal for production.

Rapid Home Provisioning Server is configured but not running.

The outcome is different depending on whether the upgrade is performed via the GUI or in silent mode, as demonstrated in 18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

Rene Antunez also demonstrates another method in UPGRADE ORACLE GI FROM 12.1 TO 18.5 FAILS AND LEAVES CRS WITH STATUS OF UPGRADE FINAL

While we both encountered the same error, “Upgrading RHP Repository failed”, we accomplished the same result via different courses of action.

The unexplained and unanswered question is, “Why is the RHP Repository being upgraded at all?”

Ultimately, it is cluvfy that changes the cluster upgrade state, as shown in gridSetupActions2019-04-21_02-10-47AM.log:

INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Executing RHPUPGRADE

INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'

INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Executing CLUVFY
INFO: [Apr 21, 2019 2:46:34 AM] Command /u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all

INFO: [Apr 21, 2019 2:51:37 AM] Completed Plugin named: cvu
INFO: [Apr 21, 2019 2:51:38 AM] ConfigClient.saveSession method called
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'

INFO: [Apr 21, 2019 2:51:38 AM] Successfully executed the flow in SILENT mode
INFO: [Apr 21, 2019 2:51:39 AM] inventory location is/u01/app/oraInventory
INFO: [Apr 21, 2019 2:51:39 AM] Exit Status is 0
INFO: [Apr 21, 2019 2:51:39 AM] Shutdown Oracle Grid Infrastructure 18c Installer

I would suggest running the last step via the GUI, if feasible, rather than in silent mode, to see what is happening:

/u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp
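
Whichever mode is used, it is worth capturing the cluster upgrade state immediately before and after this step. A minimal sketch, using only commands shown elsewhere in this post:

### Sketch: verify the cluster upgrade state around the -executeConfigTools step
. /media/patch/gi.env
crsctl query crs activeversion -f    # [UPGRADE FINAL] until the RHP/cluvfy steps complete
### ... run gridSetup.sh -executeConfigTools (GUI or silent) ...
crsctl query crs activeversion -f    # expect [NORMAL] once cluvfy has run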

So how did I get myself into this predicament? I followed blindly. I trusted but did not verify.

18.1.0.0 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.4 and later on Oracle Linux (Doc ID 2369422.1)

Step 2.1 - Understand how MGMTDB is handled during upgrade

****************************************************************************************************
Upgrading GI 18.1 does not require upgrading MGMTDB nor does it require installing a MGMTDB if it currently does not exist. 
It's the user's discretion to maintain and upgrade the MGMTDB for their application needs.
****************************************************************************************************

Note: MGMTDB is required when using Rapid Host Provisioning. 
The Cluster Health Monitor functionality will not work without MGMTDB configured.
If you consider to install a MGMTDB later,  it is configured to use 1G of SGA and 500 MB of PGA. 
MGMTDB SGA will not be allocated in hugepages (this is because its init.ora setting 'use_large_pages' is set to false).
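
If MGMTDB is present, the memory settings mentioned in the note can be verified directly. A minimal sketch, assuming the grid environment file used throughout this post and that the local node hosts the -MGMTDB instance:

### Sketch: check MGMTDB memory settings (run on the node hosting -MGMTDB)
. /media/patch/gi.env
export ORACLE_SID='-MGMTDB'
sqlplus -S / as sysdba <<'EOF'
show parameter sga_target
show parameter pga_aggregate_target
show parameter use_large_pages
EOF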

The following parameters from (Doc ID 2369422.1) were the root cause of all the issues in my test cases.

Because MGMTDB is not required, it seemed to make sense to set the following, but doing so resulted in chaos.

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false

How To Setup a Rapid Home Provisioning (RHP) Server and Client (Doc ID 2097026.1)

Starting with Oracle Grid Infrastructure 18.1.0.0.0, when you install Oracle Grid Infrastructure, the Rapid Home Provisioning Server is configured, by default, in the local mode to support the local switch home capability. 

The Rapid Home Provisioning Server is configured by default, and there does not appear to be a documented or easily found option to skip the installation or bypass this default.

RHPS is used interchangeably here to mean both Rapid Home Provisioning Server and Rapid Home Provisioning Service.

gridsetup_upgrade.rsp is used for the upgrade; the pertinent info is shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

Here is what worked from end to end without any failure or intervention.
The response file was ***not*** modified for each of the test cases.

/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
-applyRU /media/patch/Jan2019/28828717 \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp

Here is what the environment looks like after the 18c GI upgrade.

Notice ACFS is configured for RHP.

[oracle@racnode-dc1-1 ~]$ /media/patch/crs_Query.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
+ exit

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc1-1 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.CRS.GHCHKPT.advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.chad
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
ora.helper
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.proxy_advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-1            169.254.7.214 172.16
                                                             .9.10,STABLE
ora.asm
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc1-1            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-1            Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ srvctl config mgmtdb -all
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
  /u01/18.3.0.0/grid on node racnode-dc1-1
Oracle user: oracle
Spfile: +CRS/_MGMTDB/PARAMETERFILE/spfile.271.1006137461
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: GIMR_DSCREP_10
PDB service: GIMR_DSCREP_10
Cluster name: vbox-rac-dc1
Management database is enabled.
Management database is individually enabled on nodes:
Management database is individually disabled on nodes:
Database instance: -MGMTDB

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.crs.ghchkpt.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w 'TYPE = ora.acfs.type' -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61
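
The same volume can be cross-checked from the ASM side. A sketch, using the disk group and volume names that also appear in the acfsutil registry output below:

### Sketch: cross-check the GHCHKPT volume from ASM
. /media/patch/gi.env
asmcmd volinfo -G CRS GHCHKPT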

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init
NAME=ora.drivers.acfs
TYPE=ora.drivers.acfs.type
TARGET=ONLINE
STATE=ONLINE on racnode-dc1-1

[oracle@racnode-dc1-1 ~]$ mount|egrep -i 'asm|ghchkpt'
oracleasmfs on /dev/oracleasm type oracleasmfs (rw,relatime)

[oracle@racnode-dc1-1 ~]$ acfsutil version
acfsutil version: 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ acfsutil registry
Mount Object:
  Device: /dev/asm/ghchkpt-61
  Mount Point: /opt/oracle/rhp_images/chkbase
  Disk Group: CRS
  Volume: GHCHKPT
  Options: none
  Nodes: all
  Accelerator Volumes:

[oracle@racnode-dc1-1 ~]$ acfsutil info fs
acfsutil info fs: ACFS-03036: no mounted ACFS file systems

[oracle@racnode-dc1-1 ~]$ acfsutil info storage
Diskgroup      Consumer      Space     Size With Mirroring  Usable Free  %Free   Path
CRS                          59.99              59.99          34.95       58%
DATA                         99.99              99.99          94.76       94%
FRA                          59.99              59.99          59.43       99%
----
unit of measurement: GB

[root@racnode-dc1-1 ~]# srvctl start filesystem -device /dev/asm/ghchkpt-61
PRCA-1138 : failed to start one or more file system resources:
CRS-2501: Resource 'ora.crs.ghchkpt.acfs' is disabled
[root@racnode-dc1-1 ~]#
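
The start fails because the resource itself is disabled. If the RHP file system were actually needed, it could presumably be enabled first and then started; a sketch, not something required for this environment:

### Sketch: enable and start the ghchkpt ACFS file system (only if RHP is really needed)
srvctl enable filesystem -device /dev/asm/ghchkpt-61
srvctl start  filesystem -device /dev/asm/ghchkpt-61
srvctl status filesystem -device /dev/asm/ghchkpt-61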

[oracle@racnode-dc1-1 ~]$ asmcmd -V
asmcmd version 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    35784                0           35784                        Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304    102396    97036                0           97036                        N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    60856                0           60856                        N  FRA/

[oracle@racnode-dc1-1 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is not running

[oracle@racnode-dc1-1 ~]$ ps -ef|grep [p]mon
oracle    3571     1  0 02:40 ?        00:00:03 mdb_pmon_-MGMTDB
oracle   17109     1  0 Apr20 ?        00:00:04 asm_pmon_+ASM1
oracle   17531     1  0 Apr20 ?        00:00:06 ora_pmon_hawk1
[oracle@racnode-dc1-1 ~]$

Let me show you how convoluted this is.
In my case, it's easy because only 2 actions were performed.
Can you tell which GridSetupAction was performed based on the directory name alone?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 18:59 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 18:56 GridSetupActions2019-04-21_02-10-47AM

This is how you can find out.

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 19:20 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 19:22 GridSetupActions2019-04-21_02-10-47AM

================================================================================
### gridSetup.sh -silent -skipPrereqs -applyRU
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ ll
total 13012
-rw-r----- 1 oracle oinstall   20562 Apr 20 19:09 AttachHome2019-04-20_06-51-48PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall       0 Apr 20 18:59 gridSetupActions2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall 7306374 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall 2374182 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall 3582408 Apr 20 18:59 installerPatchActions_2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall       0 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall       0 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall     157 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall      29 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.out.racnode-dc1-2
drwxrwx--- 2 oracle oinstall    4096 Apr 20 19:01 temp_ob
-rw-r----- 1 oracle oinstall   12467 Apr 20 19:09 time2019-04-20_06-51-48PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep ROOTSH_LOCATION gridSetupActions2019-04-20_06-51-48PM.log
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/rootupgrade.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep "Execute Root Scripts successful" time2019-04-20_06-51-48PM.log
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914

================================================================================
### gridSetup.sh -executeConfigTools -silent
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ ll
total 1116
-rw-r----- 1 oracle oinstall       0 Apr 21 02:10 gridSetupActions2019-04-21_02-10-47AM.err
-rw-r----- 1 oracle oinstall  122568 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall 1004378 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.out
-rw-r----- 1 oracle oinstall     129 Apr 21 02:10 installerPatchActions_2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall    3155 Apr 21 02:51 time2019-04-21_02-10-47AM.log

oracle@racnode-dc1-1:hawk1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep rhprepos *
gridSetupActions2019-04-21_02-10-47AM.log:INFO:  [Apr 21, 2019 2:45:37 AM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep executeSelectedTools gridSetupActions2019-04-21_02-10-47AM.log
INFO:  [Apr 21, 2019 2:11:37 AM] Entering ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate oAggregate=oracle.crs:oracle.crs:18.0.0.0.0:common
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate action assigned
INFO:  [Apr 21, 2019 2:51:38 AM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 21, 2019 2:51:38 AM] Exiting ConfigClient.executeSelectedToolsInAggregate method
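
Based on the markers above, a small helper can classify each GridSetupActions directory without opening the logs by hand. A sketch, assuming the inventory log location used in this post:

### Sketch: classify GridSetupActions directories by the action they performed
for d in /u01/app/oraInventory/logs/GridSetupActions*; do
  echo "== $d"
  if grep -q 'Executing RHPUPGRADE' "$d"/gridSetupActions*.log 2>/dev/null; then
    echo "   gridSetup.sh -executeConfigTools"
  elif grep -q 'ROOTSH_LOCATION' "$d"/gridSetupActions*.log 2>/dev/null; then
    echo "   gridSetup.sh -silent -applyRU (software upgrade)"
  fi
done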

It might be better to use the GUI if available, but be careful.

For OUI installations or the execution of critical scripts, it is recommended to use VNC to avoid problems in case the connection to the server is lost.

I was using X and the connection was lost during the upgrade. It was the kiss of death, with this being the last screen capture.

Rene’s quote:

After looking for information in MOS, there wasn’t much that could lead me on how to solve the issue, just a lot of bugs related to the RHP repository.

I was lucky enough to get on a call with a good friend (@_rickgonzalez ) who is the PM of the RHP and we were able to work through it. So below is what I was able to do to solve the issue.

Also, it was confirmed by them that this is a bug in the upgrade process of 18.X, so hopefully they will be fixing it soon.

I concur and conclude that the process for the GI 18c upgrade is overly complicated, convoluted, contradictory, and not clearly documented, all having to do with MGMTDB and the Rapid Home Provisioning (RHP) repository.

Unless you’re lucky or know someone, good luck with your upgrade.

Lastly, it would be greatly appreciated if you would share your upgrade experiences and/or results.

Did you use GUI or silent?


April 16, 2019

18c Upgrade: Failed gridSetup.sh -executeConfigTools: Cluster upgrade state is [UPGRADE FINAL]

Filed under: 18c,RAC — mdinh @ 9:53 pm

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

This is a multi-part series on the 18c upgrade; I suggest reading the above 2 posts first.

Commands for gridSetup.sh

+ /u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs -applyRU /media/patch/Jan2019/28828717 -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false
Preparing the home to patch...
Applying the patch /media/patch/Jan2019/28828717...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/installerPatchActions_2019-04-16_06-19-12AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/18.3.0.0/grid/install/response/grid_2019-04-16_06-19-12AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/gridSetupActions2019-04-16_06-19-12AM.log

As a root user, execute the following script(s):
        1. /u01/18.3.0.0/grid/rootupgrade.sh

Execute /u01/18.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc1-1, racnode-dc1-2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp [-silent]


+ exit
oracle@racnode-dc1-1::/home/oracle
$

Basically, the error provided is utterly useless.

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
oracle@racnode-dc1-1::/home/oracle

Check logs from directory /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ cd /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ ls -alrt
total 1072
-rw-r----- 1 oracle oinstall     130 Apr 16 12:59 installerPatchActions_2019-04-16_12-59-56PM.log
-rw-r----- 1 oracle oinstall       0 Apr 16 12:59 gridSetupActions2019-04-16_12-59-56PM.err
drwxrwx--- 8 oracle oinstall    4096 Apr 16 13:01 ..
-rw-r----- 1 oracle oinstall 1004378 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.out
-rw-r----- 1 oracle oinstall    2172 Apr 16 13:01 time2019-04-16_12-59-56PM.log ***
-rw-r----- 1 oracle oinstall   73047 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.log ***
drwxrwx--- 2 oracle oinstall    4096 Apr 16 13:01 .

Check time2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ cat time2019-04-16_12-59-56PM.log
 # Message # ElapsedTime # Current Time ( ms )
 # Starting step:INITIALIZE_ACTION of state:init #  0  # 1555412405106
 # Finished step:INITIALIZE_ACTION of state:init # 1 # 1555412405106
 # Starting step:EXECUTE of state:init #  0  # 1555412405108
 # Finished step:EXECUTE of state:init # 3 # 1555412405111
 # Starting step:VALIDATE of state:init #  0  # 1555412405113
 # Finished step:VALIDATE of state:init # 2 # 1555412405115
 # Starting step:TRANSITION of state:init #  0  # 1555412405115
 # Finished step:TRANSITION of state:init # 2 # 1555412405117
 # Starting step:EXECUTE of state:CRSConfigTools #  0  # 1555412405117
 # Finished step:EXECUTE of state:CRSConfigTools # 813 # 1555412405930
 # Starting step:VALIDATE of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:VALIDATE of state:CRSConfigTools # 0 # 1555412405930
 # Starting step:TRANSITION of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:TRANSITION of state:CRSConfigTools # 26591 # 1555412432521
 # Starting step:INITIALIZE_ACTION of state:setup #  0  # 1555412432521
 # Finished step:INITIALIZE_ACTION of state:setup # 0 # 1555412432521
 # Starting step:EXECUTE of state:setup #  0  # 1555412432522
 # Finished step:EXECUTE of state:setup # 6 # 1555412432528
 # Configuration in progress. #  0  # 1555412436788
 # Update Inventory in progress. #  0  # 1555412437768
 # Update Inventory successful. # 52612 # 1555412490380
 # Upgrading RHP Repository in progress. #  0  # 1555412490445

================================================================================
 # Upgrading RHP Repository failed. # 12668 # 1555412503112
================================================================================

 # Starting step:VALIDATE of state:setup #  0  # 1555412503215
 # Finished step:VALIDATE of state:setup # 15 # 1555412503230
 # Starting step:TRANSITION of state:setup #  0  # 1555412503230
 # Finished step:TRANSITION of state:setup # 0 # 1555412503230
 # Starting step:EXECUTE of state:finish #  0  # 1555412503230
 # Finished step:EXECUTE of state:finish # 6 # 1555412503236
 # Starting step:VALIDATE of state:finish #  0  # 1555412503237
 # Finished step:VALIDATE of state:finish # 1 # 1555412503238
 # Starting step:TRANSITION of state:finish #  0  # 1555412503238
 # Finished step:TRANSITION of state:finish # 0 # 1555412503238

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

Check gridSetupActions2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -B2 -A100 'Executing RHPUPGRADE' gridSetupActions2019-04-16_12-59-56PM.log
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn.handleProcess() entered.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: getting configAssistantParmas.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: checking secretArguments.
INFO:  [Apr 16, 2019 1:01:30 PM] No arguments to pass to stdin
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: starting read loop.
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 16, 2019 1:01:43 PM] Exiting ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus SUCCESS_MINUS_RECTOOL to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Calling event ConfigSessionEnding
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.endSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Configuration
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus FAILURE to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] All forked task are completed at state setup
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <setup>

================================================================================
WARNING:  [Apr 16, 2019 1:01:43 PM] [WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
================================================================================

INFO:  [Apr 16, 2019 1:01:43 PM] Advice is CONTINUE
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <setup>
INFO:  [Apr 16, 2019 1:01:43 PM] Verifying route success
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Executing action at state finish
INFO:  [Apr 16, 2019 1:01:43 PM] FinishAction Actions.execute called
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] Completed executing action at state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Moved to state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <finish>
WARNING:  [Apr 16, 2019 1:01:43 PM] Validation disabled for the state finish
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Terminating all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Terminated all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Successfully executed the flow in SILENT mode
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] inventory location is/u01/app/oraInventory
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application

================================================================================
INFO:  [Apr 16, 2019 1:01:43 PM] Exit Status is -1
INFO:  [Apr 16, 2019 1:01:43 PM] Shutdown Oracle Grid Infrastructure 18c Installer
INFO:  [Apr 16, 2019 1:01:43 PM] Unloading Setup Driver
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

The Exit Status of -1 is probably why the cluster upgrade state is [UPGRADE FINAL].

Why is the RHP Repository being upgraded when oracle_install_crs_ConfigureRHPS=false?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -i rhp *
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:04 PM] Setting value for the property:oracle_install_crs_ConfigureRHPS in the bean:CRSInstallSettings
gridSetupActions2019-04-16_12-59-56PM.log: oracle_install_crs_ConfigureRHPS                       false
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Created config job for rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Selecting job named 'Upgrading RHP Repository' for retry
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Started Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository in progress. #  0  # 1555412490445
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository failed. # 12668 # 1555412503112
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

gridsetup_upgrade.rsp is used for the upgrade; the pertinent info is shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

ora.cvu does not report any errors.

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=
oracle@racnode-dc1-1:+ASM1:/home/oracle
$

Run rhprepos upgradeSchema -fromversion 12.1.0.2.0 – FAILED.

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
PRCT-1474 : failed to run 'mgmtca' on node racnode-dc1-2.

oracle@racnode-dc1-1::/home/oracle
$ ps -ef|grep pmon
oracle    9722  4804  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle   10380     1  0 13:46 ?        00:00:01 asm_pmon_+ASM1
oracle   10974     1  0 13:46 ?        00:00:01 apx_pmon_+APX1
oracle   11218     1  0 13:47 ?        00:00:02 ora_pmon_hawk1
oracle@racnode-dc1-1::/home/oracle
$ ssh racnode-dc1-2
Last login: Tue Apr 16 18:44:30 2019

----------------------------------------
Welcome to racnode-dc1-2
OracleLinux 7.3 x86_64

FQDN: racnode-dc1-2.internal.lab
IP:   10.0.2.15

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    2
Memory:    5709 MB
Kernel:    4.1.12-61.1.18.el7uek.x86_64

----------------------------------------

oracle@racnode-dc1-2::/home/oracle
$ ps -ef|grep pmon
oracle    9219     1  0 13:44 ?        00:00:01 asm_pmon_+ASM2
oracle   10113     1  0 13:45 ?        00:00:01 apx_pmon_+APX2
oracle   10619     1  0 13:45 ?        00:00:01 ora_pmon_hawk2
oracle   13200 13178  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle@racnode-dc1-2::/home/oracle
$
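
The mgmtca failure is consistent with MGMTDB not running on either node (no mdb_pmon process above). A quick pre-check before attempting the repository upgrade, as a sketch using standard commands:

### Sketch: confirm MGMTDB exists and is running before rhprepos upgradeSchema
. /media/patch/gi.env
srvctl status mgmtdb
ps -ef | grep [m]db_pmon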

In conclusion, the silent upgrade process is poorly documented at best.

I am starting to wonder whether the following parameters contributed to the issue:

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false

April 15, 2019

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

Filed under: 18c,RAC — mdinh @ 12:54 am

There are/were a lot of discussions about Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL], and how cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade could have changed the cluster upgrade state to [NORMAL].

When running gridSetup.sh -executeConfigTools in silent mode, the next step, cluvfy, is not run.

[oracle@racnode-dc1-1 ~]$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-04-15_01-02-06AM

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
[oracle@racnode-dc1-1 ~]$
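
Since the silent run stops at the failed RHP step, the post-upgrade cluvfy collection can be run by hand afterwards; it is the same command the installer executes as its final step when the flow completes. A sketch:

### Sketch: run the post-upgrade cluvfy collection manually, then re-check the state
/u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
crsctl query crs activeversion -f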

When running gridSetup.sh -executeConfigTools in the GUI, there is an option to ignore the failed 'Upgrading RHP Repository' step and continue to the next step, which runs cluvfy.

I don’t think cluvfy modified the state of the cluster; rather, ora.cvu did, due to the existence of the following files.

[root@racnode-dc1-1 install]# pwd
/u01/app/oracle/crsdata/@global/cvu/baseline/install
[root@racnode-dc1-1 install]# ll
total 36000
-rw-r--r-- 1 oracle oinstall 35958465 Apr 14 06:05 grid_install_12.1.0.2.0.xml
-rw-r--r-- 1 oracle oinstall   901803 Apr 15 01:42 grid_install_18.0.0.0.0.zip
[root@racnode-dc1-1 install]# 
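
To see what ora.cvu actually collected, the 18c baseline archive can be listed; a sketch, assuming unzip is available on the node:

### Sketch: list the contents of the cvu baseline collected after the upgrade
unzip -l /u01/app/oracle/crsdata/@global/cvu/baseline/install/grid_install_18.0.0.0.0.zip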

When checking RESULTS from ora.cvu, there are no errors.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-1 ~]$
[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=
[oracle@racnode-dc1-1 ~]$ 

Hell! What do I know? I am just a RAC novice, and happy the cluster state is what it should be.

gridsetup_upgrade.rsp was used for the upgrade; the pertinent info is shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

April 14, 2019

Update Override OPatch

Filed under: awk_sed_grep — mdinh @ 1:47 pm

A framework to source the GI/DB RAC environment, stored on a shared volume.

[oracle@racnode-dc1-2 patch]$ df -h |grep patch
media_patch              3.7T  442G  3.3T  12% /media/patch

[oracle@racnode-dc1-2 patch]$ ps -ef|grep pmon
oracle    3268  2216  0 15:37 pts/0    00:00:00 grep --color=auto pmon
oracle   11254     1  0 06:33 ?        00:00:02 ora_pmon_hawk2
oracle   19995     1  0 05:52 ?        00:00:02 asm_pmon_+ASM2

[oracle@racnode-dc1-2 patch]$ cat /etc/oratab
+ASM2:/u01/app/12.1.0.1/grid:N
hawk2:/u01/app/oracle/12.1.0.1/db1:N

[oracle@racnode-dc1-2 patch]$ cat gi.env
### Michael Dinh : Mar 26, 2019
### Source RAC GI environment
### Prerequisites for hostname: last char from hostname must be digit
### Allow: prodhost01, racnode-dc1-1
### DisAllow: prod01host
set +x
unset ORACLE_UNQNAME
ORAENV_ASK=NO
h=$(hostname -s)
### Extract last character from hostname to create ORACLE_SID
export ORACLE_SID=+ASM${h:${#h} - 1}
. oraenv <<< $ORACLE_SID
export GRID_HOME=$ORACLE_HOME
env|egrep 'ORA|GRID'
sysresv|tail -1

[oracle@racnode-dc1-2 patch]$ cat hawk.env
### Michael Dinh : Mar 26, 2019
### Source RAC DB environment
### Prerequisites for hostname: last char from hostname must be digit
### Allow: prodhost01, racnode-dc1-1
### DisAllow: prod01host
set +x
unset GRID_HOME
h=$(hostname -s)
### Extract filename without extension (.env)
ORAENV_ASK=NO
export ORACLE_UNQNAME=$(basename $BASH_SOURCE .env)
### Extract last character from hostname to create ORACLE_SID
export ORACLE_SID=$ORACLE_UNQNAME${h:${#h} - 1}
. oraenv <<< $ORACLE_SID
env|egrep 'ORA|GRID'
sysresv|tail -1
[oracle@racnode-dc1-2 patch]$
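
The files above are meant to be sourced, not executed; the SID suffix comes from the last character of the hostname. A quick usage sketch:

### Sketch: on racnode-dc1-2, gi.env yields ORACLE_SID=+ASM2 and hawk.env yields ORACLE_SID=hawk2
. /media/patch/gi.env
. /media/patch/hawk.env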

update_opatch.sh

#!/bin/sh -x
### Overwrite OPatch in the current ORACLE_HOME with the latest p6880880 from the shared volume
update_opatch()
{
set -ex
cd $ORACLE_HOME
$ORACLE_HOME/OPatch/opatch version
unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip ; echo $?
$ORACLE_HOME/OPatch/opatch version
}
ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
### Grid home
. /media/patch/gi.env
update_opatch
### Database home
. /media/patch/hawk.env
update_opatch
exit

Run update_opatch.sh

[oracle@racnode-dc1-1 patch]$ ./update_opatch.sh
+ ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
-rwxrwxrwx 1 vagrant vagrant 107M Feb  1 22:08 /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ . /media/patch/gi.env
++ set +x
The Oracle base has been changed from hawk1 to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"
+ cd /u01/app/12.1.0.1/grid
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ cd /u01/app/oracle/12.1.0.1/db1
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 patch]$


[oracle@racnode-dc1-2 patch]$ ./update_opatch.sh
+ ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
-rwxrwxrwx 1 vagrant vagrant 107M Feb  1 22:08 /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"
+ cd /u01/app/12.1.0.1/grid
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ cd /u01/app/oracle/12.1.0.1/db1
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ exit
[oracle@racnode-dc1-2 patch]$

Create Linux Swap File

Filed under: shell scripting,Vagrant,VirtualBox — mdinh @ 1:26 pm

Currently I am using oravirt (Mikael Sandström) · GitHub (https://github.com/oravirt) vagrant boxes.

The swap is too small; I wanted to increase it for the 18c upgrade test and was tired of doing this manually, so here's a script for that.

#!/bin/sh -x
### Create and activate a 16G swap file, then persist it in /etc/fstab
swapon --show
free -h
rm -fv /swapfile1
dd if=/dev/zero of=/swapfile1 bs=1G count=16
ls -lh /swapfile?
chmod 0600 /swapfile1
mkswap /swapfile1
swapon /swapfile1
swapon --show
free -h
### fstab entry must reference the swap file just created (/swapfile1)
echo "/swapfile1              swap                    swap    defaults        0 0" >> /etc/fstab
cat /etc/fstab
exit

Script in action:

[root@racnode-dc1-2 patch]# ./mkswap.sh
+ swapon --show
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition   2G  33M   -1
+ free -h
              total        used        free      shared  buff/cache   available
Mem:           5.6G        4.0G        114M        654M        1.4G        779M
Swap:          2.0G         33M        2.0G
+ rm -fv /swapfile1
+ dd if=/dev/zero of=/swapfile1 bs=1G count=16
16+0 records in
16+0 records out
17179869184 bytes (17 GB) copied, 42.7352 s, 402 MB/s
+ ls -lh /swapfile1
-rw-r--r-- 1 root root 16G Apr 14 15:18 /swapfile1
+ chmod 0600 /swapfile1
+ mkswap /swapfile1
Setting up swapspace version 1, size = 16777212 KiB
no label, UUID=b084bd5d-e32e-4c15-974f-09f505a0cedc
+ swapon /swapfile1
+ swapon --show
NAME       TYPE      SIZE   USED PRIO
/dev/dm-1  partition   2G 173.8M   -1
/swapfile1 file       16G     0B   -2
+ free -h
              total        used        free      shared  buff/cache   available
Mem:           5.6G        3.9G        1.0G        189M        657M        1.3G
Swap:           17G        173M         17G
+ echo '/root/swapfile1         swap                    swap    defaults        0 0'
+ cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Apr 18 08:50:14 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol-root     /                       xfs     defaults        0 0
UUID=ed2996e5-e077-4e23-83a5-10418226a725 /boot                   xfs     defaults        0 0
/dev/mapper/ol-swap     swap                    swap    defaults        0 0
/dev/vgora/lvora /u01 ext4 defaults 1 2
/root/swapfile1         swap                    swap    defaults        0 0
+ exit
[root@racnode-dc1-2 patch]#

April 13, 2019

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

Filed under: 18c,RAC — mdinh @ 11:13 pm

After upgrading and applying the RU for Grid 18c, the cluster upgrade state was not NORMAL.

The cluster upgrade state is [UPGRADE FINAL], which I have never seen before.

Searching Oracle Support was useless, as I was only able to find the following states:

The cluster upgrade state is [NORMAL]
The cluster upgrade state is [FORCED]
The cluster upgrade state is [ROLLING PATCH]

The following checks were performed after upgrade:

[oracle@racnode-dc1-1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].

[oracle@racnode-dc1-1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].

[oracle@racnode-dc1-1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
[oracle@racnode-dc1-1 ~]$

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-1 ~]#


[oracle@racnode-dc1-2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]

[oracle@racnode-dc1-2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]

[oracle@racnode-dc1-2 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2532936542].

[oracle@racnode-dc1-2 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].

[oracle@racnode-dc1-2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

[root@racnode-dc1-2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-2 ~]#

Check OCR per Grid Infrastructure Upgrade: The cluster upgrade state is [FORCED] (Doc ID 2482606.1).
I was desperate, but OCR was fine.

[root@racnode-dc1-1 ~]# olsnodes -c
vbox-rac-dc1

[root@racnode-dc1-1 ~]# olsnodes -t -a -s -n
racnode-dc1-1   1       Active  Hub     Unpinned
racnode-dc1-2   2       Active  Hub     Unpinned

[root@racnode-dc1-1 ~]# $GRID_HOME/bin/ocrdump /tmp/ocrdump.txt

[root@racnode-dc1-1 ~]# grep SYSTEM.version.hostnames /tmp/ocrdump.txt
[SYSTEM.version.hostnames]
[SYSTEM.version.hostnames.racnode-dc1-1]
[SYSTEM.version.hostnames.racnode-dc1-1.patchlevel]
[SYSTEM.version.hostnames.racnode-dc1-1.site]
[SYSTEM.version.hostnames.racnode-dc1-2]
[SYSTEM.version.hostnames.racnode-dc1-2.patchlevel]
[SYSTEM.version.hostnames.racnode-dc1-2.site]
[root@racnode-dc1-1 ~]#

Thanks to my friend Vlatko J. https://twitter.com/jvlatko

Run cluvfy:

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ which cluvfy
/u01/18.3.0.0/grid/bin/cluvfy

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade

Baseline collected.
Collection report for this execution is saved in file "/u01/app/oracle/crsdata/@global/cvu/baseline/install/grid_install_18.0.0.0.0.zip".

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         Apr 13, 2019 11:05:58 PM
CVU home:                     /u01/18.3.0.0/grid/
User:                         oracle
oracle@racnode-dc1-1:+ASM1:/home/oracle
$

After running cluvfy, the cluster upgrade state is [NORMAL].

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-1 ~]#

[root@racnode-dc1-2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-2 ~]#

March 24, 2019

Playing with ACFS

Filed under: 12c,ACFS — mdinh @ 4:50 pm

The running kernel version is 4.1.12-94.3.9.el7uek.x86_64, while ACFS-9325 reports Driver OS kernel version = 4.1.12-32.el7uek.x86_64, because the kernel was upgraded and the ADVM/ACFS drivers have not been reconfigured since the kernel upgrade (see the sketch after the driver checks below).

[root@racnode-dc1-1 ~]# uname -r
4.1.12-94.3.9.el7uek.x86_64

[root@racnode-dc1-1 ~]# lsmod | grep oracle
oracleacfs           3719168  2
oracleadvm            606208  7
oracleoks             516096  2 oracleacfs,oracleadvm
oracleasm              57344  1

[root@racnode-dc1-1 ~]# modinfo oracleoks
filename:       /lib/modules/4.1.12-94.3.9.el7uek.x86_64/weak-updates/usm/oracleoks.ko
author:         Oracle Corporation
license:        Proprietary
srcversion:     3B8116031A3907D0FFFC8E1
depends:
vermagic:       4.1.12-32.el7uek.x86_64 SMP mod_unload modversions
signer:         Oracle Linux Kernel Module Signing Key
sig_key:        2B:B3:52:41:29:69:A3:65:3F:0E:B6:02:17:63:40:8E:BB:9B:B5:AB
sig_hashalgo:   sha512

[root@racnode-dc1-1 ~]# acfsdriverstate version
ACFS-9325:     Driver OS kernel version = 4.1.12-32.el7uek.x86_64(x86_64).
ACFS-9326:     Driver Oracle version = 181010.

[root@racnode-dc1-1 ~]# acfsdriverstate installed
ACFS-9203: true

[root@racnode-dc1-1 ~]# acfsdriverstate supported
ACFS-9200: Supported

[root@racnode-dc1-1 ~]# acfsroot version_check
ACFS-9316: Valid ADVM/ACFS distribution media detected at: '/u01/app/12.1.0.1/grid/usm/install/Oracle/EL7UEK/x86_64/4.1.12/4.1.12-x86_64/bin'
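
Given the mismatch above, here is a minimal sketch of how the ADVM/ACFS drivers could be rebuilt for the running kernel (an assumption on my part that the node can take a short outage; run as root, one node at a time, using the 12.1 grid home shown above):

# Stop clusterware on this node, rebuild the ADVM/ACFS drivers, then restart.
crsctl stop crs
/u01/app/12.1.0.1/grid/bin/acfsroot install
crsctl start crs
# The driver OS kernel version should now match the running kernel.
acfsdriverstate version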

[root@racnode-dc1-1 ~]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# acfsutil registry
Mount Object:
  Device: /dev/asm/acfs_vol-256
  Mount Point: /ggdata02
  Disk Group: DATA
  Volume: ACFS_VOL
  Options: none
  Nodes: all
[root@racnode-dc1-1 ~]# acfsutil info fs
/ggdata02
    ACFS Version: 12.1.0.2.0
    on-disk version:       39.0
    flags:        MountPoint,Available
    mount time:   Mon Mar 25 16:24:58 2019
    allocation unit:       4096
    volumes:      1
    total size:   10737418240  (  10.00 GB )
    total free:   10569035776  (   9.84 GB )
    file entry table allocation: 49152
    primary volume: /dev/asm/acfs_vol-256
        label:
        state:                 Available
        major, minor:          248, 131073
        size:                  10737418240  (  10.00 GB )
        free:                  10569035776  (   9.84 GB )
        metadata read I/O count:         1087
        metadata write I/O count:        11
        total metadata bytes read:       556544  ( 543.50 KB )
        total metadata bytes written:    12800  (  12.50 KB )
        ADVM diskgroup         DATA
        ADVM resize increment: 536870912
        ADVM redundancy:       unprotected
        ADVM stripe columns:   8
        ADVM stripe width:     1048576
    number of snapshots:  0
    snapshot space usage: 0  ( 0.00 )
    replication status: DISABLED
[root@racnode-dc1-1 ~]#

[oracle@racnode-dc1-1 ~]$ cluvfy comp acfs -n all -f /ggdata02 -verbose

Verifying ACFS Integrity
Task ASM Integrity check started...


Starting check to see if ASM is running on all cluster nodes...

ASM Running check passed. ASM is running on all specified nodes

Confirming that at least one ASM disk group is configured...
Disk Group Check passed. At least one Disk Group configured

Task ASM Integrity check passed...

Task ACFS Integrity check started...

Checking shared storage accessibility...

"/ggdata02" is shared


Shared storage check was successful on nodes "racnode-dc1-1,racnode-dc1-2"

Task ACFS Integrity check passed

UDev attributes check for ACFS started...
Result: UDev attributes check passed for ACFS


Verification of ACFS Integrity was successful.
[oracle@racnode-dc1-1 ~]$

Gather ACFS Volume Info:

[oracle@racnode-dc1-1 ~]$ asmcmd volinfo --all

Diskgroup Name: DATA

         Volume Name: ACFS_VOL
         Volume Device: /dev/asm/acfs_vol-256
         State: ENABLED
         Size (MB): 10240
         Resize Unit (MB): 512
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /ggdata02
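
For reference, a minimal sketch of how a volume and file system like this are typically created in the first place (assuming the same names as above; volcreate runs as the grid owner, the rest as root):

# Create a 10 GB ADVM volume in diskgroup DATA and note its device name.
asmcmd volcreate -G DATA -s 10G ACFS_VOL
asmcmd volinfo -G DATA ACFS_VOL
# Format the volume with ACFS, create the mount point, and register it so it mounts on all nodes.
mkfs -t acfs /dev/asm/acfs_vol-256
mkdir -p /ggdata02
acfsutil registry -a /dev/asm/acfs_vol-256 /ggdata02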

Gather ACFS info using resource name:

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init

NAME=ora.drivers.acfs
TYPE=ora.drivers.acfs.type
TARGET=ONLINE
STATE=ONLINE on racnode-dc1-1

From (asmcmd volinfo --all): Diskgroup Name: DATA

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.dg -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

From (asmcmd volinfo --all): Diskgroup Name: DATA and Volume Name: ACFS_VOL

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.ACFS_VOL.advm -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.ACFS_VOL.advm
               ONLINE  ONLINE       racnode-dc1-1            Volume device /dev/asm/acfs_vol-256 
                                                             is online,STABLE
               ONLINE  ONLINE       racnode-dc1-2            Volume device /dev/asm/acfs_vol-256 
                                                             is online,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.ACFS_VOL.acfs -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.data.acfs_vol.acfs
               ONLINE  ONLINE       racnode-dc1-1            mounted on /ggdata02,STABLE
               ONLINE  ONLINE       racnode-dc1-2            mounted on /ggdata02,STABLE
--------------------------------------------------------------------------------

Gather ACFS info using resource type:

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.volume.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.ACFS_VOL.advm
               ONLINE  ONLINE       racnode-dc1-1            Volume device /dev/asm/acfs_vol-256 
                                                             is online,STABLE
               ONLINE  ONLINE       racnode-dc1-2            Volume device /dev/asm/acfs_vol-256 
                                                             is online,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.acfs.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.data.acfs_vol.acfs
               ONLINE  ONLINE       racnode-dc1-1            mounted on /ggdata02,STABLE
               ONLINE  ONLINE       racnode-dc1-2            mounted on /ggdata02,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.diskgroup.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

March 16, 2019

Playing with oracleasm and ASMLib

Filed under: 12c,ASM,awk_sed_grep — mdinh @ 12:02 am

I forgot about a script I wrote some time ago: Be Friend With awk/sed | ASM Mapping

[root@racnode-dc1-1 ~]# cat /sf_working/scripts/asm_mapping.sh
#!/bin/sh -e
for disk in `/etc/init.d/oracleasm listdisks`
do
oracleasm querydisk -d $disk
#ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
# Alternate option to remove []
ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed 's/[][]//g'|awk -F, '{print $1 ",.*" $2}'`
echo
done

[root@racnode-dc1-1 ~]# /sf_working/scripts/asm_mapping.sh
Disk "CRS01" is a valid ASM disk on device [8,33]
brw-rw---- 1 root    disk      8,  33 Mar 16 10:25 /dev/sdc1

Disk "DATA01" is a valid ASM disk on device [8,49]
brw-rw---- 1 root    disk      8,  49 Mar 16 10:25 /dev/sdd1

Disk "FRA01" is a valid ASM disk on device [8,65]
brw-rw---- 1 root    disk      8,  65 Mar 16 10:25 /dev/sde1

[root@racnode-dc1-1 ~]#
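
A shorter variant of the same mapping (a sketch that relies on querydisk -p printing the backing block device, as shown further down in this post):

# For each ASMLib disk, print the /dev device it is stamped on.
for disk in $(oracleasm listdisks); do
    oracleasm querydisk -p "$disk" | grep '^/dev/'
done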

HOWTO: Which Disks Are Handled by ASMLib Kernel Driver? (Doc ID 313387.1)

[root@racnode-dc1-1 ~]# oracleasm listdisks
CRS01
DATA01
FRA01

[root@racnode-dc1-1 dev]# ls -l /dev/oracleasm/disks
total 0
brw-rw---- 1 oracle dba 8, 33 Mar 15 10:46 CRS01
brw-rw---- 1 oracle dba 8, 49 Mar 15 10:46 DATA01
brw-rw---- 1 oracle dba 8, 65 Mar 15 10:46 FRA01

[root@racnode-dc1-1 dev]# ls -l /dev | grep -E '33|49|65'|grep -E '8'
brw-rw---- 1 root    disk      8,  33 Mar 15 23:47 sdc1
brw-rw---- 1 root    disk      8,  49 Mar 15 23:47 sdd1
brw-rw---- 1 root    disk      8,  65 Mar 15 23:47 sde1

[root@racnode-dc1-1 dev]# /sbin/blkid | grep oracleasm
/dev/sde1: LABEL="FRA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="205115d9-730d-4f64-aedd-d3886e73d123"
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"
/dev/sdc1: LABEL="CRS01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="232e214d-07bb-4f36-aba8-fb215437fb7e"
[root@racnode-dc1-1 dev]#

Various commands to retrieve oracleasm info and more.

[root@racnode-dc1-1 ~]# cat /etc/oracle-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# cat /etc/system-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# uname -r
4.1.12-61.1.18.el7uek.x86_64

[root@racnode-dc1-1 ~]# rpm -q oracleasm-`uname -r`
package oracleasm-4.1.12-61.1.18.el7uek.x86_64 is not installed

[root@racnode-dc1-1 ~]# rpm -qa |grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# oracleasm -V
oracleasm version 2.1.9

[root@racnode-dc1-1 ~]# oracleasm -h
Usage: oracleasm [--exec-path=<exec_path>] <command> [ <args> ]
       oracleasm --exec-path
       oracleasm -h
       oracleasm -V

The basic oracleasm commands are:
    configure        Configure the Oracle Linux ASMLib driver
    init             Load and initialize the ASMLib driver
    exit             Stop the ASMLib driver
    scandisks        Scan the system for Oracle ASMLib disks
    status           Display the status of the Oracle ASMLib driver
    listdisks        List known Oracle ASMLib disks
    querydisk        Determine if a disk belongs to Oracle ASMlib
    createdisk       Allocate a device for Oracle ASMLib use
    deletedisk       Return a device to the operating system
    renamedisk       Change the label of an Oracle ASMlib disk
    update-driver    Download the latest ASMLib driver

[root@racnode-dc1-1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done

[root@racnode-dc1-1 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=dba
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

[root@racnode-dc1-1 ~]# cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=dba

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=""

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block size
# reported by the underlying disk instead of the physical. The default
# is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false

[root@racnode-dc1-1 ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

[root@racnode-dc1-1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@racnode-dc1-1 ~]# oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk on device [8,49]

[root@racnode-dc1-1 ~]# oracleasm querydisk -p DATA01
Disk "DATA01" is a valid ASM disk
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"

[root@racnode-dc1-1 ~]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:CRS01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:DATA01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:FRA01 [104853504 blocks (53684994048 bytes), maxio 1024]

[root@racnode-dc1-1 ~]# lsmod | grep oracleasm
oracleasm              57344  1

[root@racnode-dc1-1 ~]# modinfo oracleasm
filename:       /lib/modules/4.1.12-61.1.18.el7uek.x86_64/kernel/drivers/block/oracleasm/oracleasm.ko
description:    Kernel driver backing the Generic Linux ASM Library.
author:         Joel Becker, Martin K. Petersen <martin.petersen@oracle.com>
version:        2.0.8
license:        GPL
srcversion:     4B3524FDA590726E8D378CB
depends:
intree:         Y
vermagic:       4.1.12-61.1.18.el7uek.x86_64 SMP mod_unload modversions
signer:         Oracle CA Server
sig_key:        AC:74:F5:41:96:B5:9D:EB:61:BA:02:F9:C2:02:8C:9C:E5:94:53:06
sig_hashalgo:   sha512
parm:           use_logical_block_size:Prefer logical block size over physical (Y=logical, N=physical [default]) (bool)

[root@racnode-dc1-1 ~]# ls -la /etc/sysconfig/oracleasm
lrwxrwxrwx 1 root root 24 Mar  5 20:21 /etc/sysconfig/oracleasm -> oracleasm-_dev_oracleasm

[root@racnode-dc1-1 ~]# rpm -qa | grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# rpm -qi oracleasmlib-2.0.4-1.el6.x86_64
Name        : oracleasmlib
Version     : 2.0.4
Release     : 1.el6
Architecture: x86_64
Install Date: Tue 18 Apr 2017 10:56:40 AM CEST
Group       : System Environment/Kernel
Size        : 27192
License     : Oracle Corporation
Signature   : RSA/SHA256, Mon 26 Mar 2012 10:22:51 PM CEST, Key ID 72f97b74ec551f03
Source RPM  : oracleasmlib-2.0.4-1.el6.src.rpm
Build Date  : Mon 26 Mar 2012 10:22:44 PM CEST
Build Host  : ca-build44.us.oracle.com
Relocations : (not relocatable)
Packager    : Joel Becker <joel.becker@oracle.com>
Vendor      : Oracle Corporation
URL         : http://oss.oracle.com/
Summary     : The Oracle Automatic Storage Management library userspace code.
Description :
The Oracle userspace library for Oracle Automatic Storage Management
[root@racnode-dc1-1 ~]#

References for ASMLib

Do you need asmlib?

Oracleasmlib Not Necessary

March 10, 2019

Oracle Resources: VirtualBox, Vagrant, Linux, Docker, Database, Clusterware, GoldenGate, and More

Filed under: oracle — mdinh @ 2:08 am

Oracle Clusterware

Oracle VM VirtualBox

Oracle Linux Vagrant boxes (Might consider building a customized one.)

Oracle Vagrant configuration

Oracle Linux Download

Official Docker configurations

Oracle Database on Docker

Oracle GoldenGate on Docker

Oracle Linux Images for Hands-On Labs

Pre-Built Developer VMs for Oracle VM VirtualBox

VM Virtual Box for Oracle Enterprise Manager Cloud Control 13c Release 1 (13.1.0.0)

VM Virtual Box for Oracle Enterprise Manager Cloud Control 13c Release 2 (13.2.0.0)

Oracle Enterprise Manager Downloads

 

March 4, 2019

Thank You ALL

Filed under: 12c,BUG — mdinh @ 1:43 pm

Oracle is like a box of chocolates; you never know what you are going to get. (Reference: the movie Forrest Gump)

After spending countless hours over the weekend, I am reminded of the quote, "Curiosity killed the cat, but satisfaction brought it back."

Basically, I have been unsuccessful in rebuilding the 12.1.0.1 RAC VM to test and validate another upgrade BUG.

The finding looks to match: root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3

Thank you to all who shared their experiences!!!

==============================================================================================================
+++ FAILED STEP: TASK [oraswgi-install : Run root script after installation (Other Nodes)] ******
==============================================================================================================

Line 771: failed: [racnode-dc1-2] /u01/app/12.1.0.1/grid/root.sh", ["Check /u01/app/12.1.0.1/grid/install/root_racnode-dc1-2_2019-03-04_05-17-39.log for the output of root script"]
TASK [oraswgi-install : Run root script after installation (Other Nodes)] ******


[oracle@racnode-dc1-2 ~]$ cat /u01/app/12.1.0.1/grid/install/root_racnode-dc1-2_2019-03-04_05-17-39.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.1/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
2019/03/04 05:17:39 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2019/03/04 05:18:06 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2019/03/04 05:18:07 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2019/03/04 05:18:44 CLSRSC-507: The root script cannot proceed on this node racnode-dc1-2 because either the first-node operations have not completed on node racnode-dc1-1 or there was an error in obtaining the status of the first-node operations.

Died at /u01/app/12.1.0.1/grid/crs/install/crsutils.pm line 3681.
The command '/u01/app/12.1.0.1/grid/perl/bin/perl -I/u01/app/12.1.0.1/grid/perl/lib -I/u01/app/12.1.0.1/grid/crs/install /u01/app/12.1.0.1/grid/crs/install/rootcrs.pl ' execution failed
[oracle@racnode-dc1-2 ~]$

[oracle@racnode-dc1-2 ~]$ tail /etc/oracle-release
Oracle Linux Server release 7.3
[oracle@racnode-dc1-2 ~]$

[root@racnode-dc1-1 ~]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
[root@racnode-dc1-1 ~]#

==============================================================================================================
+++ CLSRSC-507: The root script cannot proceed on this node NODE2 because either the first-node operations have not completed on node NODE1 or there was an error in obtaining the status of the first-node operations.
==============================================================================================================

https://community.oracle.com/docs/DOC-1011444


Problem Summary 
--------------------------------------------------- 
root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3

Problem Description 
--------------------------------------------------- 
root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3 
OLR initialization - successful 
2017/02/23 05:28:25 CLSRSC-507: The root script cannot proceed on this node NODE2 because either the first-node operations have not completed on node NODE1 or there was an error in obtaining the status of the first-node operations.

Died at /u01/app/12.1.0.2/grid/crs/install/crsutils.pm line 3681. 
The command '/u01/app/12.1.0.2/grid/perl/bin/perl -I/u01/app/12.1.0.2/grid/perl/lib -I/u01/app/12.1.0.2/grid/crs/install /u01/app/12.1.0.2/grid/crs/install/roo

Error Codes 
--------------------------------------------------- 
CLSRSC-507

Problem Category/Subcategory 
--------------------------------------------------- 
Database RAC / Grid Infrastructure / Clusterware/Install / Upgrade / Patching issues

Solution 
---------------------------------------------------

1. Download latest JAN 2017 PSU 12.1.0.2.170117 (Jan 2017) Grid Infrastructure Patch Set Update (GI PSU) - 24917825

https://updates.oracle.com/download/24917825.html 

Platform or Language Linux86-64

2. Unzip downloaded patch as GRID user to directory

unzip p24917825_121020_Linux-x86-64.zip -d 

3. Run deconfig on both nodes

In the 2nd node as root user, 

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

In the 1st node as root user, 
/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force -lastnode

4. Once deconfig is completed, move forward with applying the patch on both nodes in the GRID Home

5. Move to unzip patch directory and apply patch using opatch manual

In 1st node, as grid user

cd /24917825/24732082 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828633 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828643 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/21436941 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

In 2nd node, as grid user

cd /24917825/24732082 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828633 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828643 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/21436941 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

6. Once the patch apply is completed on both nodes, move forward with invoking config.sh

/u01/app/12.1.0.2/grid/crs/config/config.sh

or run root.sh directly on node1 and node2

/u01/app/12.1.0.2/grid/root.sh
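
As a side note, the four opatch apply -local calls in step 5 differ only by sub-patch directory, so they could be scripted. A minimal sketch, assuming the GI PSU was unzipped to a hypothetical staging location such as /u01/stage (run as the grid user on each node):

# Apply each sub-patch of GI PSU 24917825 locally; /u01/stage is an assumed staging directory.
for sub in 24732082 24828633 24828643 21436941; do
    cd /u01/stage/24917825/${sub}
    /u01/app/12.1.0.2/grid/OPatch/opatch apply -local
done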