Thinking Out Loud

June 28, 2019

DGMGRL Using Help To Learn About New Validate Features

Filed under: 18c,Dataguard,dgmgrl — mdinh @ 3:57 pm

Wouldn’t it be nicer and much better if Oracle added an (NF) marker for new features to the help syntax?

DGMGRL for Linux: Release 12.2.0.1.0

[oracle@db-fs-1 bin]$ ./dgmgrl /
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Jun 28 17:49:16 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "orclcdb"
Connected as SYSDG.
DGMGRL> help validate

Performs an exhaustive set of validations for a member

Syntax:

  VALIDATE DATABASE [VERBOSE] <database name>;

  VALIDATE DATABASE [VERBOSE] <database name> DATAFILE <datafile number>
    OUTPUT=<file name>;

  VALIDATE DATABASE [VERBOSE] <database name> SPFILE;

  VALIDATE FAR_SYNC [VERBOSE] <far_sync name>
    [WHEN PRIMARY IS <database name>];

DGMGRL>

DGMGRL for Linux: Release 18.0.0.0.0

[oracle@ADC6160274 GDS]$ dgmgrl /
DGMGRL for Linux: Release 18.0.0.0.0 - Production on Fri Jun 28 15:54:36 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "chi"
Connected as SYSDG.
DGMGRL> help validate

Performs an exhaustive set of validations for a member

Syntax:

  VALIDATE DATABASE [VERBOSE] <database name>;

  VALIDATE DATABASE [VERBOSE] <database name> DATAFILE <datafile number>
    OUTPUT=<file name>;

  VALIDATE DATABASE [VERBOSE] <database name> SPFILE;

  VALIDATE FAR_SYNC [VERBOSE] <far_sync name>
    [WHEN PRIMARY IS <database name>];

  VALIDATE NETWORK CONFIGURATION FOR { ALL | <member name> }; [*** NF ***]

  VALIDATE STATIC CONNECT IDENTIFIER FOR { ALL | <database name> }; [*** NF ***]

DGMGRL>

validate network configuration

DGMGRL> validate network configuration for all;
Connecting to instance "sales" on database "sfo" ...
Connected to "sfo"
Checking connectivity from instance "sales" on database "sfo to instance "sales" on database "chi"...
Succeeded.
Connecting to instance "sales" on database "chi" ...
Connected to "chi"
Checking connectivity from instance "sales" on database "chi to instance "sales" on database "sfo"...
Succeeded.

Oracle Clusterware is not configured on database "sfo".
Connecting to database "sfo" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=SLC02PNY.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=sfo_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "sfo".

Oracle Clusterware is not configured on database "chi".
Connecting to database "chi" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ADC6160274.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=chi_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "chi".

validate static connect identifier

DGMGRL> validate static connect identifier for all;
Oracle Clusterware is not configured on database "sfo".
Connecting to database "sfo" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=SLC02PNY.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=sfo_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "sfo".

Oracle Clusterware is not configured on database "chi".
Connecting to database "chi" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ADC6160274.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=chi_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "chi".

DGMGRL>
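
Both new checks can also be run non-interactively from a shell script; a minimal sketch, assuming OS authentication (dgmgrl /) works as in the sessions above:

# Sketch only: run the 18c validate commands in batch and keep the output.
dgmgrl / <<'EOF' | tee /tmp/dg_validate.log
validate network configuration for all;
validate static connect identifier for all;
EOF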

May 5, 2019

What’s My Cluster Configuration

Filed under: 18c,Grid Infrastructure,RAC — mdinh @ 2:15 pm
[grid@ol7-183-node1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[grid@ol7-183-node1 ~]$ crsctl get cluster configuration
Name                : ol7-183-cluster
Configuration       : Cluster
Class               : Standalone Cluster
Type                : flex
The cluster is not extended.
--------------------------------------------------------------------------------
        MEMBER CLUSTER INFORMATION

      Name       Version        GUID                       Deployed Deconfigured
================================================================================
================================================================================

[grid@ol7-183-node1 ~]$ olsnodes -s -a -t
ol7-183-node1   Active  Hub     Unpinned
ol7-183-node2   Active  Hub     Unpinned

[grid@ol7-183-node1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [70732493] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28090564 28256701 ] have been applied on the local node. The release patch string is [18.3.0.0.0].

[grid@ol7-183-node1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [70732493].
[grid@ol7-183-node1 ~]$
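
For repeated checks, the same queries can be wrapped in one script; a minimal sketch, assuming the author's /media/patch/gi.env helper is available to set the Grid environment:

#!/bin/bash
# Sketch only: collect the cluster configuration details shown above in one pass.
. /media/patch/gi.env
$GRID_HOME/bin/crsctl get cluster configuration
$GRID_HOME/bin/olsnodes -s -a -t
$GRID_HOME/bin/crsctl query crs releasepatch
$GRID_HOME/bin/crsctl query crs activeversion -f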

May 4, 2019

Updating vagrant-boxes/OracleRAC

Filed under: 18c,RAC,Vagrant,VirtualBox — mdinh @ 7:17 pm

I have been playing with oracle/vagrant-boxes/OracleRAC and was finally able to complete an 18c RAC installation using it.

Honestly, I am still fond of Mikael Sandström's oravirt vagrant-boxes, but I was having some trouble with installations and thought I would try something new.

Here are the updates performed for oracle/vagrant-boxes/OracleRAC. They were applied on all nodes; only one node is shown as an example.

/etc/oratab is not updated:

[oracle@ol7-183-node2 ~]$ ps -ef|grep pmon
grid      1155     1  0 14:00 ?        00:00:00 asm_pmon_+ASM2
oracle   18223 18079  0 14:43 pts/0    00:00:00 grep --color=auto pmon
oracle   31653     1  0 14:29 ?        00:00:00 ora_pmon_hawk2

[oracle@ol7-183-node2 ~]$ tail /etc/oratab
#   $ORACLE_SID:$ORACLE_HOME::
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#

Update /etc/oratab [my framework works :=)]

[oracle@ol7-183-node2 ~]$ cat /etc/oratab
+ASM2:/u01/app/18.0.0.0/grid:N
hawk2:/u01/app/oracle/product/18.0.0.0/dbhome_1:N
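
The framework's exact mechanism is not shown here; a minimal sketch of appending the same two entries by hand (as root on node2, values taken from the output above) might look like:

# Sketch only: add the missing oratab entries if they are not already present.
grep -q '^+ASM2:' /etc/oratab || echo '+ASM2:/u01/app/18.0.0.0/grid:N' >> /etc/oratab
grep -q '^hawk2:' /etc/oratab || echo 'hawk2:/u01/app/oracle/product/18.0.0.0/dbhome_1:N' >> /etc/oratab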

[oracle@ol7-183-node2 ~]$ /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance not alive for sid "+ASM2"

[oracle@ol7-183-node2 ~]$ /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
Oracle Instance alive for sid "hawk2"
[oracle@ol7-183-node2 ~]$

sudo for grid/oracle is not enabled:

[oracle@ol7-183-node2 ~]$ sudo /media/patch/findhomes.sh
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for oracle:
oracle is not in the sudoers file.  This incident will be reported.
[oracle@ol7-183-node2 ~]$ exit

Enable sudo for grid/oracle (shown here for oracle; the same applies to grid):

[vagrant@ol7-183-node2 ~]$ sudo su -
[root@ol7-183-node2 ~]# visudo
[root@ol7-183-node2 ~]# grep oracle /etc/sudoers
oracle  ALL=(ALL)       ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
[root@ol7-183-node2 ~]# logout

[vagrant@ol7-183-node2 ~]$ sudo su - oracle
Last login: Sat May  4 14:43:46 -04 2019 on pts/0

[oracle@ol7-183-node2 ~]$ sudo /media/patch/findhomes.sh
   PID NAME                 ORACLE_HOME
  1155 asm_pmon_+asm2       /u01/app/18.0.0.0/grid/
 31653 ora_pmon_hawk2       /u01/app/oracle/product/18.0.0.0/dbhome_1/
[oracle@ol7-183-node2 ~]$
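
As an alternative to editing /etc/sudoers directly with visudo, the same grants can live in a drop-in file; a minimal sketch (not what was done above), run as root:

# Sketch only: passwordless sudo for grid and oracle via /etc/sudoers.d.
cat > /etc/sudoers.d/oracle-grid <<'EOF'
grid    ALL=(ALL)       NOPASSWD: ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
EOF
chmod 0440 /etc/sudoers.d/oracle-grid
visudo -c    # validate sudoers syntax before relying on it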

Login banner:

dinh@CMWPHV1 MINGW64 /c/vagrant-boxes/OracleRAC (master)
$ vagrant ssh node2
Last login: Sat May  4 14:43:40 2019 from 10.0.2.2

Welcome to Oracle Linux Server release 7.6 (GNU/Linux 4.14.35-1844.1.3.el7uek.x86_64)

The Oracle Linux End-User License Agreement can be viewed here:

    * /usr/share/eula/eula.en_US

For additional packages, updates, documentation and community help, see:

    * http://yum.oracle.com/

[vagrant@ol7-183-node2 ~]$

Remove login banner:

[root@ol7-183-node2 ~]# cp -v /etc/motd /etc/motd.bak
‘/etc/motd’ -> ‘/etc/motd.bak’
[root@ol7-183-node2 ~]# cat /dev/null > /etc/motd
[root@ol7-183-node2 ~]# logout
[vagrant@ol7-183-node2 ~]$ logout
Connection to 127.0.0.1 closed.

dinh@CMWPHV1 MINGW64 /c/vagrant-boxes/OracleRAC (master)
$ vagrant ssh node2
Last login: Sat May  4 15:00:06 2019 from 10.0.2.2
[vagrant@ol7-183-node2 ~]$

Mandatory GIMR is not installed:

    node1: -----------------------------------------------------------------
    node1: INFO: 2019-05-04 14:01:02: Make GI config command
    node1: -----------------------------------------------------------------
    node1: -----------------------------------------------------------------
    node1: INFO: 2019-05-04 14:01:02: Grid Infrastructure configuration as 'RAC'
    node1: INFO: 2019-05-04 14:01:02: - ASM library   : ASMLIB
    node1: INFO: 2019-05-04 14:01:02: - without MGMTDB: true
    node1: -----------------------------------------------------------------
    node1: Launching Oracle Grid Infrastructure Setup Wizard...

[oracle@ol7-183-node1 ~]$ ps -ef|grep pmon
grid      7294     1  0 13:53 ?        00:00:00 asm_pmon_+ASM1
oracle   10986     1  0 14:29 ?        00:00:00 ora_pmon_hawk1
oracle   28642 28586  0 15:12 pts/0    00:00:00 grep --color=auto pmon

[oracle@ol7-183-node1 ~]$ ssh ol7-183-node2
Last login: Sat May  4 14:48:20 2019
[oracle@ol7-183-node2 ~]$ ps -ef|grep pmon
grid      1155     1  0 14:00 ?        00:00:00 asm_pmon_+ASM2
oracle   29820 29711  0 15:12 pts/0    00:00:00 grep --color=auto pmon
oracle   31653     1  0 14:29 ?        00:00:00 ora_pmon_hawk2
[oracle@ol7-183-node2 ~]$

Create GIMR:
How to Move/Recreate GI Management Repository (GIMR / MGMTDB) to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)
MDBUtil: GI Management Repository configuration tool (Doc ID 2065175.1)

[grid@ol7-183-node1 ~]$ ps -ef|grep pmon
grid      2286 27832  0 16:35 pts/0    00:00:00 grep --color=auto pmon
grid      7294     1  0 13:53 ?        00:00:00 asm_pmon_+ASM1
oracle   10986     1  0 14:29 ?        00:00:00 ora_pmon_hawk1

[grid@ol7-183-node1 ~]$ ll /tmp/mdbutil.*
-rwxr-xr-x. 1 grid oinstall 67952 May  4 16:02 /tmp/mdbutil.pl

[grid@ol7-183-node1 ~]$ /tmp/mdbutil.pl --status
mdbutil.pl version : 1.95
2019-05-04 16:35:44: I Checking CHM status...
2019-05-04 16:35:46: I Listener MGMTLSNR is configured and running on ol7-183-node1
2019-05-04 16:35:49: W MGMTDB is not configured on ol7-183-node1!
2019-05-04 16:35:49: W Cluster Health Monitor (CHM) is configured and not running on ol7-183-node1!

[grid@ol7-183-node1 ~]$ . /media/patch/gi.env
The Oracle base remains unchanged with value /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[grid@ol7-183-node1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     65520    63108                0           63108              0             Y  DATA/
MOUNTED  NORMAL  N         512             512   4096  4194304     16368    15260             4092            5584              0             N  RECO/

[grid@ol7-183-node1 ~]$ /tmp/mdbutil.pl --addmdb --target=+DATA -debug
mdbutil.pl version : 1.95
2019-05-04 16:36:57: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status diskgroup -g DATA
2019-05-04 16:36:58: D Exit code: 0
2019-05-04 16:36:58: D Output of last command execution:
Disk Group DATA is running on ol7-183-node1,ol7-183-node2
2019-05-04 16:36:58: I Starting To Configure MGMTDB at +DATA...
2019-05-04 16:36:58: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status mgmtlsnr
2019-05-04 16:36:59: D Exit code: 0
2019-05-04 16:36:59: D Output of last command execution:
Listener MGMTLSNR is enabled
2019-05-04 16:36:59: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status mgmtdb
2019-05-04 16:37:00: D Exit code: 1
2019-05-04 16:37:00: D Output of last command execution:
PRCD-1120 : The resource for database _mgmtdb could not be found.
2019-05-04 16:37:00: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status mgmtdb
2019-05-04 16:37:01: D Exit code: 1
2019-05-04 16:37:01: D Output of last command execution:
PRCD-1120 : The resource for database _mgmtdb could not be found.
2019-05-04 16:37:01: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl stop mgmtlsnr
2019-05-04 16:37:05: D Exit code: 0
2019-05-04 16:37:05: D Output of last command execution:
2019-05-04 16:37:05: D Executing: /u01/app/18.0.0.0/grid/bin/crsctl query crs activeversion
2019-05-04 16:37:05: D Exit code: 0
2019-05-04 16:37:05: D Output of last command execution:
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
2019-05-04 16:37:05: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl enable qosmserver
2019-05-04 16:37:06: D Exit code: 2
2019-05-04 16:37:06: D Output of last command execution:
PRKF-1321 : QoS Management Server is already enabled.
2019-05-04 16:37:06: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl start qosmserver
2019-05-04 16:37:07: D Exit code: 2
2019-05-04 16:37:07: D Output of last command execution:
PRCC-1014 : qosmserver was already running
2019-05-04 16:37:07: I Container database creation in progress... for GI 18.0.0.0.0
2019-05-04 16:37:07: D Executing: /u01/app/18.0.0.0/grid/bin/dbca  -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName DATA -datafileJarLocation /u01/app/18.0.0.0/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
2019-05-04 16:55:03: D Exit code: 0
2019-05-04 16:55:03: D Output of last command execution:
Prepare for db operation
2019-05-04 16:55:03: I Plugable database creation in progress...
2019-05-04 16:55:03: D Executing: /u01/app/18.0.0.0/grid/bin/mgmtca -local
2019-05-04 16:59:32: D Exit code: 0
2019-05-04 16:59:32: D Output of last command execution:
2019-05-04 16:59:32: D Executing: scp /tmp/mdbutil.pl ol7-183-node1:/tmp/
2019-05-04 16:59:33: D Exit code: 0
2019-05-04 16:59:33: D Output of last command execution:
2019-05-04 16:59:33: I Executing "/tmp/mdbutil.pl --addchm" on ol7-183-node1 as root to configure CHM.
2019-05-04 16:59:33: D Executing: ssh root@ol7-183-node1 "/tmp/mdbutil.pl --addchm"
root@ol7-183-node1's password:
2019-05-04 16:59:42: D Exit code: 1
2019-05-04 16:59:42: D Output of last command execution:
mdbutil.pl version : 1.95
2019-05-04 16:59:42: W Not able to execute "/tmp/mdbutil.pl --addchm" on ol7-183-node1 as root to configure CHM.
2019-05-04 16:59:42: D Executing: scp /tmp/mdbutil.pl ol7-183-node2:/tmp/
2019-05-04 16:59:43: D Exit code: 0
2019-05-04 16:59:43: D Output of last command execution:
2019-05-04 16:59:43: I Executing "/tmp/mdbutil.pl --addchm" on ol7-183-node2 as root to configure CHM.
2019-05-04 16:59:43: D Executing: ssh root@ol7-183-node2 "/tmp/mdbutil.pl --addchm"
root@ol7-183-node2's password:
2019-05-04 16:59:51: D Exit code: 1
2019-05-04 16:59:51: D Output of last command execution:
mdbutil.pl version : 1.95
2019-05-04 16:59:51: W Not able to execute "/tmp/mdbutil.pl --addchm" on ol7-183-node2 as root to configure CHM.
2019-05-04 16:59:51: I MGMTDB & CHM configuration done!

[root@ol7-183-node1 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[root@ol7-183-node1 ~]# crsctl start res ora.crf -init
CRS-2501: Resource 'ora.crf' is disabled
CRS-4000: Command Start failed, or completed with errors.

[root@ol7-183-node1 ~]# crsctl modify res ora.crf -attr ENABLED=1 -init

[root@ol7-183-node1 ~]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'ol7-183-node1'
CRS-2676: Start of 'ora.crf' on 'ol7-183-node1' succeeded

[root@ol7-183-node1 ~]# crsctl stat res ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on ol7-183-node1

[root@ol7-183-node1 ~]# ll /tmp/mdbutil.pl
-rwxr-xr-x. 1 grid oinstall 67952 May  4 16:59 /tmp/mdbutil.pl
[root@ol7-183-node1 ~]# /tmp/mdbutil.pl --addchm
mdbutil.pl version : 1.95
2019-05-04 17:02:54: I Starting To Configure CHM...
2019-05-04 17:02:55: I CHM has already been configured!
2019-05-04 17:02:57: I CHM Configure Successfully Completed!
[root@ol7-183-node1 ~]#

[root@ol7-183-node1 ~]# ssh ol7-183-node2
Last login: Sat May  4 16:28:28 2019
[root@ol7-183-node2 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM2"
[root@ol7-183-node2 ~]# crsctl stat res ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=OFFLINE
STATE=OFFLINE

[root@ol7-183-node2 ~]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@ol7-183-node2 ~]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'ol7-183-node2'
CRS-2676: Start of 'ora.crf' on 'ol7-183-node2' succeeded
[root@ol7-183-node2 ~]# crsctl stat res ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on ol7-183-node2

[root@ol7-183-node2 ~]# ll /tmp/mdbutil.pl
-rwxr-xr-x. 1 grid oinstall 67952 May  4 16:59 /tmp/mdbutil.pl
[root@ol7-183-node2 ~]# /tmp/mdbutil.pl --addchm
mdbutil.pl version : 1.95
2019-05-04 17:04:41: I Starting To Configure CHM...
2019-05-04 17:04:41: I CHM has already been configured!
2019-05-04 17:04:44: I CHM Configure Successfully Completed!

[root@ol7-183-node2 ~]# logout
Connection to ol7-183-node2 closed.
[root@ol7-183-node1 ~]# logout

[grid@ol7-183-node1 ~]$ /tmp/mdbutil.pl --status
mdbutil.pl version : 1.95
2019-05-04 17:04:54: I Checking CHM status...
2019-05-04 17:04:56: I Listener MGMTLSNR is configured and running on ol7-183-node1
2019-05-04 17:04:59: I Database MGMTDB is configured and running on ol7-183-node1
2019-05-04 17:05:00: I Cluster Health Monitor (CHM) is configured and running
--------------------------------------------------------------------------------
CHM Repository Path = +DATA/_MGMTDB/881717C3357B4146E0536538A8C05D2C/DATAFILE/sysmgmtdata.291.1007398657
MGMTDB space used on DG +DATA = 23628 Mb
--------------------------------------------------------------------------------
[grid@ol7-183-node1 ~]$

Due to role separation, the lspatches.sh script breaks: when grid runs OPatch from the database home (or oracle from the grid home), OPatch cannot write its history file under the other owner's cfgtoollogs, as the output below shows. The workaround was to relax permissions on cfgtoollogs in both homes.
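
The script itself is not listed in the post; from the set -x trace that follows, a minimal sketch of what /media/patch/lspatches.sh does might be:

#!/bin/bash -x
# Sketch only: inferred from the trace output below, not the author's actual script.
. /media/patch/gi.env                 # Grid environment (ORACLE_HOME = GRID_HOME)
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lspatches
. /media/patch/hawk.env               # switch to the database home environment
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lspatches
exit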

[grid@ol7-183-node2 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
+ /u01/app/18.0.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/18.0.0.0/grid/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)
28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)
28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch lspatches

====================================================================================================
OPatch could not create/open history file for writing.
====================================================================================================

27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ exit
[grid@ol7-183-node2 ~]$

====================================================================================================

[root@ol7-183-node2 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM2"
[root@ol7-183-node2 ~]# chmod 775 -R $ORACLE_HOME/cfgtoollogs

[root@ol7-183-node2 ~]# . /media/patch/hawk.env
The Oracle base has been changed from /u01/app/grid to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
Oracle Instance alive for sid "hawk2"
[root@ol7-183-node2 ~]# chmod 775 -R $ORACLE_HOME/cfgtoollogs

====================================================================================================

[vagrant@ol7-183-node2 ~]$ sudo su - grid /media/patch/lspatches.sh
Last login: Sat May  4 18:16:38 -04 2019
+ /u01/app/18.0.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/18.0.0.0/grid/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)
28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)
28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ exit
[vagrant@ol7-183-node2 ~]$ sudo su - oracle /media/patch/lspatches.sh
Last login: Sat May  4 18:15:18 -04 2019 on pts/0
+ /u01/app/18.0.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/18.0.0.0/grid/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)
28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)
28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ exit
[vagrant@ol7-183-node2 ~]$

I will update this post as I progress.

May 3, 2019

GRID Out Of Place (OOP) Rollback Disaster

Filed under: 18c,Grid Infrastructure,RAC — mdinh @ 4:45 pm

Now I understand the hesitation to use Oracle's new features, especially anything automatic.

It may just be simpler and less stressful to perform the task manually, keeping control and knowing exactly what is being executed and validated.

GRID Out Of Place (OOP) patching completed successfully for 18.6.0.0.0.

GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1

Here is an example of the inventory after patching.

+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

Running cluvfy was successful too.

[oracle@racnode-dc1-1 ~]$ cluvfy stage -post crsinst -n racnode-dc1-1,racnode-dc1-2 -verbose

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         Apr 30, 2019 8:17:49 PM
CVU home:                     /u01/18.3.0.0/grid_2/
User:                         oracle
[oracle@racnode-dc1-1 ~]$

GRID OOP Rollback Patching completed successfully for node1.

[root@racnode-dc1-1 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode-dc1-1 ~]#
[root@racnode-dc1-1 ~]# echo $GRID_HOME
/u01/18.3.0.0/grid_2
[root@racnode-dc1-1 ~]# $GRID_HOME/OPatch/opatchauto rollback -switch-clone -logLevel FINEST

OPatchauto session is initiated at Fri May  3 01:06:47 2019

System initialization log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchautodb/systemconfig2019-05-03_01-06-50AM.log.

Session log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/opatchauto2019-05-03_01-08-00AM.log
The id for this session is R47N

Update nodelist in the inventory for oracle home /u01/18.3.0.0/grid.
Update nodelist in the inventory is completed for oracle home /u01/18.3.0.0/grid.


Bringing down CRS service on home /u01/18.3.0.0/grid
CRS service brought down successfully on home /u01/18.3.0.0/grid


Starting CRS service on home /u01/18.3.0.0/grid
CRS service started successfully on home /u01/18.3.0.0/grid


Confirm that all resources have been started from home /u01/18.3.0.0/grid.
All resources have been started successfully from home /u01/18.3.0.0/grid.


OPatchAuto successful.

--------------------------------Summary--------------------------------
Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-1
Actual Home : /u01/18.3.0.0/grid_2
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1

OPatchauto session completed at Fri May  3 01:14:25 2019
Time taken to complete the session 7 minutes, 38 seconds

[root@racnode-dc1-1 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[root@racnode-dc1-1 ~]# /media/patch/findhomes.sh
   PID NAME                 ORACLE_HOME
 10486 asm_pmon_+asm1       /u01/18.3.0.0/grid/
 10833 apx_pmon_+apx1       /u01/18.3.0.0/grid/

[root@racnode-dc1-1 ~]# cat /etc/oratab
#Backup file is  /u01/app/oracle/12.1.0.1/db1/srvm/admin/oratab.bak.racnode-dc1-1 line added by Agent
#+ASM1:/u01/18.3.0.0/grid:N
hawk1:/u01/app/oracle/12.1.0.1/db1:N
hawk:/u01/app/oracle/12.1.0.1/db1:N             # line added by Agent
[root@racnode-dc1-1 ~]#

GRID OOP Rollback Patching completed successfully for node2.

[root@racnode-dc1-2 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode-dc1-2 ~]#
[root@racnode-dc1-2 ~]# echo $GRID_HOME
/u01/18.3.0.0/grid_2
[root@racnode-dc1-2 ~]# $GRID_HOME/OPatch/opatchauto rollback -switch-clone -logLevel FINEST

OPatchauto session is initiated at Fri May  3 01:21:39 2019

System initialization log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchautodb/systemconfig2019-05-03_01-21-41AM.log.

Session log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/opatchauto2019-05-03_01-22-46AM.log
The id for this session is 9RAT

Update nodelist in the inventory for oracle home /u01/18.3.0.0/grid.
Update nodelist in the inventory is completed for oracle home /u01/18.3.0.0/grid.


Bringing down CRS service on home /u01/18.3.0.0/grid
CRS service brought down successfully on home /u01/18.3.0.0/grid


Starting CRS service on home /u01/18.3.0.0/grid
CRS service started successfully on home /u01/18.3.0.0/grid


Confirm that all resources have been started from home /u01/18.3.0.0/grid.
All resources have been started successfully from home /u01/18.3.0.0/grid.


OPatchAuto successful.

--------------------------------Summary--------------------------------
Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-2
Actual Home : /u01/18.3.0.0/grid_2
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1


OPatchauto session completed at Fri May  3 01:40:51 2019
Time taken to complete the session 19 minutes, 12 seconds
[root@racnode-dc1-2 ~]#

GRID OOP Rollback completed successfully for 18.5.0.0.0.

GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1

Here is an example of the inventory after rollback.

+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

Validation shows the database is OFFLINE:

+ crsctl stat res -w '((TARGET != ONLINE) or (STATE != ONLINE))' -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.GHCHKPT.advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.helper
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            IDLE,STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE                               Instance Shutdown,STABLE
      2        ONLINE  OFFLINE                               Instance Shutdown,STABLE

Starting the database FAILED:

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance not alive for sid "hawk2"

[oracle@racnode-dc1-2 ~]$ srvctl status database -d $ORACLE_UNQNAME -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is not running on node racnode-dc1-2

[oracle@racnode-dc1-2 ~]$ srvctl start database -d $ORACLE_UNQNAME
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
[oracle@racnode-dc1-2 ~]$


[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance not alive for sid "hawk1"

[oracle@racnode-dc1-1 ~]$ srvctl start database -d hawk
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy
CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
[oracle@racnode-dc1-1 ~]$

Incorrect permissions on the oracle binary in the Grid home were the cause.
Changing permissions on $GRID_HOME/bin/oracle (chmod 6751 $GRID_HOME/bin/oracle), then stopping and starting CRS, resolved the failure.
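
A quick way to spot the condition on both nodes before restarting anything; a sketch only, with the paths and expected mode taken from the listings below:

# Sketch only: check the setuid/setgid bits on the Grid home oracle binary.
for node in racnode-dc1-1 racnode-dc1-2; do
  ssh "$node" 'ls -l /u01/18.3.0.0/grid/bin/oracle'
done
# Expect -rwsr-s--x (6751); -rwxr-x--x means the fix below (chmod 6751) is needed.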

[oracle@racnode-dc1-1 dbs]$ ls -lhrt $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle dba 314M Apr 20 16:06 /u01/app/oracle/12.1.0.1/db1/bin/oracle

[oracle@racnode-dc1-1 dbs]$ ls -lhrt /u01/18.3.0.0/grid/bin/oracle
-rwxr-x--x 1 oracle oinstall 396M Apr 20 19:21 /u01/18.3.0.0/grid/bin/oracle

[oracle@racnode-dc1-1 dbs]$ cd /u01/18.3.0.0/grid/bin/
[oracle@racnode-dc1-1 bin]$ chmod 6751 oracle
[oracle@racnode-dc1-1 bin]$ ls -lhrt /u01/18.3.0.0/grid/bin/oracle
-rwsr-s--x 1 oracle oinstall 396M Apr 20 19:21 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-1 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
[root@racnode-dc1-1 ~]# crsctl stop crs

====================================================================================================

[root@racnode-dc1-2 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM2"

[root@racnode-dc1-2 ~]# ls -lhrt $GRID_HOME/bin/oracle
-rwxr-x--x 1 oracle oinstall 396M Apr 21 01:44 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-2 ~]# chmod 6751 $GRID_HOME/bin/oracle
[root@racnode-dc1-2 ~]# ls -lhrt $GRID_HOME/bin/oracle
-rwsr-s--x 1 oracle oinstall 396M Apr 21 01:44 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-2 ~]# crsctl stop crs

====================================================================================================

[root@racnode-dc1-2 ~]# crsctl start crs
[root@racnode-dc1-1 ~]# crsctl start crs

Reference: RAC Database Can’t Start: ORA-01565, ORA-17503: ksfdopn:10 Failed to open file +DATA/BPBL/spfileBPBL.ora (Doc ID 2316088.1)

April 22, 2019

Final Conclusion for 18c Cluster upgrade state is [NORMAL]

Filed under: 18c,RAC — mdinh @ 3:46 am

Finally, I have reached a point I can live with for the Grid 18c upgrade, because the process runs to completion without any error or intervention.

Note that the ACFS volume is created in the CRS disk group, which may not be ideal for production.

Rapid Home Provisioning Server is configured but is not running.

The outcome differs depending on whether the upgrade is performed via the GUI or silently, as demonstrated in 18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL].

Rene Antunez also demonstrates another method in UPGRADE ORACLE GI FROM 12.1 TO 18.5 FAILS AND LEAVES CRS WITH STATUS OF UPGRADE FINAL.

While we both encountered the same error, “Upgrading RHP Repository failed”, we accomplished the same results via different courses of action.

The unexplained and unanswered question is, “Why is the RHP Repository being upgraded?”

Ultimately, it is cluvfy that changes the cluster upgrade state, and this is shown in gridSetupActions2019-04-21_02-10-47AM.log:

INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:45:37 AM] Executing RHPUPGRADE

INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
INFO: [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'

INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:46:34 AM] Executing CLUVFY
INFO: [Apr 21, 2019 2:46:34 AM] Command /u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all

INFO: [Apr 21, 2019 2:51:37 AM] Completed Plugin named: cvu
INFO: [Apr 21, 2019 2:51:38 AM] ConfigClient.saveSession method called
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'
INFO: [Apr 21, 2019 2:51:38 AM] Completed 'Oracle Cluster Verification Utility'

INFO: [Apr 21, 2019 2:51:38 AM] Successfully executed the flow in SILENT mode
INFO: [Apr 21, 2019 2:51:39 AM] inventory location is/u01/app/oraInventory
INFO: [Apr 21, 2019 2:51:39 AM] Exit Status is 0
INFO: [Apr 21, 2019 2:51:39 AM] Shutdown Oracle Grid Infrastructure 18c Installer

I would suggest running the last step using the GUI, if feasible, rather than silent mode, to see what is happening:

/u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp

So how did I get myself into this predicament? I followed blindly. I trusted but did not verify.

18.1.0.0 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.4 and later on Oracle Linux (Doc ID 2369422.1)

Step 2.1 - Understand how MGMTDB is handled during upgrade

****************************************************************************************************
Upgrading GI 18.1 does not require upgrading MGMTDB nor does it require installing a MGMTDB if it currently does not exist. 
It's the user's discretion to maintain and upgrade the MGMTDB for their application needs.
****************************************************************************************************

Note: MGMTDB is required when using Rapid Home Provisioning.
The Cluster Health Monitor functionality will not work without MGMTDB configured.
If you consider installing a MGMTDB later, it is configured to use 1G of SGA and 500 MB of PGA.
MGMTDB SGA will not be allocated in hugepages (this is because its init.ora setting 'use_large_pages' is set to false).

The following parameters from (Doc ID 2369422.1) were the root cause for all the issues in my test cases.

Because MGMTDB is not required, it made sense to set the following, but doing so resulted in chaos.

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J-Doracle.install.crs.enableRemoteGIMR=false
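
For context, these are Java system properties passed through to gridSetup.sh. The failing invocation is not reproduced in the post, but it would have been along the lines of the working command shown further down, with the switches appended; a sketch only:

# Sketch only: the problematic variant, NOT the command that ultimately worked.
/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
-applyRU /media/patch/Jan2019/28828717 \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp \
-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false \
-J-Doracle.install.crs.enableRemoteGIMR=false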

How To Setup a Rapid Home Provisioning (RHP) Server and Client (Doc ID 2097026.1)

Starting with Oracle Grid Infrastructure 18.1.0.0.0, when you install Oracle Grid Infrastructure, the Rapid Home Provisioning Server is configured, by default, in the local mode to support the local switch home capability. 

The Rapid Home Provisioning Server is configured by default, and there does not appear to be a documented or easily found option to skip the install or bypass this default.

The term RHPS is used interchangeably for the Rapid Home Provisioning Server and Service.
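
If the server should not run at all, it can at least be checked and kept down with srvctl; a minimal sketch (stopping and disabling it is my assumed workaround, not something done in this post):

# Sketch only: check the Rapid Home Provisioning Server and keep it from starting.
. /media/patch/gi.env
srvctl status rhpserver
srvctl stop rhpserver        # only if it happens to be running
srvctl disable rhpserver     # prevent it from being started automatically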

gridsetup_upgrade.rsp is used for the upgrade; the pertinent info is shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

Here is what worked from end to end without any failure or intervention.
The response file was ***not*** modified between test cases.

/u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs \
-applyRU /media/patch/Jan2019/28828717 \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp

Here is what the environment looks like after the 18c GI upgrade.

Notice ACFS is configured for RHP.

[oracle@racnode-dc1-1 ~]$ /media/patch/crs_Query.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
+ exit

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc1-1 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.CRS.GHCHKPT.advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.chad
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
ora.helper
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.proxy_advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-1            169.254.7.214 172.16
                                                             .9.10,STABLE
ora.asm
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc1-1            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,HOME=/u01/app/o
                                                             racle/12.1.0.1/db1,S
                                                             TABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-1            Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ srvctl config mgmtdb -all
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
  /u01/18.3.0.0/grid on node racnode-dc1-1
Oracle user: oracle
Spfile: +CRS/_MGMTDB/PARAMETERFILE/spfile.271.1006137461
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: GIMR_DSCREP_10
PDB service: GIMR_DSCREP_10
Cluster name: vbox-rac-dc1
Management database is enabled.
Management database is individually enabled on nodes:
Management database is individually disabled on nodes:
Database instance: -MGMTDB

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.crs.ghchkpt.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      racnode-dc1-2            volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w 'TYPE = ora.acfs.type' -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-61
VOLUME_DEVICE=/dev/asm/ghchkpt-61

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init
NAME=ora.drivers.acfs
TYPE=ora.drivers.acfs.type
TARGET=ONLINE
STATE=ONLINE on racnode-dc1-1

[oracle@racnode-dc1-1 ~]$ mount|egrep -i 'asm|ghchkpt'
oracleasmfs on /dev/oracleasm type oracleasmfs (rw,relatime)

[oracle@racnode-dc1-1 ~]$ acfsutil version
acfsutil version: 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ acfsutil registry
Mount Object:
  Device: /dev/asm/ghchkpt-61
  Mount Point: /opt/oracle/rhp_images/chkbase
  Disk Group: CRS
  Volume: GHCHKPT
  Options: none
  Nodes: all
  Accelerator Volumes:

[oracle@racnode-dc1-1 ~]$ acfsutil info fs
acfsutil info fs: ACFS-03036: no mounted ACFS file systems

[oracle@racnode-dc1-1 ~]$ acfsutil info storage
Diskgroup      Consumer      Space     Size With Mirroring  Usable Free  %Free   Path
CRS                          59.99              59.99          34.95       58%
DATA                         99.99              99.99          94.76       94%
FRA                          59.99              59.99          59.43       99%
----
unit of measurement: GB

[root@racnode-dc1-1 ~]# srvctl start filesystem -device /dev/asm/ghchkpt-61
PRCA-1138 : failed to start one or more file system resources:
CRS-2501: Resource 'ora.crs.ghchkpt.acfs' is disabled
[root@racnode-dc1-1 ~]#

[oracle@racnode-dc1-1 ~]$ asmcmd -V
asmcmd version 18.0.0.0.0

[oracle@racnode-dc1-1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    35784                0           35784                        Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304    102396    97036                0           97036                        N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     61436    60856                0           60856                        N  FRA/

[oracle@racnode-dc1-1 ~]$ srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is not running

[oracle@racnode-dc1-1 ~]$ ps -ef|grep [p]mon
oracle    3571     1  0 02:40 ?        00:00:03 mdb_pmon_-MGMTDB
oracle   17109     1  0 Apr20 ?        00:00:04 asm_pmon_+ASM1
oracle   17531     1  0 Apr20 ?        00:00:06 ora_pmon_hawk1
[oracle@racnode-dc1-1 ~]$

Let me show you how convoluted this is.
In my case, it’s easy because only two GridSetupActions were performed.
Can you tell which action was performed based on the directory name alone?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 18:59 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 18:56 GridSetupActions2019-04-21_02-10-47AM

This is how you can find out.

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs
$ ls -ld G*
drwxrwx--- 3 oracle oinstall 4096 Apr 21 19:20 GridSetupActions2019-04-20_06-51-48PM
drwxrwx--- 2 oracle oinstall 4096 Apr 21 19:22 GridSetupActions2019-04-21_02-10-47AM

================================================================================
### gridSetup.sh -silent -skipPrereqs -applyRU
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ ll
total 13012
-rw-r----- 1 oracle oinstall   20562 Apr 20 19:09 AttachHome2019-04-20_06-51-48PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall       0 Apr 20 18:59 gridSetupActions2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall 7306374 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall 2374182 Apr 20 19:09 gridSetupActions2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall 3582408 Apr 20 18:59 installerPatchActions_2019-04-20_06-51-48PM.log
-rw-r----- 1 oracle oinstall       0 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.err
-rw-r----- 1 oracle oinstall       0 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall     157 Apr 20 19:02 oraInstall2019-04-20_06-51-48PM.out
-rw-r----- 1 oracle oinstall      29 Apr 20 19:09 oraInstall2019-04-20_06-51-48PM.out.racnode-dc1-2
drwxrwx--- 2 oracle oinstall    4096 Apr 20 19:01 temp_ob
-rw-r----- 1 oracle oinstall   12467 Apr 20 19:09 time2019-04-20_06-51-48PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep ROOTSH_LOCATION gridSetupActions2019-04-20_06-51-48PM.log
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/rootupgrade.sh'. Received the value from a code block.
INFO: Setting variable 'ROOTSH_LOCATION' to '/u01/18.3.0.0/grid/root.sh'. Received the value from a code block.

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-20_06-51-48PM
$ grep "Execute Root Scripts successful" time2019-04-20_06-51-48PM.log
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914
 # Execute Root Scripts successful. # 3228 # 1555780156914

================================================================================
### gridSetup.sh -executeConfigTools -silent
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ ll
total 1116
-rw-r----- 1 oracle oinstall       0 Apr 21 02:10 gridSetupActions2019-04-21_02-10-47AM.err
-rw-r----- 1 oracle oinstall  122568 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall 1004378 Apr 21 02:51 gridSetupActions2019-04-21_02-10-47AM.out
-rw-r----- 1 oracle oinstall     129 Apr 21 02:10 installerPatchActions_2019-04-21_02-10-47AM.log
-rw-r----- 1 oracle oinstall    3155 Apr 21 02:51 time2019-04-21_02-10-47AM.log

oracle@racnode-dc1-1:hawk1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep rhprepos *
gridSetupActions2019-04-21_02-10-47AM.log:INFO:  [Apr 21, 2019 2:45:37 AM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-21_02-10-47AM
$ grep executeSelectedTools gridSetupActions2019-04-21_02-10-47AM.log
INFO:  [Apr 21, 2019 2:11:37 AM] Entering ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate oAggregate=oracle.crs:oracle.crs:18.0.0.0.0:common
INFO:  [Apr 21, 2019 2:11:37 AM] ConfigClient.executeSelectedToolsInAggregate action assigned
INFO:  [Apr 21, 2019 2:51:38 AM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 21, 2019 2:51:38 AM] Exiting ConfigClient.executeSelectedToolsInAggregate method
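
To generalize that, here is a rough sketch (my own, not from any Oracle doc) that classifies every GridSetupActions directory using the same two log markers shown above; the paths and strings are taken from this post and may need adjusting:

# Sketch only: classify GridSetupActions directories by the log markers used above.
cd /u01/app/oraInventory/logs
for d in GridSetupActions*; do
  if grep -qs "Execute Root Scripts successful" "$d"/time*.log; then
    echo "$d : gridSetup.sh install/upgrade run (root scripts executed)"
  elif grep -qs "executeSelectedToolsInAggregate" "$d"/gridSetupActions*.log; then
    echo "$d : gridSetup.sh -executeConfigTools run"
  else
    echo "$d : unable to classify"
  fi
done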

It might be better to use the GUI if available, but be careful.

For OUI installations or execution of critical scripts, it is recommended to use VNC to avoid problems in case the connection to the server is lost.

I was using X forwarding and the connection was lost during the upgrade. It was the kiss of death, with this being the last screen capture.

Rene’s quote:

After looking for information in MOS, there wasn’t much that could lead me on how to solve the issue, just a lot of bugs related to the RHP repository.

I was lucky enough to get on a call with a good friend (@_rickgonzalez) who is the PM of the RHP, and we were able to work through it. So below is what I was able to do to solve the issue.

Also, it was confirmed by them that this is a bug in the upgrade process of 18.X, so hopefully they will be fixing it soon.

I concur, and conclude that the process for the GI 18c upgrade is overly complicated, convoluted, contradictory, and not clearly documented, most of it having to do with MGMTDB and the Rapid Home Provisioning (RHP) repository.

Unless you’re lucky or know someone, good luck with your upgrade.

Lastly, it would be greatly appreciated if you would share your upgrade experiences and/or results.

Did you use GUI or silent?

UPDATE 1:

FAQ: 12c Grid Infrastructure Management Repository (GIMR) (Doc ID 1568402.1)
====================================================================================================
What are the implications of not configuring Management Database during installation/upgrade?

In 12.1.0.1, GIMR is optional, if Management Database is not selected to be configured during installation/upgrade, all features (Cluster Health Monitor (CHM/OS) etc) that depend on it will be disabled.

This changed in 12.1.0.2, it’s mandatory to have GIMR and it’s not supported to be turned off with the exception of Exadata.
====================================================================================================

This may explain why passing the following parameters for RAC (non-Exadata) would fail.

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false
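
As a quick sanity check on the GIMR itself (my own addition, not from the MOS note), the same srvctl commands used elsewhere in this series show whether the Management Database is configured and running:

srvctl status mgmtdb
srvctl config mgmtdb -all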

rhprepos upgradeSchema -fromversion 12.1.0.2.0

oracle@racnode-dc1-1:+ASM1:/media/patch/upgrade18c/run2/GridSetupActions2019-04-21_02-10-47AM
$ grep -B2 -A1000 'Executing RHPUPGRADE' gridSetupActions2019-04-21_02-10-47AM.log
INFO:  [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 21, 2019 2:45:37 AM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 21, 2019 2:45:37 AM] Executing RHPUPGRADE
INFO:  [Apr 21, 2019 2:45:37 AM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
INFO:  [Apr 21, 2019 2:45:37 AM] ... GenericInternalPlugIn.handleProcess() entered.
INFO:  [Apr 21, 2019 2:45:37 AM] ... GenericInternalPlugIn: getting configAssistantParmas.
INFO:  [Apr 21, 2019 2:45:37 AM] ... GenericInternalPlugIn: checking secretArguments.
INFO:  [Apr 21, 2019 2:45:37 AM] No arguments to pass to stdin
INFO:  [Apr 21, 2019 2:45:37 AM] ... GenericInternalPlugIn: starting read loop.
INFO:  [Apr 21, 2019 2:46:31 AM] Completed Plugin named: rhpupgrade
INFO:  [Apr 21, 2019 2:46:31 AM] ConfigClient.saveSession method called

----------------------------------------------------------------------------------------------------
INFO:  [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
INFO:  [Apr 21, 2019 2:46:31 AM] Completed 'Upgrading RHP Repository'
----------------------------------------------------------------------------------------------------

INFO:  [Apr 21, 2019 2:46:34 AM] Started Plugin named: cvu

----------------------------------------------------------------------------------------------------
INFO:  [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO:  [Apr 21, 2019 2:46:34 AM] Starting 'Oracle Cluster Verification Utility'
INFO:  [Apr 21, 2019 2:46:34 AM] Executing CLUVFY
INFO:  [Apr 21, 2019 2:46:34 AM] Command /u01/18.3.0.0/grid/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
----------------------------------------------------------------------------------------------------

INFO:  [Apr 21, 2019 2:51:37 AM] Completed Plugin named: cvu
INFO:  [Apr 21, 2019 2:51:38 AM] Successfully executed the flow in SILENT mode
INFO:  [Apr 21, 2019 2:51:39 AM] inventory location is/u01/app/oraInventory

----------------------------------------------------------------------------------------------------
INFO:  [Apr 21, 2019 2:51:39 AM] Exit Status is 0
INFO:  [Apr 21, 2019 2:51:39 AM] Shutdown Oracle Grid Infrastructure 18c Installer
----------------------------------------------------------------------------------------------------

oracle@racnode-dc1-1:+ASM1:/media/patch/upgrade18c/run2/GridSetupActions2019-04-21_02-10-47AM
$

April 16, 2019

18c Upgrade: Failed gridSetup.sh -executeConfigTools: Cluster upgrade state is [UPGRADE FINAL]

Filed under: 18c,RAC — mdinh @ 9:53 pm

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

This is a multi-part series on the 18c upgrade; I suggest reading the above two posts first.

Commands for gridSetup.sh

+ /u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs -applyRU /media/patch/Jan2019/28828717 -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false
Preparing the home to patch...
Applying the patch /media/patch/Jan2019/28828717...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/installerPatchActions_2019-04-16_06-19-12AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/18.3.0.0/grid/install/response/grid_2019-04-16_06-19-12AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/gridSetupActions2019-04-16_06-19-12AM.log

As a root user, execute the following script(s):
        1. /u01/18.3.0.0/grid/rootupgrade.sh

Execute /u01/18.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc1-1, racnode-dc1-2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp [-silent]


+ exit
oracle@racnode-dc1-1::/home/oracle
$

Basically, the error provided is utterly useless.

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
oracle@racnode-dc1-1::/home/oracle

Check logs from directory /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ cd /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ ls -alrt
total 1072
-rw-r----- 1 oracle oinstall     130 Apr 16 12:59 installerPatchActions_2019-04-16_12-59-56PM.log
-rw-r----- 1 oracle oinstall       0 Apr 16 12:59 gridSetupActions2019-04-16_12-59-56PM.err
drwxrwx--- 8 oracle oinstall    4096 Apr 16 13:01 ..
-rw-r----- 1 oracle oinstall 1004378 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.out
-rw-r----- 1 oracle oinstall    2172 Apr 16 13:01 time2019-04-16_12-59-56PM.log ***
-rw-r----- 1 oracle oinstall   73047 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.log ***
drwxrwx--- 2 oracle oinstall    4096 Apr 16 13:01 .

Check time2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ cat time2019-04-16_12-59-56PM.log
 # Message # ElapsedTime # Current Time ( ms )
 # Starting step:INITIALIZE_ACTION of state:init #  0  # 1555412405106
 # Finished step:INITIALIZE_ACTION of state:init # 1 # 1555412405106
 # Starting step:EXECUTE of state:init #  0  # 1555412405108
 # Finished step:EXECUTE of state:init # 3 # 1555412405111
 # Starting step:VALIDATE of state:init #  0  # 1555412405113
 # Finished step:VALIDATE of state:init # 2 # 1555412405115
 # Starting step:TRANSITION of state:init #  0  # 1555412405115
 # Finished step:TRANSITION of state:init # 2 # 1555412405117
 # Starting step:EXECUTE of state:CRSConfigTools #  0  # 1555412405117
 # Finished step:EXECUTE of state:CRSConfigTools # 813 # 1555412405930
 # Starting step:VALIDATE of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:VALIDATE of state:CRSConfigTools # 0 # 1555412405930
 # Starting step:TRANSITION of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:TRANSITION of state:CRSConfigTools # 26591 # 1555412432521
 # Starting step:INITIALIZE_ACTION of state:setup #  0  # 1555412432521
 # Finished step:INITIALIZE_ACTION of state:setup # 0 # 1555412432521
 # Starting step:EXECUTE of state:setup #  0  # 1555412432522
 # Finished step:EXECUTE of state:setup # 6 # 1555412432528
 # Configuration in progress. #  0  # 1555412436788
 # Update Inventory in progress. #  0  # 1555412437768
 # Update Inventory successful. # 52612 # 1555412490380
 # Upgrading RHP Repository in progress. #  0  # 1555412490445

================================================================================
 # Upgrading RHP Repository failed. # 12668 # 1555412503112
================================================================================

 # Starting step:VALIDATE of state:setup #  0  # 1555412503215
 # Finished step:VALIDATE of state:setup # 15 # 1555412503230
 # Starting step:TRANSITION of state:setup #  0  # 1555412503230
 # Finished step:TRANSITION of state:setup # 0 # 1555412503230
 # Starting step:EXECUTE of state:finish #  0  # 1555412503230
 # Finished step:EXECUTE of state:finish # 6 # 1555412503236
 # Starting step:VALIDATE of state:finish #  0  # 1555412503237
 # Finished step:VALIDATE of state:finish # 1 # 1555412503238
 # Starting step:TRANSITION of state:finish #  0  # 1555412503238
 # Finished step:TRANSITION of state:finish # 0 # 1555412503238

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

Check gridSetupActions2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -B2 -A100 'Executing RHPUPGRADE' gridSetupActions2019-04-16_12-59-56PM.log
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn.handleProcess() entered.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: getting configAssistantParmas.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: checking secretArguments.
INFO:  [Apr 16, 2019 1:01:30 PM] No arguments to pass to stdin
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: starting read loop.
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 16, 2019 1:01:43 PM] Exiting ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus SUCCESS_MINUS_RECTOOL to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Calling event ConfigSessionEnding
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.endSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Configuration
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus FAILURE to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] All forked task are completed at state setup
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <setup>

================================================================================
WARNING:  [Apr 16, 2019 1:01:43 PM] [WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
================================================================================

INFO:  [Apr 16, 2019 1:01:43 PM] Advice is CONTINUE
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <setup>
INFO:  [Apr 16, 2019 1:01:43 PM] Verifying route success
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Executing action at state finish
INFO:  [Apr 16, 2019 1:01:43 PM] FinishAction Actions.execute called
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] Completed executing action at state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Moved to state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <finish>
WARNING:  [Apr 16, 2019 1:01:43 PM] Validation disabled for the state finish
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Terminating all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Terminated all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Successfully executed the flow in SILENT mode
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] inventory location is/u01/app/oraInventory
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application

================================================================================
INFO:  [Apr 16, 2019 1:01:43 PM] Exit Status is -1
INFO:  [Apr 16, 2019 1:01:43 PM] Shutdown Oracle Grid Infrastructure 18c Installer
INFO:  [Apr 16, 2019 1:01:43 PM] Unloading Setup Driver
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

The Exit Status of -1 is probably why the cluster upgrade state is [UPGRADE FINAL].

Why is the RHP Repository being upgraded when oracle_install_crs_ConfigureRHPS=false?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -i rhp *
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:04 PM] Setting value for the property:oracle_install_crs_ConfigureRHPS in the bean:CRSInstallSettings
gridSetupActions2019-04-16_12-59-56PM.log: oracle_install_crs_ConfigureRHPS                       false
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Created config job for rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Selecting job named 'Upgrading RHP Repository' for retry
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Started Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository in progress. #  0  # 1555412490445
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository failed. # 12668 # 1555412503112
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

gridsetup_upgrade.rsp is the response file used for the upgrade; the pertinent entries are shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false
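
As an aside, a quick way to see only the entries that were actually set in the response file (my own one-liner; it is roughly what the sdiff above boils down to):

grep -E '^[^#].*=.+' /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp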

ora.cvu does not report any errors.

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=
oracle@racnode-dc1-1:+ASM1:/home/oracle
$

Run rhprepos upgradeSchema -fromversion 12.1.0.2.0 – FAILED.

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
PRCT-1474 : failed to run 'mgmtca' on node racnode-dc1-2.

oracle@racnode-dc1-1::/home/oracle
$ ps -ef|grep pmon
oracle    9722  4804  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle   10380     1  0 13:46 ?        00:00:01 asm_pmon_+ASM1
oracle   10974     1  0 13:46 ?        00:00:01 apx_pmon_+APX1
oracle   11218     1  0 13:47 ?        00:00:02 ora_pmon_hawk1
oracle@racnode-dc1-1::/home/oracle
$ ssh racnode-dc1-2
Last login: Tue Apr 16 18:44:30 2019

----------------------------------------
Welcome to racnode-dc1-2
OracleLinux 7.3 x86_64

FQDN: racnode-dc1-2.internal.lab
IP:   10.0.2.15

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    2
Memory:    5709 MB
Kernel:    4.1.12-61.1.18.el7uek.x86_64

----------------------------------------

oracle@racnode-dc1-2::/home/oracle
$ ps -ef|grep pmon
oracle    9219     1  0 13:44 ?        00:00:01 asm_pmon_+ASM2
oracle   10113     1  0 13:45 ?        00:00:01 apx_pmon_+APX2
oracle   10619     1  0 13:45 ?        00:00:01 ora_pmon_hawk2
oracle   13200 13178  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle@racnode-dc1-2::/home/oracle
$

In conclusion, the silent upgrade process is poorly documented at best.

I am starting to wonder if the following parameters contributed to the issue:

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false

April 15, 2019

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

Filed under: 18c,RAC — mdinh @ 12:54 am

There were a lot of discussions about Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]
and how cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade could have changed the cluster upgrade state to [NORMAL].

When gridSetup.sh -executeConfigTools is run in silent mode, the next step, cluvfy, is not run.

[oracle@racnode-dc1-1 ~]$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-04-15_01-02-06AM

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
[oracle@racnode-dc1-1 ~]$
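
Since the silent run stops short of cluvfy, the same check can be run by hand afterwards; this is the invocation from the April 13 post further down:

cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade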

When gridSetup.sh -executeConfigTools is run in the GUI, there is an option to ignore the failed 'Upgrading RHP Repository' step and continue to the next step, which runs cluvfy.

I don’t think cluvfy itself modified the state of the cluster; rather, ora.cvu did, due to the existence of the following files.

[root@racnode-dc1-1 install]# pwd
/u01/app/oracle/crsdata/@global/cvu/baseline/install
[root@racnode-dc1-1 install]# ll
total 36000
-rw-r--r-- 1 oracle oinstall 35958465 Apr 14 06:05 grid_install_12.1.0.2.0.xml
-rw-r--r-- 1 oracle oinstall   901803 Apr 15 01:42 grid_install_18.0.0.0.0.zip
[root@racnode-dc1-1 install]# 

When checking RESULTS from ora.cvu, there are no errors.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-1 ~]$
[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=
[oracle@racnode-dc1-1 ~]$ 
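
If ora.cvu is indeed what flipped the state, its scheduling can be inspected from the resource profile. This is pure speculation on my part; the grep simply filters the same crsctl output for any interval-related attributes:

crsctl stat res -w "TYPE = ora.cvu.type" -p | grep -i interval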

Hell! What do I know as I am just a RAC novice and happy the cluster state is what it should be.

gridsetup_upgrade.rsp was used for the upgrade; the pertinent entries are shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

April 13, 2019

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

Filed under: 18c,RAC — mdinh @ 11:13 pm

After upgrading and applying the RU for Grid 18c, the cluster upgrade state was not NORMAL.

The cluster upgrade state was [UPGRADE FINAL], which I had never seen before.

Searching Oracle Support was useless, as I was only able to find the following states:

The cluster upgrade state is [NORMAL]
The cluster upgrade state is [FORCED]
The cluster upgrade state is [ROLLING PATCH]

The following checks were performed after the upgrade:

[oracle@racnode-dc1-1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].

[oracle@racnode-dc1-1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].

[oracle@racnode-dc1-1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
[oracle@racnode-dc1-1 ~]$

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-1 ~]#


[oracle@racnode-dc1-2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]

[oracle@racnode-dc1-2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]

[oracle@racnode-dc1-2 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2532936542].

[oracle@racnode-dc1-2 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].

[oracle@racnode-dc1-2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

[root@racnode-dc1-2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-2 ~]#

Checked OCR per Grid Infrastructure Upgrade : The cluster upgrade state is [FORCED] (Doc ID 2482606.1).
I was desperate, and OCR was fine.

[root@racnode-dc1-1 ~]# olsnodes -c
vbox-rac-dc1

[root@racnode-dc1-1 ~]# olsnodes -t -a -s -n
racnode-dc1-1   1       Active  Hub     Unpinned
racnode-dc1-2   2       Active  Hub     Unpinned

[root@racnode-dc1-1 ~]# $GRID_HOME/bin/ocrdump /tmp/ocrdump.txt

[root@racnode-dc1-1 ~]# grep SYSTEM.version.hostnames /tmp/ocrdump.txt
[SYSTEM.version.hostnames]
[SYSTEM.version.hostnames.racnode-dc1-1]
[SYSTEM.version.hostnames.racnode-dc1-1.patchlevel]
[SYSTEM.version.hostnames.racnode-dc1-1.site]
[SYSTEM.version.hostnames.racnode-dc1-2]
[SYSTEM.version.hostnames.racnode-dc1-2.patchlevel]
[SYSTEM.version.hostnames.racnode-dc1-2.site]
[root@racnode-dc1-1 ~]#
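
For completeness (my own extra check, not part of the MOS note), ocrcheck gives a quick integrity verdict on the OCR as well; run it as root:

$GRID_HOME/bin/ocrcheck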

Thanks to my friend Vlatko J. https://twitter.com/jvlatko

Run cluvfy:

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ which cluvfy
/u01/18.3.0.0/grid/bin/cluvfy

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade

Baseline collected.
Collection report for this execution is saved in file "/u01/app/oracle/crsdata/@global/cvu/baseline/install/grid_install_18.0.0.0.0.zip".

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         Apr 13, 2019 11:05:58 PM
CVU home:                     /u01/18.3.0.0/grid/
User:                         oracle
oracle@racnode-dc1-1:+ASM1:/home/oracle
$

After running cluvfy, the cluster upgrade state is [NORMAL].

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-1 ~]#

[root@racnode-dc1-2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-2 ~]#

February 26, 2019

Oracle BAD 18c Grid Image

Filed under: 18c — mdinh @ 2:25 pm

I am screwed before I even start.

Why doesn’t Oracle update the bad image and release a new one!

Downloaded LINUX.X64_180000_grid_home.zip (Oracle Database 18c Grid Infrastructure (18.3) for Linux x86-64)
from Oracle Database 18c (18.3)

Unzip software:

unzip -qod /u01/app/18.0.0/grid /sf_OracleSoftware/LINUX.X64_180000_grid_home.zip

Run: runcluvfy.sh stage -pre crsinst FAILED

/u01/app/18.0.0/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.1.0.1/grid -dest_crshome /u01/app/18.0.0/grid -dest_version 18.0.0.0.0 -fixup -verbose

Pre-check for cluster services setup was unsuccessful.

Checks did not pass for the following nodes:
        racnode-dc1-2,racnode-dc1-1


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Physical Memory ...FAILED
racnode-dc1-2: PRVF-7530 : Sufficient physical memory is not available on node
               "racnode-dc1-2" [Required physical memory = 8GB (8388608.0KB)]

racnode-dc1-1: PRVF-7530 : Sufficient physical memory is not available on node
               "racnode-dc1-1" [Required physical memory = 8GB (8388608.0KB)]

Verifying Swap Size ...FAILED
racnode-dc1-2: PRVF-7573 : Sufficient swap size is not available on node
               "racnode-dc1-2" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB
               (2097148.0KB)]

racnode-dc1-1: PRVF-7573 : Sufficient swap size is not available on node
               "racnode-dc1-1" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB
               (2097148.0KB)]

Verifying Check incorrectly sized ASM Disks ...FAILED
PRCT-1065 : Failed to verify the size consistency of ASM disks on node
"racnode-dc1-1". kfod execution failed at location "/u01/app/18.0.0/grid//bin".
Detailed error:
/u01/app/18.0.0/grid//bin/kfod.bin: error while loading shared libraries:
libasmclntsh18.so: cannot open shared object file: No such file or directory


CVU operation performed:      stage -pre crsinst
Date:                         Feb 25, 2019 10:16:18 PM
CVU home:                     /u01/app/18.0.0/grid/
User:                         oracle

The software was not properly tested.

I thought it was probably a mistake, and that the software should have been downloaded from https://edelivery.oracle.com instead.

NOPE: Same results.

Unzip software:

unzip -qod /u01/app/18.0.0/grid /sf_OracleSoftware/18cLinux/V978971-01.zip

Pre-check for cluster services setup was unsuccessful.

Checks did not pass for the following nodes:
        racnode-dc1-2,racnode-dc1-1


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Physical Memory ...FAILED
racnode-dc1-2: PRVF-7530 : Sufficient physical memory is not available on node
               "racnode-dc1-2" [Required physical memory = 8GB (8388608.0KB)]

racnode-dc1-1: PRVF-7530 : Sufficient physical memory is not available on node
               "racnode-dc1-1" [Required physical memory = 8GB (8388608.0KB)]

Verifying Swap Size ...FAILED
racnode-dc1-2: PRVF-7573 : Sufficient swap size is not available on node
               "racnode-dc1-2" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB
               (2097148.0KB)]

racnode-dc1-1: PRVF-7573 : Sufficient swap size is not available on node
               "racnode-dc1-1" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB
               (2097148.0KB)]

Verifying Check incorrectly sized ASM Disks ...FAILED
PRCT-1065 : Failed to verify the size consistency of ASM disks on node
"racnode-dc1-1". kfod execution failed at location "/u01/app/18.0.0/grid//bin".
Detailed error:
/u01/app/18.0.0/grid//bin/kfod.bin: error while loading shared libraries:
libasmclntsh18.so: cannot open shared object file: No such file or directory


CVU operation performed:      stage -pre crsinst
Date:                         Feb 26, 2019 2:37:48 PM
CVU home:                     /u01/app/18.0.0/grid/
User:                         oracle
[oracle@racnode-dc1-1 ~]$ ll /u01/app/18.0.0/grid/bin/k*
-rw-r--r-- 1 oracle oinstall      0 Jul 18  2018 /u01/app/18.0.0/grid/bin/kfed
-rwxr-xr-x 1 oracle oinstall    472 Jul 18  2018 /u01/app/18.0.0/grid/bin/kfod
-rwxr-x--x 1 oracle oinstall 144740 Jul 18  2018 /u01/app/18.0.0/grid/bin/kfod.bin
-rw-r--r-- 1 oracle oinstall      0 Jul 18  2018 /u01/app/18.0.0/grid/bin/kgmgr
[oracle@racnode-dc1-1 ~]$
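
Out of curiosity, and purely as a guess on my part, the kfod.bin complaint can be cross-checked by searching the new, not-yet-configured home for the library it cannot load:

find /u01/app/18.0.0/grid -name 'libasmclntsh*' -ls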

PRCT-1065 Failures During cluvfy Upgrade Verification (Doc ID 2279848.1)

Oracle Support suggested applying Oracle® Database Patch 28828717 – GI Release Update 18.5.0.0.190115 following

How to Apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is Executed? (Doc ID 1410202.1)

UPDATE1:

The option to apply the RU to the existing image did not work.

 
### Patch 28828717 - GI Release Update 18.5.0.0.190115

[oracle@racnode-dc1-1 ~]$ env|egrep -i 'oracle|home'
USER=oracle
MAIL=/var/spool/mail/oracle
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin
PWD=/home/oracle
HOME=/home/oracle
LOGNAME=oracle
OLDPWD=/u01/app/oracle/patch

[oracle@racnode-dc1-1 ~]$ ls -l /u01/app/oracle/patch/28828717
total 148
drwxr-x--- 4 oracle oinstall   4096 Jan  9 23:37 28435192
drwxr-x--- 4 oracle oinstall   4096 Jan  9 23:38 28547619
drwxr-x--- 4 oracle oinstall   4096 Jan  9 23:37 28822489
drwxr-x--- 5 oracle oinstall   4096 Jan  9 23:36 28864593
drwxr-x--- 5 oracle oinstall   4096 Jan  9 23:34 28864607
drwxr-x--- 2 oracle oinstall   4096 Jan  9 23:38 automation
-rw-rw-r-- 1 oracle oinstall   5828 Jan  9 16:02 bundle.xml
-rw-r--r-- 1 oracle oinstall 117023 Jan  9 15:43 README.html
-rw-r--r-- 1 oracle oinstall      0 Jan  9 23:37 README.txt

[oracle@racnode-dc1-1 ~]$ /u01/app/18.0.0/grid/gridSetup.sh -silent -applyRU /u01/app/oracle/patch/28828717
Preparing the home to patch...
Applying the patch /u01/app/oracle/patch/28828717...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2019-02-28_00-26-20AM/installerPatchActions_2019-02-28_00-26-20AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-40426] Grid installation option has not been specified.
   ACTION: Specify the valid installation option.
[oracle@racnode-dc1-1 ~]$
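
Presumably the missing piece is an installation option. For comparison, the run that eventually succeeded (see the April 16 post above) passed a response file together with -applyRU; adapted to the paths in this post, and untested in this exact scenario, it would look like:

/u01/app/18.0.0/grid/gridSetup.sh -silent -skipPrereqs -applyRU /u01/app/oracle/patch/28828717 -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp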

====================================================================================================

[oracle@racnode-dc1-1 bin]$ ps -ef|grep smon
oracle   16991     1  0 Feb27 ?        00:00:00 asm_smon_+ASM1
root     17031     1  0 Feb27 ?        00:00:30 /u01/app/12.1.0.1/grid/bin/osysmond.bin
oracle   17558     1  0 Feb27 ?        00:00:00 mdb_smon_-MGMTDB
oracle   17971     1  0 Feb27 ?        00:00:00 ora_smon_hawk1
oracle   23381 22842  0 00:36 pts/1    00:00:00 grep --color=auto smon

[oracle@racnode-dc1-1 bin]$ ps -ef|grep d.bin
root      1664     1  0 Feb27 ?        00:00:41 /u01/app/12.1.0.1/grid/bin/ohasd.bin reboot
root      5319     1  0 Feb27 ?        00:00:14 /u01/app/12.1.0.1/grid/bin/orarootagent.bin
oracle    5686     1  0 Feb27 ?        00:00:14 /u01/app/12.1.0.1/grid/bin/oraagent.bin
oracle    5752     1  0 Feb27 ?        00:00:09 /u01/app/12.1.0.1/grid/bin/mdnsd.bin
oracle    5759     1  0 Feb27 ?        00:00:27 /u01/app/12.1.0.1/grid/bin/evmd.bin
oracle    6476     1  0 Feb27 ?        00:00:09 /u01/app/12.1.0.1/grid/bin/gpnpd.bin
oracle    6571  5759  0 Feb27 ?        00:00:08 /u01/app/12.1.0.1/grid/bin/evmlogger.bin -o /u01/app/12.1.0.1/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u01/app/12.1.0.1/grid/log/[HOSTNAME]/evmd/evmlogger.log
oracle    6947     1  0 Feb27 ?        00:00:23 /u01/app/12.1.0.1/grid/bin/gipcd.bin
root      8467     1  0 Feb27 ?        00:00:10 /u01/app/12.1.0.1/grid/bin/cssdmonitor
root      8565     1  0 Feb27 ?        00:00:10 /u01/app/12.1.0.1/grid/bin/cssdagent
oracle    8638     1  0 Feb27 ?        00:00:25 /u01/app/12.1.0.1/grid/bin/ocssd.bin
root     12389     1  0 Feb27 ?        00:00:20 /u01/app/12.1.0.1/grid/bin/octssd.bin reboot
root     17031     1  0 Feb27 ?        00:00:30 /u01/app/12.1.0.1/grid/bin/osysmond.bin
root     17060     1  0 Feb27 ?        00:00:39 /u01/app/12.1.0.1/grid/bin/crsd.bin reboot
oracle   17144     1  0 Feb27 ?        00:00:24 /u01/app/12.1.0.1/grid/bin/oraagent.bin
root     17152     1  0 Feb27 ?        00:00:22 /u01/app/12.1.0.1/grid/bin/orarootagent.bin
root     17236     1  1 Feb27 ?        00:01:16 /u01/app/12.1.0.1/grid/bin/ologgerd -M -d /u01/app/12.1.0.1/grid/crf/db/racnode-dc1-1
oracle   17332     1  0 Feb27 ?        00:00:00 /u01/app/12.1.0.1/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
oracle   17340     1  0 Feb27 ?        00:00:00 /u01/app/12.1.0.1/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracle   17368     1  0 Feb27 ?        00:00:00 /u01/app/12.1.0.1/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
root     20540 17031  0 Feb27 ?        00:00:05 /u01/app/12.1.0.1/grid/perl/bin/perl /u01/app/12.1.0.1/grid/bin/diagsnap.pl start
oracle   23390 22842  0 00:36 pts/1    00:00:00 grep --color=auto d.bin

====================================================================================================

[oracle@racnode-dc1-1 bin]$ ./crsctl stat res -t
-bash: ./crsctl: No such file or directory
[oracle@racnode-dc1-1 bin]$ ls crs*
crscdpd.bin  crsctl.bin  crsd.bin  crsdiag.pl  crsrename.pl  crstmpl.scr

[oracle@racnode-dc1-1 bin]$ /u01/app/18.0.0/grid/OPatch/opatch lsinventory -detail
Oracle Interim Patch Installer version 12.2.0.1.16
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/18.0.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 12.2.0.1.16
OUI version       : 12.2.0.4.0
Log file location : /u01/app/18.0.0/grid/cfgtoollogs/opatch/opatch2019-02-28_00-37-47AM_1.log

List of Homes on this system:

  Home name= OraGI12Home1, Location= "/u01/app/12.1.0.1/grid"
  Home name= OraDB12Home1, Location= "/u01/app/oracle/12.1.0.1/db1"
LsInventorySession failed: RawInventory gets null OracleHomeInfo

OPatch failed with error code 73
[oracle@racnode-dc1-1 bin]$

====================================================================================================

[oracle@racnode-dc1-2 ~]$ cd /u01/app/18.0.0/grid/
[oracle@racnode-dc1-2 grid]$ ls
[oracle@racnode-dc1-2 grid]$

UPDATE2:

From Oracle support:

Hello Michael - 

Bugs are not published.

I am going to raise a predefect and check this with tier 1 engineers. Before I do that however, can you try using the latest version of cluvfy and see if the issue reproduces - 

HOW TO UPGRADE CLUVFY IN CRS_HOME ( Doc ID 969282.1 ) 
Goal 
The CVU Cluster Verification Utility is enhanced independently of CRS. 
Newer versions of CVU can be downloaded from 
http://www.oracle.com/technetwork/database/options/clustering/downloads/cvu-download-homepage-099973.html 
and installed independently from Clusterware. 

Unfortunately, my VM had issues with snapshots, so I cannot test until a new one is created, and I am having nightmares creating the new VM.

UPDATE3

	
Hello, 

I cannot use the cluvfy from https://www.oracle.com/technetwork/database/options/clustering/downloads/cvu-download-homepage-099973.html because it is not the correct version.

Please validate your info, as you are sending me on a wild goose chase to try this and that, wasting a lot of my time.

[oracle@racnode-dc1-1 bin]$ ./cluvfy -version 
12.2.0.1.0 Build 061318x8664 
[oracle@racnode-dc1-1 bin]$ ./cluvfy stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.1.0.1/grid -dest_crshome /u01/app/18.0.0/grid -dest_version 18.0.0.0.0 
PRKC-1137 : Unable to find Version object with string value 18.0.0.0.0 

Verification cannot proceed 

[oracle@racnode-dc1-1 bin]$ 

UPDATE4

This behavior seems to be related to 
Bug 27447782 - HPI_181: PRE UPGRADE CHECK HIT PRCT1065 KFOD EXECUTION FAILED 

This fix will be included in a future 18 OCW RU as per 
Bug 29467750 - CONTENT INCLUSION OF 27447782 IN OCW RU 18.0.0.0.0 

So please upload "opatch lsinventory -detail" output so we can request backport on top of our current patching level. 
Is there any plan to apply any other patch in the near future? If so, please share the patch number so we can confirm there won't be conflicts or request a merge patch 
