WARNING: Please don’t try this at home or in a production environment.
With that being said, this patching was performed on a production DR environment.
Oracle Interim Patch Installer version 12.2.0.1.16
Patching a 2-node RAC cluster; node1 completed successfully.
The rationale for using -norestart: there was an issue at one time where datapatch was run against node1 while the standby was only mounted. Lesson learned: don’t implement Active Data Guard and leave the database with Start options: mount.
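Before patching, it is worth confirming how clusterware will start the standby. A minimal sketch, assuming the database name from this post (dbproddr); the start option value shown is illustrative:

$ srvctl config database -d dbproddr | grep -i 'start options'   # expect: Start options: open

# With Active Data Guard, start options should be open (the standby then
# opens read only under redo apply), not mount:
$ srvctl modify database -d dbproddr -startoption open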
# crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      2        ONLINE  INTERMEDIATE node2                    Mounted (Closed),STABLE
ora.dbproddr.dbdr.svc
      2        ONLINE  OFFLINE                               STABLE
--------------------------------------------------------------------------------

$ srvctl status database -d dbproddr -v
Instance dbproddr1 is running on node node1 with online services dbdr. Instance status: Open,Readonly.
Instance dbproddr2 is running on node node2. Instance status: Mounted (Closed).
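As an aside, the -w filter in the capture above is missing a closing parenthesis; a balanced form of the same filter, which lists only resources that are not fully online, would be:

crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE))'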
Ran opatchauto and pressed Ctrl-C because the session appeared to be stuck.
node2 ~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019
node2 ~ # $GRID_HOME/OPatch/opatchauto apply $PATCH_TOP_DIR/28833531 -norestart

OPatchauto session is initiated at Thu May 16 20:20:24 2019

System initialization log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-05-16_08-20-26PM.log.

Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_08-20-47PM.log
The id for this session is K43Y

Executing OPatch prereq operations to verify patch applicability on home /u02/app/12.1.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.1.0/db
Patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db
Patch applicability verified successfully on home /u02/app/12.1.0/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/12.1.0/db
"/bin/sh -c 'cd /u01/app/oracle/product/12.1.0/db; ORACLE_HOME=/u01/app/oracle/product/12.1.0/db ORACLE_SID=dbproddr2 /u01/app/oracle/product/12.1.0/db/OPatch/datapatch -prereq -verbose'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be analyzed by manually retrying the same command.
SQL patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db

Preparing to bring down database service on home /u01/app/oracle/product/12.1.0/db
Successfully prepared home /u01/app/oracle/product/12.1.0/db to bring down database service

Bringing down CRS service on home /u02/app/12.1.0/grid
Prepatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
CRS service brought down successfully on home /u02/app/12.1.0/grid

Performing prepatch operation on home /u01/app/oracle/product/12.1.0/db
Prepatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db

Start applying binary patch on home /u01/app/oracle/product/12.1.0/db
Binary patch applied successfully on home /u01/app/oracle/product/12.1.0/db

Performing postpatch operation on home /u01/app/oracle/product/12.1.0/db
Postpatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db

Start applying binary patch on home /u02/app/12.1.0/grid
Binary patch applied successfully on home /u02/app/12.1.0/grid

Starting CRS service on home /u02/app/12.1.0/grid

*** Ctrl-C as shown below ***
^C
OPatchauto session completed at Thu May 16 21:41:58 2019
*** Time taken to complete the session 81 minutes, 34 seconds ***

opatchauto failed with error code 130
This was not good, since the session then disconnected while I was troubleshooting in another session.
node2 ~ # timed out waiting for input: auto-logout
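After reconnecting, a few quick sanity checks help establish what state the interrupted session left behind. A sketch using standard commands (exit code 130 is 128 plus SIGINT, confirming the apply died from the Ctrl-C rather than failing on its own):

crsctl check crs                                   # is the GI stack up on this node?
crsctl query crs activeversion -f                  # upgrade state and active patch level
$GRID_HOME/OPatch/opatch lspatches -oh $GRID_HOME  # what is actually installed in the GI home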
Even though the opatchauto session was terminated, the cluster upgrade state showed [NORMAL] rather than the expected [ROLLING PATCH].
node2 ~ # crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [323461694].

node2 ~ # crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'

node2 ~ # crsctl stat res -t -w 'TYPE = ora.database.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      1        ONLINE  ONLINE       node1                    Open,Readonly,STABLE
      2        ONLINE  ONLINE       node2                    Open,Readonly,STABLE
--------------------------------------------------------------------------------
At this point, I was not sure what to do, since everything looked good and online.
A colleague helping me troubleshoot stated that the patch had completed successfully, and the main question was whether we needed to try “opatchauto resume”.
However, I was not comfortable with the outcome, so I tried opatchauto resume, and it worked like magic.
Reconnect and opatchauto resume
mdinh@node2 ~ $ sudo su -
~ # . /home/oracle/working/dinh/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM4
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u02/app/12.1.0/grid
ORACLE_HOME=/u02/app/12.1.0/grid
Oracle Instance alive for sid "+ASM4"

~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019/
~ # $GRID_HOME/OPatch/opatchauto resume

OPatchauto session is initiated at Thu May 16 22:03:09 2019
Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_10-03-10PM.log
Resuming existing session with id K43Y

Starting CRS service on home /u02/app/12.1.0/grid
Postpatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
CRS service started successfully on home /u02/app/12.1.0/grid

Preparing home /u01/app/oracle/product/12.1.0/db after database service restarted
OPatchauto is running in norestart mode. PDB instances will not be checked for database on the current node.
No step execution required.........

Trying to apply SQL patch on home /u01/app/oracle/product/12.1.0/db
SQL patch applied successfully on home /u01/app/oracle/product/12.1.0/db

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:node2
RAC Home:/u01/app/oracle/product/12.1.0/db
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/patches/Jan2019/28833531/28729220
Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log

Host:node2
CRS Home:/u02/app/12.1.0/grid
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is already been applied, so not going to apply again.

==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28729220
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patching session reported following warning(s):
_________________________________________________

[WARNING] The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host 'node2' is not running. SQL changes, if any, will not be applied. To apply the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle). Refer to the readme to get the correct steps for applying the sql changes.

[WARNING] The database instances will not be brought up under the 'norestart' option. The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host 'node2' is not running. SQL changes, if any, will not be applied. To apply the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle). Refer to the readme to get the correct steps for applying the sql changes.

OPatchauto session completed at Thu May 16 22:10:01 2019
Time taken to complete the session 6 minutes, 52 seconds
~ #
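The warnings are expected under -norestart: the local instance was down, so the SQL step was skipped for it. A hedged sketch of the follow-up the warning asks for, using names shown earlier in this post (dbproddr/dbproddr2); note that on a Data Guard standby, datapatch is normally run on the primary and the SQL changes arrive through redo apply:

# Bring the stopped instance back up (as oracle)
srvctl start instance -d dbproddr -i dbproddr2

# On the primary, the manual SQL step would be:
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db
export ORACLE_SID=dbproddr2
$ORACLE_HOME/OPatch/datapatch -verbose

# Verify what datapatch has registered (read-only query, safe on an open standby)
$ORACLE_HOME/bin/sqlplus -s / as sysdba <<'EOF'
select patch_id, action, status, action_time
  from dba_registry_sqlpatch
 order by action_time;
EOF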
Logs:
oracle@node2:/u02/app/12.1.0/grid/cfgtoollogs/crsconfig
> ls -alrt
total 508
drwxr-x--- 2 oracle oinstall   4096 Nov 23 02:15 oracle
-rwxrwxr-x 1 oracle oinstall 167579 Nov 23 02:15 rootcrs_node2_2018-11-23_02-07-58AM.log
drwxrwxr-x 9 oracle oinstall   4096 Apr 10 12:05 ..

opatchauto apply - Prepatch operation log file location:
/u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  33020 May 16 20:22 crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================

Mysterious log file - Unknown where this log is from because it was not from my terminal output.
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  86983 May 16 21:42 crspatch_node2_2019-05-16_08-27-35PM.log
====================================================================================================

-rwxrwxr-x 1 oracle oinstall  56540 May 16 22:06 srvmcfg1.log
-rwxrwxr-x 1 oracle oinstall  26836 May 16 22:06 srvmcfg2.log
-rwxrwxr-x 1 oracle oinstall  21059 May 16 22:06 srvmcfg3.log
-rwxrwxr-x 1 oracle oinstall  23032 May 16 22:08 srvmcfg4.log

opatchauto resume - Postpatch operation log file location:
/u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  64381 May 16 22:09 crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================
Prepatch operation log file.
> tail -20 crspatch_node2_2019-05-16_08-21-16PM.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS '
2019-05-16 20:22:04: Removing file /tmp/fileTChFoS
2019-05-16 20:22:04: Successfully removed file: /tmp/fileTChFoS
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed

2019-05-16 20:22:04: checkpoint ROOTCRS_POSTPATCH_OOP_REQSTEPS does not exist
2019-05-16 20:22:04: Done - Performing pre-pathching steps required for GI stack
2019-05-16 20:22:04: Resetting cluutil_trc_suff_pp to 0
2019-05-16 20:22:04: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS"
2019-05-16 20:22:04: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil0.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS '
2019-05-16 20:22:04: Removing file /tmp/fileDoYyQA
2019-05-16 20:22:04: Successfully removed file: /tmp/fileDoYyQA
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed
*** 2019-05-16 20:22:04: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:SUCCESS ***
Mysterious log file – crspatch_node2_2019-05-16_08-27-35PM.log
2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL
2019-05-16 21:42:00: ###### Begin DIE Stack Trace ######
2019-05-16 21:42:00:     Package         File                 Line Calling
2019-05-16 21:42:00:     --------------- -------------------- ---- ----------
2019-05-16 21:42:00:      1: main        rootcrs.pl            267 crsutils::dietrap
2019-05-16 21:42:00:      2: crsutils    crsutils.pm          1631 main::__ANON__
2019-05-16 21:42:00:      3: crsutils    crsutils.pm          1586 crsutils::system_cmd_capture_noprint
2019-05-16 21:42:00:      4: crsutils    crsutils.pm          9098 crsutils::system_cmd_capture
2019-05-16 21:42:00:      5: crspatch    crspatch.pm           988 crsutils::startFullStack
2019-05-16 21:42:00:      6: crspatch    crspatch.pm          1121 crspatch::performPostPatch
2019-05-16 21:42:00:      7: crspatch    crspatch.pm           212 crspatch::crsPostPatch
2019-05-16 21:42:00:      8: main        rootcrs.pl            276 crspatch::new
2019-05-16 21:42:00: ####### End DIE Stack Trace #######
2019-05-16 21:42:00: ROOTCRS_POSTPATCH checkpoint has failed
2019-05-16 21:42:00: ckpt: -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil4.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH '
2019-05-16 21:42:00: Removing file /tmp/filewniUim
2019-05-16 21:42:00: Successfully removed file: /tmp/filewniUim
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil5.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status '
2019-05-16 21:42:00: Removing file /tmp/fileK1Tyw6
2019-05-16 21:42:00: Successfully removed file: /tmp/fileK1Tyw6
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed
2019-05-16 21:42:00: The 'ROOTCRS_POSTPATCH' status is FAILED
2019-05-16 21:42:00: ROOTCRS_POSTPATCH state is FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil6.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL '
2019-05-16 21:42:00: Removing file /tmp/filej20epR
2019-05-16 21:42:00: Successfully removed file: /tmp/filej20epR
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed
2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL '
2019-05-16 21:42:01: Removing file /tmp/filely834C
2019-05-16 21:42:01: Successfully removed file: /tmp/filely834C
2019-05-16 21:42:01: pipe exit code: 0
2019-05-16 21:42:01: /bin/su successfully executed
*** 2019-05-16 21:42:01: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL ***
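These checkpoints are how opatchauto resume knows where to pick up. They can be queried directly with the same cluutil invocations that appear in the log (run as the grid home owner; paths are from this post):

$GRID_HOME/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_PREPATCH -status
$GRID_HOME/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status
$GRID_HOME/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_STACK -status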
Postpatch operation log file.
> tail -20 crspatch_node2_2019-05-16_10-03-17PM.log
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START '
2019-05-16 22:09:59: Removing file /tmp/file0IogVl
2019-05-16 22:09:59: Successfully removed file: /tmp/file0IogVl
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed
2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:START
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil8.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS '
2019-05-16 22:09:59: Removing file /tmp/fileXDCkuM
2019-05-16 22:09:59: Successfully removed file: /tmp/fileXDCkuM
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed
*** 2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:SUCCESS ***
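With ROOTCRS_POSTPATCH written as SUCCESS, a final verification pass (the same commands used earlier in this post) should show upgrade state [NORMAL] and both instances open read only:

crsctl query crs activeversion -f
crsctl stat res -t -w 'TYPE = ora.database.type'
srvctl status database -d dbproddr -v
$GRID_HOME/OPatch/opatch lspatches -oh $GRID_HOME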
Happy patching, and hopefully the upcoming patching of the primary will be just as seamless.