Thinking Out Loud

June 7, 2019

RAC Installation Logs

Filed under: 12c,RAC — mdinh @ 5:24 pm

Note to self on the log locations for a 2-node RAC installation and database creation.
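
As a quick sanity check, the key installer logs can be scanned for success and errors. A minimal sketch (shell, assuming the default oraInventory location shown below):

# Run as oracle on the node where the installer was launched
cd /u01/app/oraInventory/logs
grep -l successful silentInstall*.log      # which installs reported success
grep -ci error installActions*.log         # rough error count per log; review any hits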

Oracle Universal Installer logs for GI/DB:

[oracle@racnode-dc1-1 logs]$ pwd; ls -lhrt
/u01/app/oraInventory/logs
total 2.3M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-13-PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-35-PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.8K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-13-PM.log
-rw-r----- 1 oracle oinstall  153 Jun  7 17:06 oraInstall2019-06-07_04-57-13-PM.out
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
-rw-r----- 1 oracle oinstall   47 Jun  7 17:09 time2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:09 oraInstall2019-06-07_05-09-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:13 oraInstall2019-06-07_05-09-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall   29 Jun  7 17:14 oraInstall2019-06-07_05-09-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:14 AttachHome2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  507 Jun  7 17:14 silentInstall2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  14K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 9.5K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  496 Jun  7 17:15 oraInstall2019-06-07_05-09-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 17:15 installActions2019-06-07_05-09-01PM.log
[oracle@racnode-dc1-1 logs]$

silentInstall*.log

[oracle@racnode-dc1-1 logs]$ grep successful silent*.log

silentInstall2019-06-07_04-39-01PM.log:The installation of Oracle Grid Infrastructure 12c was successful.

silentInstall2019-06-07_05-09-01PM.log:The installation of Oracle Database 12c was successful.

[oracle@racnode-dc1-1 logs]$

installActions*.log

[oracle@racnode-dc1-1 logs]$ grep "Using paramFile" install*.log

installActions2019-06-07_04-39-01PM.log:INFO: Using paramFile: /u01/stage/12.1.0.2/grid/install/oraparam.ini

installActions2019-06-07_05-09-01PM.log:Using paramFile: /u01/stage/12.1.0.2/database/install/oraparam.ini

[oracle@racnode-dc1-1 logs]$

Run root script after installation:
$GRID_HOME/root.sh

[oracle@racnode-dc1-1 install]$ pwd; ls -lhrt root*.log
/u01/app/12.1.0.2/grid/install
-rw------- 1 oracle oinstall 7.4K Jun  7 16:51 root_racnode-dc1-1_2019-06-07_16-44-37.log
[oracle@racnode-dc1-1 install]$
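
The root script must complete on the first node before it is run on the remaining nodes. A minimal sketch of the order (node names and GRID home are the ones used in this post):

# As root, run sequentially: first on racnode-dc1-1, then on racnode-dc1-2
/u01/app/12.1.0.2/grid/root.sh
# Each run writes its own log under $GRID_HOME/install/root_<node>_<timestamp>.log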

Run configToolAllCommands:
$GRID_HOME/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/u01/stage/rsp/configtoolallcommands.rsp

[oracle@racnode-dc1-1 oui]$ pwd; ls -lhrt
/u01/app/12.1.0.2/grid/cfgtoollogs/oui
total 1.2M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r--r-- 1 oracle oinstall    0 Jun  7 16:57 configActions2019-06-07_04-57-10-PM.err
-rw-r--r-- 1 oracle oinstall  13K Jun  7 17:06 configActions2019-06-07_04-57-10-PM.log
-rw------- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw------- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
[oracle@racnode-dc1-1 oui]$
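
A quick way to confirm the configuration assistants ran cleanly is to scan the configActions logs; a sketch using the file names from the listing above:

# Run as oracle on node 1
cd /u01/app/12.1.0.2/grid/cfgtoollogs/oui
cat configActions*.err                      # should be empty
grep -ci fail configActions*.log            # rough check; review any hits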

dbca

[oracle@racnode-dc1-1 dbca]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca
total 116K
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:02 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM.lck
drwxrwxr-x 3 oracle oinstall 4.0K Jun  7 17:02 _mgmtdb
-rwxrwxr-x 1 oracle oinstall 105K Jun  7 17:03 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM
drwxr-x--- 2 oracle oinstall 4.0K Jun  7 17:23 hawk
[oracle@racnode-dc1-1 dbca]$

dbca _mgmtdb

[oracle@racnode-dc1-1 _mgmtdb]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb
total 19M
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 16:58 trace.log.lck
-rwxrwxr-x 1 oracle oinstall  18M Jun  7 16:59 tempControl.ctl
-rwxrwxr-x 1 oracle oinstall  349 Jun  7 16:59 CloneRmanRestore.log
-rwxrwxr-x 1 oracle oinstall  596 Jun  7 16:59 cloneDBCreation.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:00 rmanUtil
-rwxrwxr-x 1 oracle oinstall 2.1K Jun  7 17:00 plugDatabase.log
-rwxrwxr-x 1 oracle oinstall  428 Jun  7 17:01 dbmssml_catcon_12271.lst
-rwxrwxr-x 1 oracle oinstall 3.5K Jun  7 17:01 dbmssml0.log
-rwxrwxr-x 1 oracle oinstall  396 Jun  7 17:01 postScripts.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:01 lockAccount.log
-rwxrwxr-x 1 oracle oinstall  442 Jun  7 17:01 catbundleapply_catcon_12348.lst
-rwxrwxr-x 1 oracle oinstall 3.9K Jun  7 17:01 catbundleapply0.log
-rwxrwxr-x 1 oracle oinstall  424 Jun  7 17:01 utlrp_catcon_12416.lst
-rwxrwxr-x 1 oracle oinstall 9.2K Jun  7 17:02 utlrp0.log
-rwxrwxr-x 1 oracle oinstall  964 Jun  7 17:02 postDBCreation.log
-rwxrwxr-x 1 oracle oinstall  737 Jun  7 17:02 OraGI12Home1__mgmtdb_creation_checkpoint.xml
-rwxrwxr-x 1 oracle oinstall  877 Jun  7 17:02 _mgmtdb.log
-rwxrwxr-x 1 oracle oinstall 1.1M Jun  7 17:02 trace.log
-rwxrwxr-x 1 oracle oinstall 1.3K Jun  7 17:02 DetectOption.log
drwxrwxr-x 2 oracle oinstall 4.0K Jun  7 17:03 vbox_rac_dc1

[oracle@racnode-dc1-1 _mgmtdb]$ tail _mgmtdb.log
Completing Database Creation
DBCA_PROGRESS : 68%
DBCA_PROGRESS : 79%
DBCA_PROGRESS : 89%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb.
Database Information:
Global Database Name:_mgmtdb
System Identifier(SID):-MGMTDB
[oracle@racnode-dc1-1 _mgmtdb]$

dbca hawk

[oracle@racnode-dc1-1 hawk]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/hawk
total 34M
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 trace.log.lck
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 rmanUtil
-rw-r----- 1 oracle oinstall  18M Jun  7 17:17 tempControl.ctl
-rw-r----- 1 oracle oinstall  384 Jun  7 17:17 CloneRmanRestore.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 17:20 cloneDBCreation.log
-rw-r----- 1 oracle oinstall    8 Jun  7 17:20 postScripts.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:21 CreateClustDBViews.log
-rw-r----- 1 oracle oinstall    6 Jun  7 17:21 lockAccount.log
-rw-r----- 1 oracle oinstall 1.3K Jun  7 17:22 postDBCreation.log
-rw-r----- 1 oracle oinstall  511 Jun  7 17:23 OraDB12Home1_hawk_creation_checkpoint.xml
-rw-r----- 1 oracle oinstall  24K Jun  7 17:23 hawk.log
-rw-r----- 1 oracle oinstall  16M Jun  7 17:23 trace.log

[oracle@racnode-dc1-1 hawk]$ tail hawk.log
DBCA_PROGRESS : 73%
DBCA_PROGRESS : 76%
DBCA_PROGRESS : 85%
DBCA_PROGRESS : 94%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/hawk.
Database Information:
Global Database Name:hawk
System Identifier(SID) Prefix:hawk
[oracle@racnode-dc1-1 hawk]$
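
While DBCA runs in silent mode, progress can be followed from the same log directory; a small sketch (paths taken from the listings above):

# Follow DBCA progress for the hawk database
tail -f /u01/app/oracle/cfgtoollogs/dbca/hawk/hawk.log
# Or show just the most recent progress lines
grep DBCA_PROGRESS /u01/app/oracle/cfgtoollogs/dbca/hawk/hawk.log | tail -5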

May 19, 2019

Shocking opatchauto resume works after auto-logout

Filed under: 12c,opatchauto,RAC — mdinh @ 5:36 pm

WARNING: Please don't try this at home or in a production environment.

That being said, this patching was for DR production.

Oracle Interim Patch Installer version 12.2.0.1.16

Patching a 2-node RAC cluster; node1 completed successfully.

The rationale for using -norestart is that there was an issue at one time where datapatch was applied on node1.

Active Data Guard is not implemented, and the database start option is mount:

# crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE))'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      2        ONLINE  INTERMEDIATE node2              Mounted (Closed),STABLE
ora.dbproddr.dbdr.svc
      2        ONLINE  OFFLINE                                          STABLE
--------------------------------------------------------------------------------

$ srvctl status database -d dbproddr -v
Instance dbproddr1 is running on node node1 with online services dbdr. Instance status: Open,Readonly.
Instance dbproddr2 is running on node node2. Instance status: Mounted (Closed).

Run opatchauto; Ctrl-C was pressed because the session appeared stuck.

node2 ~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019

node2 ~ # $GRID_HOME/OPatch/opatchauto apply $PATCH_TOP_DIR/28833531 -norestart

OPatchauto session is initiated at Thu May 16 20:20:24 2019

System initialization log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-05-16_08-20-26PM.log.

Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_08-20-47PM.log
The id for this session is K43Y

Executing OPatch prereq operations to verify patch applicability on home /u02/app/12.1.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.1.0/db
Patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db

Patch applicability verified successfully on home /u02/app/12.1.0/grid


Verifying SQL patch applicability on home /u01/app/oracle/product/12.1.0/db
"/bin/sh -c 'cd /u01/app/oracle/product/12.1.0/db; ORACLE_HOME=/u01/app/oracle/product/12.1.0/db ORACLE_SID=dbproddr2 /u01/app/oracle/product/12.1.0/db/OPatch/datapatch -prereq -verbose'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be analyzed by manually retrying the same command.

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db


Preparing to bring down database service on home /u01/app/oracle/product/12.1.0/db
Successfully prepared home /u01/app/oracle/product/12.1.0/db to bring down database service


Bringing down CRS service on home /u02/app/12.1.0/grid
Prepatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
CRS service brought down successfully on home /u02/app/12.1.0/grid


Performing prepatch operation on home /u01/app/oracle/product/12.1.0/db
Prepatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db


Start applying binary patch on home /u01/app/oracle/product/12.1.0/db
Binary patch applied successfully on home /u01/app/oracle/product/12.1.0/db


Performing postpatch operation on home /u01/app/oracle/product/12.1.0/db
Postpatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db


Start applying binary patch on home /u02/app/12.1.0/grid

Binary patch applied successfully on home /u02/app/12.1.0/grid


Starting CRS service on home /u02/app/12.1.0/grid





*** Ctrl-C as shown below ***
^C
OPatchauto session completed at Thu May 16 21:41:58 2019
*** Time taken to complete the session 81 minutes, 34 seconds ***

opatchauto failed with error code 130

This is not good: the session then disconnected (auto-logout) while I was troubleshooting from another session.

node2 ~ # timed out waiting for input: auto-logout

Even though the opatchauto session was terminated, the cluster upgrade state is [NORMAL] rather than [ROLLING PATCH].

node2 ~ # crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [323461694].

node2 ~ # crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE))'
node2 ~ # crsctl stat res -t -w 'TYPE = ora.database.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      1        ONLINE  ONLINE       node1              Open,Readonly,STABLE
      2        ONLINE  ONLINE       node2              Open,Readonly,STABLE
--------------------------------------------------------------------------------
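
Since the state looked clean, a few patch-level cross-checks were worth running; a minimal sketch using commands that appear elsewhere in this post (GI environment sourced as shown further down):

# As root or oracle with the GI environment set
$GRID_HOME/bin/crsctl query crs softwarepatch
$GRID_HOME/bin/crsctl query crs activeversion -f
$GRID_HOME/OPatch/opatch lspatches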

At this point, I was not sure what to do since everything looked good and online.

The colleague helping me with troubleshooting stated that the patch completed successfully; the main question was whether we needed to try "opatchauto resume".

However, I was not comfortable with the outcome, so I tried opatchauto resume, and it worked like magic.

Reconnect and opatchauto resume

mdinh@node2 ~ $ sudo su - 
~ # . /home/oracle/working/dinh/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM4
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u02/app/12.1.0/grid
ORACLE_HOME=/u02/app/12.1.0/grid
Oracle Instance alive for sid "+ASM4"
~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019/
~ # $GRID_HOME/OPatch/opatchauto resume

OPatchauto session is initiated at Thu May 16 22:03:09 2019
Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_10-03-10PM.log
Resuming existing session with id K43Y

Starting CRS service on home /u02/app/12.1.0/grid
Postpatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
CRS service started successfully on home /u02/app/12.1.0/grid


Preparing home /u01/app/oracle/product/12.1.0/db after database service restarted

OPatchauto is running in norestart mode. PDB instances will not be checked for database on the current node.
No step execution required.........
 

Trying to apply SQL patch on home /u01/app/oracle/product/12.1.0/db
SQL patch applied successfully on home /u01/app/oracle/product/12.1.0/db

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:node2
RAC Home:/u01/app/oracle/product/12.1.0/db
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/patches/Jan2019/28833531/28729220
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log


Host:node2
CRS Home:/u02/app/12.1.0/grid
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28729220
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log


Patching session reported following warning(s): 
_________________________________________________

[WARNING] The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host'node2' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.

[WARNING] The database instances will not be brought up under the 'norestart' option. The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host'node2' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.


OPatchauto session completed at Thu May 16 22:10:01 2019
Time taken to complete the session 6 minutes, 52 seconds
~ # 

Logs:

oracle@node2:/u02/app/12.1.0/grid/cfgtoollogs/crsconfig
> ls -alrt
total 508
drwxr-x--- 2 oracle oinstall   4096 Nov 23 02:15 oracle
-rwxrwxr-x 1 oracle oinstall 167579 Nov 23 02:15 rootcrs_node2_2018-11-23_02-07-58AM.log
drwxrwxr-x 9 oracle oinstall   4096 Apr 10 12:05 ..

opatchauto apply - Prepatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  33020 May 16 20:22 crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================

Mysterious log file - it is unknown where this log came from, because it was not part of my terminal output.
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  86983 May 16 21:42 crspatch_node2_2019-05-16_08-27-35PM.log
====================================================================================================

-rwxrwxr-x 1 oracle oinstall  56540 May 16 22:06 srvmcfg1.log
-rwxrwxr-x 1 oracle oinstall  26836 May 16 22:06 srvmcfg2.log
-rwxrwxr-x 1 oracle oinstall  21059 May 16 22:06 srvmcfg3.log
-rwxrwxr-x 1 oracle oinstall  23032 May 16 22:08 srvmcfg4.log

opatchauto resume - Postpatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  64381 May 16 22:09 crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================

Prepatch operation log file.

> tail -20 crspatch_node2_2019-05-16_08-21-16PM.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS '
2019-05-16 20:22:04: Removing file /tmp/fileTChFoS
2019-05-16 20:22:04: Successfully removed file: /tmp/fileTChFoS
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed

2019-05-16 20:22:04: checkpoint ROOTCRS_POSTPATCH_OOP_REQSTEPS does not exist
2019-05-16 20:22:04: Done - Performing pre-pathching steps required for GI stack
2019-05-16 20:22:04: Resetting cluutil_trc_suff_pp to 0
2019-05-16 20:22:04: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS"
2019-05-16 20:22:04: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil0.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS '
2019-05-16 20:22:04: Removing file /tmp/fileDoYyQA
2019-05-16 20:22:04: Successfully removed file: /tmp/fileDoYyQA
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed

*** 2019-05-16 20:22:04: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:SUCCESS ***

Mysterious log file – crspatch_node2_2019-05-16_08-27-35PM.log

2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL
2019-05-16 21:42:00: ###### Begin DIE Stack Trace ######
2019-05-16 21:42:00:     Package         File                 Line Calling   
2019-05-16 21:42:00:     --------------- -------------------- ---- ----------
2019-05-16 21:42:00:  1: main            rootcrs.pl            267 crsutils::dietrap
2019-05-16 21:42:00:  2: crsutils        crsutils.pm          1631 main::__ANON__
2019-05-16 21:42:00:  3: crsutils        crsutils.pm          1586 crsutils::system_cmd_capture_noprint
2019-05-16 21:42:00:  4: crsutils        crsutils.pm          9098 crsutils::system_cmd_capture
2019-05-16 21:42:00:  5: crspatch        crspatch.pm           988 crsutils::startFullStack
2019-05-16 21:42:00:  6: crspatch        crspatch.pm          1121 crspatch::performPostPatch
2019-05-16 21:42:00:  7: crspatch        crspatch.pm           212 crspatch::crsPostPatch
2019-05-16 21:42:00:  8: main            rootcrs.pl            276 crspatch::new
2019-05-16 21:42:00: ####### End DIE Stack Trace #######

2019-05-16 21:42:00: ROOTCRS_POSTPATCH checkpoint has failed
2019-05-16 21:42:00:      ckpt: -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil4.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH '
2019-05-16 21:42:00: Removing file /tmp/filewniUim
2019-05-16 21:42:00: Successfully removed file: /tmp/filewniUim
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil5.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status '
2019-05-16 21:42:00: Removing file /tmp/fileK1Tyw6
2019-05-16 21:42:00: Successfully removed file: /tmp/fileK1Tyw6
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: The 'ROOTCRS_POSTPATCH' status is FAILED
2019-05-16 21:42:00: ROOTCRS_POSTPATCH state is FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil6.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL '
2019-05-16 21:42:00: Removing file /tmp/filej20epR
2019-05-16 21:42:00: Successfully removed file: /tmp/filej20epR
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL '
2019-05-16 21:42:01: Removing file /tmp/filely834C
2019-05-16 21:42:01: Successfully removed file: /tmp/filely834C
2019-05-16 21:42:01: pipe exit code: 0
2019-05-16 21:42:01: /bin/su successfully executed

*** 2019-05-16 21:42:01: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL ***

Postpatch operation log file.

> tail -20 crspatch_node2_2019-05-16_10-03-17PM.log
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START '
2019-05-16 22:09:59: Removing file /tmp/file0IogVl
2019-05-16 22:09:59: Successfully removed file: /tmp/file0IogVl
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed

2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:START
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil8.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS '
2019-05-16 22:09:59: Removing file /tmp/fileXDCkuM
2019-05-16 22:09:59: Successfully removed file: /tmp/fileXDCkuM
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed

*** 2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:SUCCESS ***

Happy patching, and hopefully the upcoming patching of the primary will be seamless.

May 7, 2019

Remove GRID Home After Upgrade

Filed under: 12c,Grid Infrastructure,RAC — mdinh @ 9:53 pm

The environment started with a GRID 12.1.0.1 installation, upgraded to 18.3.0.0, and performed out-of-place patching (OOP) to 18.6.0.0.

As a result, there are three GRID homes, and the 12.1.0.1 home will be removed.

This demonstration is for the last node of the cluster; however, the actions performed are the same for all nodes.

Review existing patch for Grid and Database homes:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/lspatches.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$

Notice that the GRID home is /u01/18.3.0.0/grid_2 because this was the location suggested by the OOP process.
Based on experience, it might be better to name the GRID home after the actual version, i.e. /u01/18.6.0.0/grid.

Verify cluster state is [NORMAL]:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/crs_Query.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2056778364].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2056778364] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 29301631 29301643 29302264 ] have been applied on the local node. The release patch string is [18.6.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2056778364].
+ exit
[oracle@racnode-dc1-1 ~]$

Check Oracle Inventory:

[oracle@racnode-dc1-2 ~]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>

### GRID home (/u01/app/12.1.0.1/grid) to be removed.
========================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
========================================================================================

<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove the GRID home (/u01/app/12.1.0.1/grid). Use the -local flag to avoid any bug issues.

[oracle@racnode-dc1-2 ~]$ export ORACLE_HOME=/u01/app/12.1.0.1/grid
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16040 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
[oracle@racnode-dc1-2 ~]$

Verify GRID home was removed:

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>

### GRID home (/u01/app/12.1.0.1/grid) removed.
================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1" REMOVED="T"/>
================================================================================

</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove 12.1.0.1 directory:

[oracle@racnode-dc1-2 ~]$ sudo su -
Last login: Thu May  2 23:38:22 CEST 2019
[root@racnode-dc1-2 ~]# cd /u01/app/
[root@racnode-dc1-2 app]# ll
total 12
drwxr-xr-x  3 root   oinstall 4096 Apr 17 23:36 12.1.0.1
drwxrwxr-x 12 oracle oinstall 4096 Apr 30 18:05 oracle
drwxrwx---  5 oracle oinstall 4096 May  2 23:54 oraInventory
[root@racnode-dc1-2 app]# rm -rf 12.1.0.1/
[root@racnode-dc1-2 app]#

Check the cluster:

[root@racnode-dc1-2 app]# logout
[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racnode-dc1-2 ~]$

Later, /u01/18.3.0.0/grid will be removed, too, if there are no issues with the most recent patch.
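
When that time comes, the same procedure applies; a sketch for the old 18.3.0.0 home (detach per node with -local, then remove the directory as root):

# As oracle, on each node
export ORACLE_HOME=/u01/18.3.0.0/grid
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
# Then as root, after verifying inventory.xml shows REMOVED="T" for the home
rm -rf /u01/18.3.0.0/grid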

March 24, 2019

Playing with ACFS

Filed under: 12c,ACFS — mdinh @ 4:50 pm

The running kernel version is 4.1.12-94.3.9.el7uek.x86_64, while ACFS-9325 reports Driver OS kernel version = 4.1.12-32.el7uek.x86_64, because the kernel was upgraded and ACFS has not been reconfigured since the kernel upgrade.
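
To bring the drivers in line with the running kernel, the ADVM/ACFS drivers can be reinstalled for the new kernel. A hedged sketch (run as root, one node at a time, with the GI environment set; a reboot may be needed depending on driver state):

# Stop the stack on this node, reinstall the drivers for the running kernel, restart
crsctl stop crs
$GRID_HOME/bin/acfsroot install
crsctl start crs
acfsdriverstate version     # should now report the running kernel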

[root@racnode-dc1-1 ~]# uname -r
4.1.12-94.3.9.el7uek.x86_64

[root@racnode-dc1-1 ~]# lsmod | grep oracle
oracleacfs           3719168  2
oracleadvm            606208  7
oracleoks             516096  2 oracleacfs,oracleadvm
oracleasm              57344  1

[root@racnode-dc1-1 ~]# modinfo oracleoks
filename:       /lib/modules/4.1.12-94.3.9.el7uek.x86_64/weak-updates/usm/oracleoks.ko
author:         Oracle Corporation
license:        Proprietary
srcversion:     3B8116031A3907D0FFFC8E1
depends:
vermagic:       4.1.12-32.el7uek.x86_64 SMP mod_unload modversions
signer:         Oracle Linux Kernel Module Signing Key
sig_key:        2B:B3:52:41:29:69:A3:65:3F:0E:B6:02:17:63:40:8E:BB:9B:B5:AB
sig_hashalgo:   sha512

[root@racnode-dc1-1 ~]# acfsdriverstate version
ACFS-9325:     Driver OS kernel version = 4.1.12-32.el7uek.x86_64(x86_64).
ACFS-9326:     Driver Oracle version = 181010.

[root@racnode-dc1-1 ~]# acfsdriverstate installed
ACFS-9203: true

[root@racnode-dc1-1 ~]# acfsdriverstate supported
ACFS-9200: Supported

[root@racnode-dc1-1 ~]# acfsroot version_check
ACFS-9316: Valid ADVM/ACFS distribution media detected at: '/u01/app/12.1.0.1/grid/usm/install/Oracle/EL7UEK/x86_64/4.1.12/4.1.12-x86_64/bin'

[root@racnode-dc1-1 ~]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# acfsutil registry
Mount Object:
  Device: /dev/asm/acfs_vol-256
  Mount Point: /ggdata02
  Disk Group: DATA
  Volume: ACFS_VOL
  Options: none
  Nodes: all
[root@racnode-dc1-1 ~]# acfsutil info fs
/ggdata02
    ACFS Version: 12.1.0.2.0
    on-disk version:       39.0
    flags:        MountPoint,Available
    mount time:   Mon Mar 25 16:24:58 2019
    allocation unit:       4096
    volumes:      1
    total size:   10737418240  (  10.00 GB )
    total free:   10569035776  (   9.84 GB )
    file entry table allocation: 49152
    primary volume: /dev/asm/acfs_vol-256
        label:
        state:                 Available
        major, minor:          248, 131073
        size:                  10737418240  (  10.00 GB )
        free:                  10569035776  (   9.84 GB )
        metadata read I/O count:         1087
        metadata write I/O count:        11
        total metadata bytes read:       556544  ( 543.50 KB )
        total metadata bytes written:    12800  (  12.50 KB )
        ADVM diskgroup         DATA
        ADVM resize increment: 536870912
        ADVM redundancy:       unprotected
        ADVM stripe columns:   8
        ADVM stripe width:     1048576
    number of snapshots:  0
    snapshot space usage: 0  ( 0.00 )
    replication status: DISABLED
[root@racnode-dc1-1 ~]#

[oracle@racnode-dc1-1 ~]$ cluvfy comp acfs -n all -f /ggdata02 -verbose

Verifying ACFS Integrity
Task ASM Integrity check started...


Starting check to see if ASM is running on all cluster nodes...

ASM Running check passed. ASM is running on all specified nodes

Confirming that at least one ASM disk group is configured...
Disk Group Check passed. At least one Disk Group configured

Task ASM Integrity check passed...

Task ACFS Integrity check started...

Checking shared storage accessibility...

"/ggdata02" is shared


Shared storage check was successful on nodes "racnode-dc1-1,racnode-dc1-2"

Task ACFS Integrity check passed

UDev attributes check for ACFS started...
Result: UDev attributes check passed for ACFS


Verification of ACFS Integrity was successful.
[oracle@racnode-dc1-1 ~]$

Gather ACFS Volume Info:

[oracle@racnode-dc1-1 ~]$ asmcmd volinfo --all

Diskgroup Name: DATA

         Volume Name: ACFS_VOL
         Volume Device: /dev/asm/acfs_vol-256
         State: ENABLED
         Size (MB): 10240
         Resize Unit (MB): 512
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /ggdata02

Gather ACFS info using resource name:

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init

NAME=ora.drivers.acfs
TYPE=ora.drivers.acfs.type
TARGET=ONLINE
STATE=ONLINE on racnode-dc1-1

From (asmcmd volinfo --all): Diskgroup Name: DATA

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.dg -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

From (asmcmd volinfo --all): Diskgroup Name: DATA and Volume Name: ACFS_VOL

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.ACFS_VOL.advm -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.ACFS_VOL.advm
               ONLINE  ONLINE       racnode-dc1-1            Volume device /dev/asm/acfs_vol-256 
                                                             is online,STABLE
               ONLINE  ONLINE       racnode-dc1-2            Volume device /dev/asm/acfs_vol-256 
                                                             is online,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.ACFS_VOL.acfs -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.data.acfs_vol.acfs
               ONLINE  ONLINE       racnode-dc1-1            mounted on /ggdata02,STABLE
               ONLINE  ONLINE       racnode-dc1-2            mounted on /ggdata02,STABLE
--------------------------------------------------------------------------------

Gather ACFS info using resource type:

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.volume.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.ACFS_VOL.advm
               ONLINE  ONLINE       racnode-dc1-1            Volume device /dev/asm/acfs_vol-256 
			                                                 is online,STABLE
               ONLINE  ONLINE       racnode-dc1-2            Volume device /dev/asm/acfs_vol-256 
			                                                 is online,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.acfs.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.data.acfs_vol.acfs
               ONLINE  ONLINE       racnode-dc1-1            mounted on /ggdata02,STABLE
               ONLINE  ONLINE       racnode-dc1-2            mounted on /ggdata02,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.diskgroup.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

March 16, 2019

Playing with oracleasm and ASMLib

Filed under: 12c,ASM,awk_sed_grep — mdinh @ 12:02 am

I forgot about a script I wrote some time ago: Be Friend With awk/sed | ASM Mapping

[root@racnode-dc1-1 ~]# cat /sf_working/scripts/asm_mapping.sh
#!/bin/sh -e
for disk in `/etc/init.d/oracleasm listdisks`
do
oracleasm querydisk -d $disk
#ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
# Alternate option to remove []
ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed 's/[][]//g'|awk -F, '{print $1 ",.*" $2}'`
echo
done

[root@racnode-dc1-1 ~]# /sf_working/scripts/asm_mapping.sh
Disk "CRS01" is a valid ASM disk on device [8,33]
brw-rw---- 1 root    disk      8,  33 Mar 16 10:25 /dev/sdc1

Disk "DATA01" is a valid ASM disk on device [8,49]
brw-rw---- 1 root    disk      8,  49 Mar 16 10:25 /dev/sdd1

Disk "FRA01" is a valid ASM disk on device [8,65]
brw-rw---- 1 root    disk      8,  65 Mar 16 10:25 /dev/sde1

[root@racnode-dc1-1 ~]#

HOWTO: Which Disks Are Handled by ASMLib Kernel Driver? (Doc ID 313387.1)

[root@racnode-dc1-1 ~]# oracleasm listdisks
CRS01
DATA01
FRA01

[root@racnode-dc1-1 dev]# ls -l /dev/oracleasm/disks
total 0
brw-rw---- 1 oracle dba 8, 33 Mar 15 10:46 CRS01
brw-rw---- 1 oracle dba 8, 49 Mar 15 10:46 DATA01
brw-rw---- 1 oracle dba 8, 65 Mar 15 10:46 FRA01

[root@racnode-dc1-1 dev]# ls -l /dev | grep -E '33|49|65'|grep -E '8'
brw-rw---- 1 root    disk      8,  33 Mar 15 23:47 sdc1
brw-rw---- 1 root    disk      8,  49 Mar 15 23:47 sdd1
brw-rw---- 1 root    disk      8,  65 Mar 15 23:47 sde1

[root@racnode-dc1-1 dev]# /sbin/blkid | grep oracleasm
/dev/sde1: LABEL="FRA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="205115d9-730d-4f64-aedd-d3886e73d123"
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"
/dev/sdc1: LABEL="CRS01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="232e214d-07bb-4f36-aba8-fb215437fb7e"
[root@racnode-dc1-1 dev]#
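
Alternatively, the same label-to-device mapping can be derived by comparing device major/minor numbers directly; a minimal sketch (assumes the standard /dev/oracleasm/disks location):

# For each ASMLib disk, find the /dev/sd* device with matching major/minor numbers
for d in /dev/oracleasm/disks/*; do
  mm=$(stat -c '%t %T' "$d")
  for b in /dev/sd*; do
    [ "$(stat -c '%t %T' "$b")" = "$mm" ] && echo "$(basename "$d") -> $b"
  done
done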

Various commands to retrieve oracleasm info and more.

[root@racnode-dc1-1 ~]# cat /etc/oracle-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# cat /etc/system-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# uname -r
4.1.12-61.1.18.el7uek.x86_64

[root@racnode-dc1-1 ~]# rpm -q oracleasm-`uname -r`
package oracleasm-4.1.12-61.1.18.el7uek.x86_64 is not installed

[root@racnode-dc1-1 ~]# rpm -qa |grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# oracleasm -V
oracleasm version 2.1.9

[root@racnode-dc1-1 ~]# oracleasm -h
Usage: oracleasm [--exec-path=<exec_path>] <command> [ <args> ]
       oracleasm --exec-path
       oracleasm -h
       oracleasm -V

The basic oracleasm commands are:
    configure        Configure the Oracle Linux ASMLib driver
    init             Load and initialize the ASMLib driver
    exit             Stop the ASMLib driver
    scandisks        Scan the system for Oracle ASMLib disks
    status           Display the status of the Oracle ASMLib driver
    listdisks        List known Oracle ASMLib disks
    querydisk        Determine if a disk belongs to Oracle ASMlib
    createdisk       Allocate a device for Oracle ASMLib use
    deletedisk       Return a device to the operating system
    renamedisk       Change the label of an Oracle ASMlib disk
    update-driver    Download the latest ASMLib driver

[root@racnode-dc1-1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done

[root@racnode-dc1-1 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=dba
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

[root@racnode-dc1-1 ~]# cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=dba

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=""

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block size
# reported by the underlying disk instead of the physical. The default
# is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false

[root@racnode-dc1-1 ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

[root@racnode-dc1-1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@racnode-dc1-1 ~]# oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk on device [8,49]

[root@racnode-dc1-1 ~]# oracleasm querydisk -p DATA01
Disk "DATA01" is a valid ASM disk
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"

[root@racnode-dc1-1 ~]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:CRS01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:DATA01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:FRA01 [104853504 blocks (53684994048 bytes), maxio 1024]

[root@racnode-dc1-1 ~]# lsmod | grep oracleasm
oracleasm              57344  1

[root@racnode-dc1-1 ~]# modinfo oracleasm
filename:       /lib/modules/4.1.12-61.1.18.el7uek.x86_64/kernel/drivers/block/oracleasm/oracleasm.ko
description:    Kernel driver backing the Generic Linux ASM Library.
author:         Joel Becker, Martin K. Petersen <martin.petersen@oracle.com>
version:        2.0.8
license:        GPL
srcversion:     4B3524FDA590726E8D378CB
depends:
intree:         Y
vermagic:       4.1.12-61.1.18.el7uek.x86_64 SMP mod_unload modversions
signer:         Oracle CA Server
sig_key:        AC:74:F5:41:96:B5:9D:EB:61:BA:02:F9:C2:02:8C:9C:E5:94:53:06
sig_hashalgo:   sha512
parm:           use_logical_block_size:Prefer logical block size over physical (Y=logical, N=physical [default]) (bool)

[root@racnode-dc1-1 ~]# ls -la /etc/sysconfig/oracleasm
lrwxrwxrwx 1 root root 24 Mar  5 20:21 /etc/sysconfig/oracleasm -> oracleasm-_dev_oracleasm

[root@racnode-dc1-1 ~]# rpm -qa | grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# rpm -qi oracleasmlib-2.0.4-1.el6.x86_64
Name        : oracleasmlib
Version     : 2.0.4
Release     : 1.el6
Architecture: x86_64
Install Date: Tue 18 Apr 2017 10:56:40 AM CEST
Group       : System Environment/Kernel
Size        : 27192
License     : Oracle Corporation
Signature   : RSA/SHA256, Mon 26 Mar 2012 10:22:51 PM CEST, Key ID 72f97b74ec551f03
Source RPM  : oracleasmlib-2.0.4-1.el6.src.rpm
Build Date  : Mon 26 Mar 2012 10:22:44 PM CEST
Build Host  : ca-build44.us.oracle.com
Relocations : (not relocatable)
Packager    : Joel Becker <joel.becker@oracle.com>
Vendor      : Oracle Corporation
URL         : http://oss.oracle.com/
Summary     : The Oracle Automatic Storage Management library userspace code.
Description :
The Oracle userspace library for Oracle Automatic Storage Management
[root@racnode-dc1-1 ~]#

References for ASMLib

Do you need asmlib?

Oracleasmlib Not Necessary

March 4, 2019

Thank You ALL

Filed under: 12c,BUG — mdinh @ 1:43 pm

Oracle is like a box of chocolates; you never know what you are going to get. (Reference: Forrest Gump movie)

After spending countless hours over the weekend, I am reminded of the quote, "Curiosity killed the cat, but satisfaction brought it back."

Basically, I have been unsuccessful in rebuilding a 12.1.0.1 RAC VM to test and validate another upgrade BUG.

The finding looks to match – root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3

Thank you to all who share their experiences !!!

==============================================================================================================
+++ FAILED STEP: TASK [oraswgi-install : Run root script after installation (Other Nodes)] ******
==============================================================================================================

Line 771: failed: [racnode-dc1-2] /u01/app/12.1.0.1/grid/root.sh", ["Check /u01/app/12.1.0.1/grid/install/root_racnode-dc1-2_2019-03-04_05-17-39.log for the output of root script"]
TASK [oraswgi-install : Run root script after installation (Other Nodes)] ******


[oracle@racnode-dc1-2 ~]$ cat /u01/app/12.1.0.1/grid/install/root_racnode-dc1-2_2019-03-04_05-17-39.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.1/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
2019/03/04 05:17:39 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2019/03/04 05:18:06 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2019/03/04 05:18:07 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2019/03/04 05:18:44 CLSRSC-507: The root script cannot proceed on this node racnode-dc1-2 because either the first-node operations have not completed on node racnode-dc1-1 or there was an error in obtaining the status of the first-node operations.
Died at /u01/app/12.1.0.1/grid/crs/install/crsutils.pm line 3681.
The command '/u01/app/12.1.0.1/grid/perl/bin/perl -I/u01/app/12.1.0.1/grid/perl/lib -I/u01/app/12.1.0.1/grid/crs/install /u01/app/12.1.0.1/grid/crs/install/rootcrs.pl ' execution failed
[oracle@racnode-dc1-2 ~]$

[oracle@racnode-dc1-2 ~]$ tail /etc/oracle-release
Oracle Linux Server release 7.3
[oracle@racnode-dc1-2 ~]$

[root@racnode-dc1-1 ~]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
[root@racnode-dc1-1 ~]#

==============================================================================================================
+++ CLSRSC-507: The root script cannot proceed on this node NODE2 because either the first-node operations have not completed on node NODE1 or there was an error in obtaining the status of the first-node operations.
==============================================================================================================

https://community.oracle.com/docs/DOC-1011444


### Problem Summary 
--------------------------------------------------- 
root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3

### Problem Description 
--------------------------------------------------- 
root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3 
OLR initialization - successful 
2017/02/23 05:28:25 CLSRSC-507: The root script cannot proceed on this node NODE2 because either the first-node operations have not completed on node NODE1 or there was an error in obtaining the status of the first-node operations.

Died at /u01/app/12.1.0.2/grid/crs/install/crsutils.pm line 3681. 
The command '/u01/app/12.1.0.2/grid/perl/bin/perl -I/u01/app/12.1.0.2/grid/perl/lib -I/u01/app/12.1.0.2/grid/crs/install /u01/app/12.1.0.2/grid/crs/install/roo

### Error Codes 
--------------------------------------------------- 
CLSRSC-507

### Problem Category/Subcategory 
--------------------------------------------------- 
Database RAC / Grid Infrastructure / Clusterware/Install / Upgrade / Patching issues


### Solution 
---------------------------------------------------

1. Download latest JAN 2017 PSU 12.1.0.2.170117 (Jan 2017) Grid Infrastructure Patch Set Update (GI PSU) - 24917825

https://updates.oracle.com/download/24917825.html 

Platform or Language Linux86-64

2. Unzip downloaded patch as GRID user to directory

unzip p24917825_121020_Linux-x86-64.zip -d 

3. Run deconfig on both nodes

In the 2nd node as root user, 

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

In the 1st node as root user, 
/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force -lastnode

4. Once the deconfig is completed, move forward with applying the patch to the GRID home on both nodes

5. Move to the unzipped patch directory and apply the patch manually using opatch

In 1st node, as grid user

cd /24917825/24732082 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828633 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828643 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/21436941 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

In 2nd node, as grid user

cd /24917825/24732082 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828633 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828643 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/21436941 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

6. Once the patch apply is completed on both nodes, move forward with invoking config.sh

/u01/app/12.1.0.2/grid/crs/config/config.sh

or run root.sh directly on node1 and node2

/u01/app/12.1.0.2/grid/root.sh
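
Before invoking config.sh or root.sh in step 6, it does not hurt to confirm that the PSU actually landed in the Grid home on each node (a minimal check, run as the grid owner):

# list interim patches registered in the Grid home
export ORACLE_HOME=/u01/app/12.1.0.2/grid
$ORACLE_HOME/OPatch/opatch lspatches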

February 28, 2019

ORA-17503: ksfdopn:10 Failed to open spfile

Filed under: 12c,oracle — mdinh @ 7:21 pm
[oracle@racnode-dc1-2 dbs]$ srvctl start database -d hawk
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/HAWK/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/HAWK/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/HAWK/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/HAWK/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy

Starting the instance from sqlplus did not help either, and why should it even matter?

[oracle@racnode-dc1-2 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Feb 28 18:11:58 2019

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

18:11:59 SYS @ hawk2:>startup;
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/HAWK/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/HAWK/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
18:12:01 SYS @ hawk2:>exit
Disconnected
[oracle@racnode-dc1-2 dbs]$

SRVCTL Fails to Start Instance with ORA-17503 ORA-27303 But sqlplus Startup is Fine [1322959.1]

Even though the situation did not exactly match the support note, the solution it provided did work.
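
Before applying it, the setuid/setgid bits on the oracle binaries can be compared across both nodes in one go (a minimal sketch, assuming passwordless ssh between the nodes and the paths used in this environment):

# compare permissions of the GI and DB oracle binaries on both nodes
for h in racnode-dc1-1 racnode-dc1-2; do
  echo "### $h"
  ssh $h "ls -l /u01/app/12.1.0.1/grid/bin/oracle /u01/app/oracle/12.1.0.1/db1/bin/oracle"
done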

[oracle@racnode-dc1-2 dbs]$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54318(asmdba),54322(dba),54323(backupdba),54324(oper),54325(dgdba),54326(kmdba)

[oracle@racnode-dc1-2 dbs]$ ls -l $GRID_HOME/bin/oracle
-rwxrwxr-x 1 oracle oinstall 292020952 Feb 27 22:57 /u01/app/12.1.0.1/grid/bin/oracle

[oracle@racnode-dc1-2 dbs]$ chmod 6751 $GRID_HOME/bin/oracle
[oracle@racnode-dc1-2 dbs]$ ls -l $GRID_HOME/bin/oracle
-rwsr-s--x 1 oracle oinstall 292020952 Feb 27 22:57 /u01/app/12.1.0.1/grid/bin/oracle

[oracle@racnode-dc1-2 dbs]$ ls -l $ORACLE_HOME/bin/oracle
-rwxrwsr-x 1 oracle dba 324409192 Feb 27 22:51 /u01/app/oracle/12.1.0.1/db1/bin/oracle
[oracle@racnode-dc1-2 dbs]$

==========================================================================================

[oracle@racnode-dc1-1 dbs]$ ls -l $GRID_HOME/bin/oracle
-rwxrwxr-x 1 oracle oinstall 292020952 Feb 27 21:41 /u01/app/12.1.0.1/grid/bin/oracle

[oracle@racnode-dc1-1 dbs]$ chmod 6751 $GRID_HOME/bin/oracle
[oracle@racnode-dc1-1 dbs]$ ls -l $GRID_HOME/bin/oracle
-rwsr-s--x 1 oracle oinstall 292020952 Feb 27 21:41 /u01/app/12.1.0.1/grid/bin/oracle

[oracle@racnode-dc1-1 dbs]$ ls -l $ORACLE_HOME/bin/oracle
-rwxrwsr-x 1 oracle dba 324409192 Feb 27 21:35 /u01/app/oracle/12.1.0.1/db1/bin/oracle
[oracle@racnode-dc1-1 dbs]$

[oracle@racnode-dc1-1 dbs]$ srvctl start database -d hawk
[oracle@racnode-dc1-1 dbs]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.
[oracle@racnode-dc1-1 dbs]$

[oracle@racnode-dc1-1 dbs]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.asm
               ONLINE  ONLINE       racnode-dc1-1            Started,STABLE
               ONLINE  ONLINE       racnode-dc1-2            Started,STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-1            169.254.203.248 172.
                                                             16.9.10,STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc1-1            Open,STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-1            Open,STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-1 dbs]$

February 23, 2019

Sed’ing Through ora.cvu Hell

Filed under: 12c,awk_sed_grep,Grid Infrastructure — mdinh @ 12:02 pm

Don’t know why I always look for trouble.

The trouble found was that CHECK_RESULTS from ora.cvu.type reported many issues which look to be bug related.

Here is the RAC environment from the VM.

[oracle@racnode-dc1-1 ~]$ cat /etc/system-release
Oracle Linux Server release 7.3
[oracle@racnode-dc1-1 ~]$

[oracle@racnode-dc1-1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [0] and no patches have been applied on the local node.

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [0].

[oracle@racnode-dc1-1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
racnode-dc1-2
racnode-dc1-1
PRVF-5415 : Check to see if NTP daemon or service is running failed
PRVF-7573 : Sufficient swap size is not available on node "racnode-dc1-2" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB (2097148.0KB)]
PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm
PRCW-1015 : Wallet hawk does not exist.
CLSW-9: The cluster wallet to be operated on does not exist. :[1015]
PRVF-7573 : Sufficient swap size is not available on node "racnode-dc1-1" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB (2097148.0KB)]
PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm
PRCW-1015 : Wallet hawk does not exist.
CLSW-9: The cluster wallet to be operated on does not exist. :[1015]
[oracle@racnode-dc1-1 ~]$

BUGS?

Linux OL7/RHEL7: PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm (Doc ID 2065603.1)

Bug 24696235 – cvu check results shows errors PRCW-1015 and CLSW-9 (Doc ID 24696235.8)

[root@racnode-dc1-1 ~]# ocrdump
[root@racnode-dc1-1 ~]# cat OCRDUMPFILE |grep -i SYSTEM.WALLET
[SYSTEM.WALLET]
[SYSTEM.WALLET.APPQOSADMIN]
[SYSTEM.WALLET.MGMTDB]
[root@racnode-dc1-1 ~]#

There is indeed no wallet for database hawk. But if the wallet is created, will it only result in another bug?

cluvfy:PRCQ-1000 : An error occurred while establishing connection to database with user name “DBSNMP” (Doc ID 2288958.1)

PRCQ-1000 : An error occurred while establishing connection to database with user name "DBSNMP" and connect descriptor:
ORA-01017: invalid username/password; logon denied

Cluster Verification Utility (CVU) Check Fails With NTP Configuration (Doc ID 2162408.1)

Some Good References:

Slimming Down Oracle RAC 12c’s Resource Footprint

Oracle Grid Infrastructure: change the interval for the Cluster Verification Utility (ora.cvu)

Small Notes on Clusterware resource ora.cvu
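
For reference, the ora.cvu check interval discussed in the references above can be inspected and changed with srvctl (a sketch; verify the options with srvctl modify cvu -help on your version):

# show and change how often CVU runs (interval in minutes)
srvctl config cvu
srvctl modify cvu -checkintervalinmin 360
srvctl config cvu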

February 20, 2019

opatchauto is not that dumb

Filed under: 12c,PSU,RAC — mdinh @ 11:41 pm

I find it ironic that we want to automate yet fear automation.

Per the documentation ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1),
the following patch is required to implement ACFS:

p22810422_12102160419forACFS_Linux-x86-64.zip
Patch for Bug# 22810422
UEKR4 SUPPORT FOR ACFS(Patch 22810422)

I had inquired why opatchauto was not used to patch the entire system versus manually patching the GI home ONLY.

To patch GI home and all Oracle RAC database homes of the same version:
# opatchauto apply _UNZIPPED_PATCH_LOCATION_/22810422 -ocmrf _ocm response file_

OCM is not included in OPatch binaries since OPatch version 12.2.0.1.5; therefore, -ocmrf is not needed.
The reason for GI-only patching is simply that we only need it to enable ACFS support.

The rationale makes sense, and typically the ACFS patch is only applied to the GI home.

Being curious, shouldn’t opatchauto know which homes the patch applies to?
Wouldn’t it be easier to execute opatchauto versus performing the manual steps?

What do you think and which approach would you use?
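
For what it is worth, opatchauto can also be restricted to just the Grid home, which gives the GI-only result without the manual steps (a sketch, assuming the -oh option of the bundled opatchauto; check opatchauto apply -help):

# as root, with OPatch from the Grid home on PATH, patch only the GI home
opatchauto apply /sf_OracleSoftware/22810422 -oh /u01/app/12.1.0.1/grid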

Here are the results from applying patch 22810422.


[oracle@racnode-dc1-2 22810422]$ pwd
/sf_OracleSoftware/22810422

[oracle@racnode-dc1-2 22810422]$ sudo su -
Last login: Wed Feb 20 23:13:52 CET 2019 on pts/0

[root@racnode-dc1-2 ~]# . /media/patch/gi.env
ORACLE_SID = [root] ? The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"

[root@racnode-dc1-2 ~]# export PATH=$PATH:$GRID_HOME/OPatch

[root@racnode-dc1-2 ~]# opatchauto apply /sf_OracleSoftware/22810422 -analyze

OPatchauto session is initiated at Wed Feb 20 23:21:46 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-21-53PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-22-12PM.log
The id for this session is YBG6

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:racnode-dc1-2
RAC Home:/u01/app/oracle/12.1.0.1/db1


==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-2
CRS Home:/u01/app/12.1.0.1/grid


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-22-24PM_1.log



OPatchauto session completed at Wed Feb 20 23:24:53 2019
Time taken to complete the session 3 minutes, 7 seconds


[root@racnode-dc1-2 ~]# opatchauto apply /sf_OracleSoftware/22810422

OPatchauto session is initiated at Wed Feb 20 23:25:12 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-25-19PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-25-38PM.log
The id for this session is 3BYS

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1


Preparing to bring down database service on home /u01/app/oracle/12.1.0.1/db1
Successfully prepared home /u01/app/oracle/12.1.0.1/db1 to bring down database service


Bringing down CRS service on home /u01/app/12.1.0.1/grid
Prepatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-2_2019-02-20_11-28-00PM.log
CRS service brought down successfully on home /u01/app/12.1.0.1/grid


Start applying binary patch on home /u01/app/12.1.0.1/grid
Binary patch applied successfully on home /u01/app/12.1.0.1/grid


Starting CRS service on home /u01/app/12.1.0.1/grid
Postpatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-2_2019-02-20_11-36-59PM.log
CRS service started successfully on home /u01/app/12.1.0.1/grid


Preparing home /u01/app/oracle/12.1.0.1/db1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/12.1.0.1/db1 successfully after database service restarted

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-2
RAC Home:/u01/app/oracle/12.1.0.1/db1
Summary:

==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-2
CRS Home:/u01/app/12.1.0.1/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-30-39PM_1.log



OPatchauto session completed at Wed Feb 20 23:40:10 2019
Time taken to complete the session 14 minutes, 58 seconds
[root@racnode-dc1-2 ~]#

[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
ORACLE_SID = [hawk2] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
22810422;ACFS Interim patch for 22810422

OPatch succeeded.

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM2] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/12.1.0.1/db1".

OPatch succeeded.
[oracle@racnode-dc1-2 ~]$

====================================================================================================

### Checking resources while patching racnode-dc1-1
[oracle@racnode-dc1-2 ~]$ crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE                               STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  INTERMEDIATE racnode-dc1-2            FAILED OVER,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.asm
               ONLINE  ONLINE       racnode-dc1-2            Started,STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-2            169.254.178.60 172.1
                                                             6.9.11,STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  INTERMEDIATE racnode-dc1-2            FAILED OVER,STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-2 ~]$


====================================================================================================

[oracle@racnode-dc1-1 ~]$ sudo su -
Last login: Wed Feb 20 23:02:19 CET 2019 on pts/0

[root@racnode-dc1-1 ~]# . /media/patch/gi.env
ORACLE_SID = [root] ? The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"

[root@racnode-dc1-1 ~]# export PATH=$PATH:$GRID_HOME/OPatch

[root@racnode-dc1-1 ~]# opatchauto apply /sf_OracleSoftware/22810422 -analyze

OPatchauto session is initiated at Wed Feb 20 23:43:46 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-43-54PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-44-12PM.log
The id for this session is M9KF

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:racnode-dc1-1
RAC Home:/u01/app/oracle/12.1.0.1/db1


==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-1
CRS Home:/u01/app/12.1.0.1/grid


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-44-26PM_1.log



OPatchauto session completed at Wed Feb 20 23:46:31 2019
Time taken to complete the session 2 minutes, 45 seconds

[root@racnode-dc1-1 ~]# opatchauto apply /sf_OracleSoftware/22810422

OPatchauto session is initiated at Wed Feb 20 23:47:13 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-47-20PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-47-38PM.log
The id for this session is RHMR

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1


Preparing to bring down database service on home /u01/app/oracle/12.1.0.1/db1
Successfully prepared home /u01/app/oracle/12.1.0.1/db1 to bring down database service


Bringing down CRS service on home /u01/app/12.1.0.1/grid
Prepatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-1_2019-02-20_11-50-01PM.log
CRS service brought down successfully on home /u01/app/12.1.0.1/grid


Start applying binary patch on home /u01/app/12.1.0.1/grid
Binary patch applied successfully on home /u01/app/12.1.0.1/grid


Starting CRS service on home /u01/app/12.1.0.1/grid
Postpatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-1_2019-02-20_11-58-57PM.log
CRS service started successfully on home /u01/app/12.1.0.1/grid


Preparing home /u01/app/oracle/12.1.0.1/db1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/12.1.0.1/db1 successfully after database service restarted

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-1
RAC Home:/u01/app/oracle/12.1.0.1/db1
Summary:

==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-1
CRS Home:/u01/app/12.1.0.1/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-52-37PM_1.log



OPatchauto session completed at Thu Feb 21 00:01:15 2019
Time taken to complete the session 14 minutes, 3 seconds

[root@racnode-dc1-1 ~]# logout

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
ORACLE_SID = [hawk1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
22810422;ACFS Interim patch for 22810422

OPatch succeeded.

[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/12.1.0.1/db1".

OPatch succeeded.
[oracle@racnode-dc1-1 ~]$
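
Since the whole point of patch 22810422 is ACFS support on this kernel, a quick check that the drivers are now supported rounds things off (a minimal check, run as root from the Grid home):

# verify ACFS driver support after patching
/u01/app/12.1.0.1/grid/bin/acfsdriverstate supported
/u01/app/12.1.0.1/grid/bin/acfsdriverstate version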

December 21, 2018

Find Database Growth Using OEM Repository

Filed under: 12.2,oracle — mdinh @ 2:20 pm

Typically, what has been done is to schedule a job in each database to collect database growth.

This may be problematic, as the job can be forgotten when new databases are created, whereas forgetting to add a database to OEM monitoring is less likely.

EM12c, EM13c : Querying the Repository Database for Building Reports using Metric Information (Doc ID 2347253.1)

Those raw data are inserted in various tables like EM_METRIC_VALUES for example. 
EM aggregates those management data by hour and by day. 
Those raw data are kept 7 days; the one hour aggregated data are kept 31 days, while one day aggregated data are kept one year.
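
A quick way to see those rollup levels side by side is to count rows per level for one target (a minimal sketch, assuming the usual SYSMAN views mgmt$metric_details, mgmt$metric_hourly and mgmt$metric_daily; the emrep TNS alias, SYSMAN_PWD variable and target name hawk are placeholders):

sqlplus -s "sysman/${SYSMAN_PWD}@emrep" <<'EOF'
-- row counts per rollup level for one target's DATABASE_SIZE metric
select 'raw (mgmt$metric_details)' granularity, count(*) cnt
from   sysman.mgmt$metric_details
where  target_name = 'hawk' and metric_name = 'DATABASE_SIZE'
union all
select 'hourly (mgmt$metric_hourly)', count(*)
from   sysman.mgmt$metric_hourly
where  target_name = 'hawk' and metric_name = 'DATABASE_SIZE'
union all
select 'daily (mgmt$metric_daily)', count(*)
from   sysman.mgmt$metric_daily
where  target_name = 'hawk' and metric_name = 'DATABASE_SIZE';
EOF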

How to obtain the Historical Database Total Used and Allocated Size from OEM Repository

The above blog post provided a good starting point.

This post uses a query that collects database size (metric_name='DATABASE_SIZE') rather than tablespace size (metric_name='tbspAllocation'), to avoid having to sum all tablespaces to determine the database size.

OMS: 13.2.0 and EMREP DB: 12.2.0

Comparison of METRIC_COLUMN values between DATABASE_SIZE and tbspAllocation:

For tbspAllocation, the unit of the size was not clear and I did not research it further, but it does appear to be GB (one way to verify is sketched after the comparison below).

SQL> select distinct metric_name, METRIC_COLUMN from sysman.mgmt$metric_daily where metric_name='tbspAllocation' order by 1;

METRIC_NAME                                                      METRIC_COLUMN
---------------------------------------------------------------- ----------------------------------------------------------------
tbspAllocation                                                   spaceUsed
tbspAllocation                                                   spaceAllocated

SQL> select distinct METRIC_COLUMN from sysman.mgmt$metric_daily WHERE metric_name='DATABASE_SIZE';

METRIC_COLUMN
----------------------------------------------------------------
ALLOCATED_GB
USED_GB
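
To sanity-check the tbspAllocation unit mentioned above, sum spaceAllocated across all tablespaces for the latest daily rollup and compare it with ALLOCATED_GB for the same day (a sketch; emrep, SYSMAN_PWD and the target name hawk are placeholders):

sqlplus -s "sysman/${SYSMAN_PWD}@emrep" <<'EOF'
-- total spaceAllocated across tablespaces on the most recent rollup day
select rollup_timestamp, sum(maximum) sum_space_allocated
from   sysman.mgmt$metric_daily
where  target_name   = 'hawk'
and    metric_name   = 'tbspAllocation'
and    metric_column = 'spaceAllocated'
group  by rollup_timestamp
order  by rollup_timestamp desc
fetch first 1 rows only;
EOF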

TARGET_TYPE used (not all results presented):

SQL> select distinct target_type from sysman.mgmt$metric_daily order by 1;

TARGET_TYPE
----------------------------------------------------------------
oracle_database
oracle_pdb
rac_database

METRIC_NAME used (not all results presented):

SQL> select distinct metric_name from sysman.mgmt$metric_daily order by 1;

METRIC_NAME
----------------------------------------------------------------
DATABASE_SIZE
tbspAllocation

DEMO:

SQL> @dbsize.sql
SQL> -- Michael Dinh : Dec 20, 2018
SQL> set echo off
Enter value for 1: perf

TARGET_NAME                                        TARGET_TYPE     MONTH_DT  USED_GB ALLOCATED_GB PREVIOUS_MONTH DIFF_USED_GB
-------------------------------------------------- --------------- --------- ------- ------------ -------------- ------------
xxxxperf                                           rac_database    01-MAR-18  2698.6       3526.8
                                                   rac_database    01-APR-18  2709.9       3526.8         2698.6        11.31
                                                   rac_database    01-MAY-18  2728.8       3526.8         2709.9        18.86
                                                   rac_database    01-JUN-18  2735.4       3548.8         2728.8         6.61
                                                   rac_database    01-JUL-18  2746.4       3548.8         2735.4        11.01
                                                   rac_database    01-AUG-18  2758.7       3548.8         2746.4        12.27
                                                   rac_database    01-SEP-18  2772.5       3548.8         2758.7        13.82
                                                   rac_database    01-OCT-18  4888.8       6207.8         2772.5       2116.3
                                                   rac_database    01-NOV-18  4647.8       6207.8         4888.8         -241
                                                   rac_database    01-DEC-18  3383.2       6207.8         4647.8        -1265
yyyyperf                                           oracle_database 01-MAR-18   63.07       395.58
                                                   oracle_database 01-APR-18   63.19       395.58          63.07          .12
                                                   oracle_database 01-MAY-18   64.33       395.58          63.19         1.14
                                                   oracle_database 01-JUN-18   64.81       395.58          64.33          .48
                                                   oracle_database 01-JUL-18    65.1       395.58          64.81          .29
                                                   oracle_database 01-AUG-18   65.22       395.58           65.1          .12
                                                   oracle_database 01-SEP-18   65.79       395.58          65.22          .57
                                                   oracle_database 01-OCT-18   68.18       395.58          65.79         2.39
                                                   oracle_database 01-NOV-18   75.79       395.72          68.18         7.61
                                                   oracle_database 01-DEC-18    80.4       395.72          75.79         4.61

29 rows selected.

SQL> @dbsize
SQL> -- Michael Dinh : Dec 20, 2018
SQL> set echo off
Enter value for 1: *

TARGET_NAME                                        TARGET_TYPE     MONTH_DT  USED_GB ALLOCATED_GB PREVIOUS_MONTH DIFF_USED_GB
-------------------------------------------------- --------------- --------- ------- ------------ -------------- ------------
CDByyyy_xxxxxxxxxxxxxxxxxxxxxxxxxx_CDBROOT         oracle_pdb      01-MAR-18    7.96        94.73
                                                   oracle_pdb      01-APR-18    3.44        94.73           7.96        -4.52
                                                   oracle_pdb      01-MAY-18   12.26        95.07           3.44         8.82
                                                   oracle_pdb      01-JUN-18   76.18        95.12          12.26        63.92
                                                   oracle_pdb      01-JUL-18   70.87        95.15          76.18        -5.31
                                                   oracle_pdb      01-AUG-18   77.63        95.15          70.87         6.76
                                                   oracle_pdb      01-SEP-18     4.9        95.15          77.63       -72.73
                                                   oracle_pdb      01-OCT-18       4        95.15            4.9          -.9
                                                   oracle_pdb      01-NOV-18   41.34        95.15              4        37.34
                                                   oracle_pdb      01-DEC-18   33.52        95.15          41.34        -7.82
CDByyyy_xxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxPDB        oracle_pdb      01-MAR-18  1610.6         2571
                                                   oracle_pdb      01-APR-18  1644.9         2571         1610.6        34.27
                                                   oracle_pdb      01-MAY-18  1659.3       2571.3         1644.9        14.43
                                                   oracle_pdb      01-JUN-18  1694.7       2571.4         1659.3        35.32
                                                   oracle_pdb      01-JUL-18  1753.8       2571.4         1694.7        59.18
                                                   oracle_pdb      01-AUG-18  1827.9       2571.4         1753.8        74.06
                                                   oracle_pdb      01-SEP-18  1900.8       2571.4         1827.9        72.91
                                                   oracle_pdb      01-OCT-18  1977.2       2571.4         1900.8        76.43
                                                   oracle_pdb      01-NOV-18  2044.8       2571.4         1977.2         67.6
                                                   oracle_pdb      01-DEC-18  2144.5       2571.4         2044.8        99.64

Script:

set line 200 verify off trimspool off tab off pages 1000 numw 6 echo on
-- Michael Dinh : Dec 20, 2018
set echo off
/*
How to obtain the Historical Database Total Used and Allocated Size from OEM Repository
*/
col target_name for a50
col target_type for a15
undefine 1
break on target_name
WITH dbsz AS (
SELECT
target_name, target_type, month_dt,
SUM(DECODE(metric_column, 'USED_GB', maximum)) used_gb,
SUM(DECODE(metric_column, 'ALLOCATED_GB', maximum)) allocated_gb
FROM (
SELECT target_name, target_type, trunc(rollup_timestamp,'MONTH') month_dt, metric_column, MAX(maximum) maximum
FROM sysman.mgmt$metric_daily
WHERE target_type IN ('rac_database','oracle_database','oracle_pdb')
AND metric_name = 'DATABASE_SIZE'
AND metric_column IN ('ALLOCATED_GB','USED_GB')
AND REGEXP_LIKE(target_name,'&&1','i')
GROUP BY target_name, target_type, trunc(rollup_timestamp,'MONTH'), metric_column
)
GROUP BY target_name, target_type, month_dt
ORDER BY target_name, month_dt
)
SELECT target_name, target_type, month_dt, used_gb, allocated_gb,
LAG(used_gb,1) OVER (PARTITION BY target_name ORDER BY target_name) previous_month,
used_gb-LAG(used_gb,1) OVER (PARTITION BY target_name ORDER BY target_name) diff_used_gb
FROM dbsz
ORDER BY target_name, month_dt
;

UPDATED SQL SCRIPT:

SQL> @dbsize.sql xxxprod
SQL> -- Michael Dinh : Dec 20, 2018
SQL> set echo off

TARGET_NAME                                        TARGET_TYPE     MONTH_DT  USED_GB ALLOCATED_GB PREVIOUS_MONTH DIFF_USED_GB
-------------------------------------------------- --------------- --------- ------- ------------ -------------- ------------
xxxprod                                            rac_database    31-MAR-18   333.2       704.42
                                                   rac_database    30-APR-18  336.65       704.42          333.2         3.45
                                                   rac_database    31-MAY-18  350.48       704.42         336.65        13.83
                                                   rac_database    30-JUN-18  423.47        714.1         350.48        72.99
                                                   rac_database    31-JUL-18  397.42        714.1         423.47       -26.05
                                                   rac_database    31-AUG-18  415.61        714.1         397.42        18.19
                                                   rac_database    30-SEP-18   417.2       714.69         415.61         1.59
                                                   rac_database    31-OCT-18  421.04       714.69          417.2         3.84
                                                   rac_database    30-NOV-18  425.35       715.37         421.04         4.31
                                                   rac_database    20-DEC-18  428.44       723.11         425.35         3.09

10 rows selected.

SQL> !cat dbsize.sql
set line 200 verify off trimspool off tab off pages 1000 numw 6 echo on
-- Michael Dinh : Dec 20, 2018
set echo off
/*
How to obtain the Historical Database Total Used and Allocated Size from OEM Repository
*/
col target_name for a50
col target_type for a15
break on target_name
WITH dbsz AS (
SELECT
target_name, target_type, month_dt,
SUM(DECODE(metric_column, 'USED_GB', maximum)) used_gb,
SUM(DECODE(metric_column, 'ALLOCATED_GB', maximum)) allocated_gb
FROM (
-- This shows LATEST date of month
SELECT target_name, target_type, MAX(rollup_timestamp) month_dt, metric_column, MAX(maximum) maximum
-- This shows FIRST date of month
-- SELECT target_name, target_type, TRUNC(rollup_timestamp,'MONTH') month_dt, metric_column, MAX(maximum) maximum
FROM sysman.mgmt$metric_daily
WHERE target_type IN ('rac_database','oracle_database','oracle_pdb')
AND metric_name = 'DATABASE_SIZE'
AND metric_column IN ('ALLOCATED_GB','USED_GB')
AND REGEXP_LIKE(target_name,'&1','i')
GROUP BY target_name, target_type, TRUNC(rollup_timestamp,'MONTH'), metric_column
)
GROUP BY target_name, target_type, month_dt
-- ORDER BY target_name, month_dt
)
SELECT target_name, target_type, month_dt, used_gb, allocated_gb,
LAG(used_gb,1) OVER (PARTITION BY target_name ORDER BY target_name) previous_month,
used_gb-LAG(used_gb,1) OVER (PARTITION BY target_name ORDER BY target_name) diff_used_gb
FROM dbsz
ORDER BY target_name, month_dt
;
undefine 1
