Thinking Out Loud

June 15, 2019

Shell Scripting Using set -v

Filed under: shell scripting — mdinh @ 1:22 pm

set -v : Print shell input lines as they are read.
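
As a minimal sketch of the difference between the two flags (a throwaway example, not the script below): -v echoes source lines as the shell reads them, while -x prints each command with a + prefix as it is executed.

#!/bin/bash
set -v            # echo every input line as it is read (comments included)
set -x            # additionally trace each command with a + prefix as it runs
msg="hello"       # this line is echoed by -v; the assignment is traced by -x
echo "$msg"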

show_gds_status.sh

#!/bin/sh
##############################
# GDSCTL> configure -width 132
# GDSCTL> configure -save_config
##############################
# Source the GSM environment for GSM1.
. ~/gsm1.sh
# -e: exit on error, -v: echo input lines as read, -x: trace commands as executed.
set -evx
gdsctl -show << END
status
databases
services
exit
END
exit

Execute show_gds_status.sh. In the output below, the unprefixed command and heredoc lines are echoed by set -v as they are read, while the lines prefixed with + are traced by set -x as they execute.

[oracle@SLC02PNY GDS]$ ./show_gds_status.sh
gdsctl -show << END
status
databases
services
exit
END
+ gdsctl -show
gsm       : GSM1
TNS_ADMIN : /u01/app/oracle/product/18.0.0/gsmhome_1/network/admin
driver    : jdbc:oracle:thin:
resolve   : QUAL_HOSTNAME
timeout   : 150
log_level : OFF
version   : 18.0.0.0.0
width     : 132
verbose   : ON
spool     : OFF
showtime  : OFF
GDSCTL: Version 18.0.0.0.0 - Production on Sat Jun 15 13:01:21 UTC 2019

Copyright (c) 2011, 2018, Oracle.  All rights reserved.

Welcome to GDSCTL, type "help" for information.

Current GSM is set to GSM1
GDSCTL>
Alias                     GSM1
Version                   18.0.0.0.0
Start Date                15-JUN-2019 12:22:28
Trace Level               off
Listener Log File         /u01/app/oracle/diag/gsm/SLC02PNY/gsm1/alert/log.xml
Listener Trace File       /u01/app/oracle/diag/gsm/SLC02PNY/gsm1/trace/ora_9504_140547635764096.trc
Endpoint summary          (ADDRESS=(HOST=SLC02PNY.localdomain)(PORT=1571)(PROTOCOL=tcp))
GSMOCI Version            2.2.1
Mastership                Y
Connected to GDS catalog  Y
Process Id                9507
Number of reconnections   0
Pending tasks.     Total  0
Tasks in  process. Total  0
Regional Mastership       TRUE
Total messages published  152
Time Zone                 +00:00
Orphaned Buddy Regions:
     None
GDS region                region1
Network metrics:
   Region: region2 Network factor:0

GDSCTL>
Database: "chi" Registered: Y State: Ok ONS: N. Role: PH_STNDBY Instances: 1 Region: region2
   Service: "prim" Globally started: Y Started: N
            Scan: N Enabled: Y Preferred: Y
   Service: "stby" Globally started: Y Started: Y
            Scan: N Enabled: Y Preferred: Y
   Registered instances:
     sales%11
Database: "sfo" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1 Region: region1
   Service: "prim" Globally started: Y Started: Y
            Scan: N Enabled: Y Preferred: Y
   Service: "stby" Globally started: Y Started: N
            Scan: N Enabled: Y Preferred: Y
   Registered instances:
     sales%1

GDSCTL>
Service "prim.sales.oradbcloud" has 1 instance(s). Affinity: ANYWHERE
   Instance "sales%1", name: "sales", db: "sfo", region: "region1", status: ready.
Service "stby.sales.oradbcloud" has 1 instance(s). Affinity: ANYWHERE
   Instance "sales%11", name: "sales", db: "chi", region: "region2", status: ready.

GDSCTL>
exit
+ exit
[oracle@SLC02PNY GDS]$

help set

[oracle@SLC02PNY GDS]$ help set
set: set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
    Set or unset values of shell options and positional parameters.

    Change the value of shell attributes and positional parameters, or
    display the names and values of shell variables.

    Options:
      -a  Mark variables which are modified or created for export.
      -b  Notify of job termination immediately.
      -e  Exit immediately if a command exits with a non-zero status.
      -f  Disable file name generation (globbing).
      -h  Remember the location of commands as they are looked up.
      -k  All assignment arguments are placed in the environment for a
          command, not just those that precede the command name.
      -m  Job control is enabled.
      -n  Read commands but do not execute them.
      -o option-name
          Set the variable corresponding to option-name:
              allexport    same as -a
              braceexpand  same as -B
              emacs        use an emacs-style line editing interface
              errexit      same as -e
              errtrace     same as -E
              functrace    same as -T
              hashall      same as -h
              histexpand   same as -H
              history      enable command history
              ignoreeof    the shell will not exit upon reading EOF
              interactive-comments
                           allow comments to appear in interactive commands
              keyword      same as -k
              monitor      same as -m
              noclobber    same as -C
              noexec       same as -n
              noglob       same as -f
              nolog        currently accepted but ignored
              notify       same as -b
              nounset      same as -u
              onecmd       same as -t
              physical     same as -P
              pipefail     the return value of a pipeline is the status of
                           the last command to exit with a non-zero status,
                           or zero if no command exited with a non-zero status
              posix        change the behavior of bash where the default
                           operation differs from the Posix standard to
                           match the standard
              privileged   same as -p
              verbose      same as -v
              vi           use a vi-style line editing interface
              xtrace       same as -x
      -p  Turned on whenever the real and effective user ids do not match.
          Disables processing of the $ENV file and importing of shell
          functions.  Turning this option off causes the effective uid and
          gid to be set to the real uid and gid.
      -t  Exit after reading and executing one command.
      -u  Treat unset variables as an error when substituting.
================================================================================
      -v  Print shell input lines as they are read.
================================================================================
      -x  Print commands and their arguments as they are executed.
      -B  the shell will perform brace expansion
      -C  If set, disallow existing regular files to be overwritten
          by redirection of output.
      -E  If set, the ERR trap is inherited by shell functions.
      -H  Enable ! style history substitution.  This flag is on
          by default when the shell is interactive.
      -P  If set, do not follow symbolic links when executing commands
          such as cd which change the current directory.
      -T  If set, the DEBUG trap is inherited by shell functions.
      --  Assign any remaining arguments to the positional parameters.
          If there are no remaining arguments, the positional parameters
          are unset.
      -   Assign any remaining arguments to the positional parameters.
          The -x and -v options are turned off.

    Using + rather than - causes these flags to be turned off.  The
    flags can also be used upon invocation of the shell.  The current
    set of flags may be found in $-.  The remaining n ARGs are positional
    parameters and are assigned, in order, to $1, $2, .. $n.  If no
    ARGs are given, all shell variables are printed.

    Exit Status:
    Returns success unless an invalid option is given.
[oracle@SLC02PNY GDS]$

June 7, 2019

RAC Installation Logs

Filed under: 12c,RAC — mdinh @ 5:24 pm

Note to self on log locations for a 2-node RAC installation and database creation.

Oracle Universal Installer logs for GI/DB:

[oracle@racnode-dc1-1 logs]$ pwd; ls -lhrt
/u01/app/oraInventory/logs
total 2.3M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-13-PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-35-PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.8K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-13-PM.log
-rw-r----- 1 oracle oinstall  153 Jun  7 17:06 oraInstall2019-06-07_04-57-13-PM.out
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
-rw-r----- 1 oracle oinstall   47 Jun  7 17:09 time2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:09 oraInstall2019-06-07_05-09-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:13 oraInstall2019-06-07_05-09-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall   29 Jun  7 17:14 oraInstall2019-06-07_05-09-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:14 AttachHome2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  507 Jun  7 17:14 silentInstall2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  14K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 9.5K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  496 Jun  7 17:15 oraInstall2019-06-07_05-09-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 17:15 installActions2019-06-07_05-09-01PM.log
[oracle@racnode-dc1-1 logs]$

silentInstall*.log

[oracle@racnode-dc1-1 logs]$ grep successful silent*.log

silentInstall2019-06-07_04-39-01PM.log:The installation of Oracle Grid Infrastructure 12c was successful.

silentInstall2019-06-07_05-09-01PM.log:The installation of Oracle Database 12c was successful.

[oracle@racnode-dc1-1 logs]$

installActions*.log

[oracle@racnode-dc1-1 logs]$ grep "Using paramFile" install*.log

installActions2019-06-07_04-39-01PM.log:INFO: Using paramFile: /u01/stage/12.1.0.2/grid/install/oraparam.ini

installActions2019-06-07_05-09-01PM.log:Using paramFile: /u01/stage/12.1.0.2/database/install/oraparam.ini

[oracle@racnode-dc1-1 logs]$
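
The two checks above can be combined into a quick sanity-check sketch (the log directory is assumed to be the default /u01/app/oraInventory/logs shown earlier):

#!/bin/bash
# Post-install sanity check of the OUI logs for a silent GI/DB installation.
LOG_DIR=/u01/app/oraInventory/logs
grep -H "successful" "$LOG_DIR"/silentInstall*.log        # GI/DB silent install results
grep -H "Using paramFile" "$LOG_DIR"/installActions*.log  # which oraparam.ini was used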

Run root script after installation:
$GRID_HOME/root.sh

[oracle@racnode-dc1-1 install]$ pwd; ls -lhrt root*.log
/u01/app/12.1.0.2/grid/install
-rw------- 1 oracle oinstall 7.4K Jun  7 16:51 root_racnode-dc1-1_2019-06-07_16-44-37.log
[oracle@racnode-dc1-1 install]$

Run configToolAllCommands:
$GRID_HOME/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/u01/stage/rsp/configtoolallcommands.rsp

[oracle@racnode-dc1-1 oui]$ pwd; ls -lhrt
/u01/app/12.1.0.2/grid/cfgtoollogs/oui
total 1.2M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r--r-- 1 oracle oinstall    0 Jun  7 16:57 configActions2019-06-07_04-57-10-PM.err
-rw-r--r-- 1 oracle oinstall  13K Jun  7 17:06 configActions2019-06-07_04-57-10-PM.log
-rw------- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw------- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
[oracle@racnode-dc1-1 oui]$

dbca

[oracle@racnode-dc1-1 dbca]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca
total 116K
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:02 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM.lck
drwxrwxr-x 3 oracle oinstall 4.0K Jun  7 17:02 _mgmtdb
-rwxrwxr-x 1 oracle oinstall 105K Jun  7 17:03 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM
drwxr-x--- 2 oracle oinstall 4.0K Jun  7 17:23 hawk
[oracle@racnode-dc1-1 dbca]$

dbca _mgmtdb

[oracle@racnode-dc1-1 _mgmtdb]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb
total 19M
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 16:58 trace.log.lck
-rwxrwxr-x 1 oracle oinstall  18M Jun  7 16:59 tempControl.ctl
-rwxrwxr-x 1 oracle oinstall  349 Jun  7 16:59 CloneRmanRestore.log
-rwxrwxr-x 1 oracle oinstall  596 Jun  7 16:59 cloneDBCreation.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:00 rmanUtil
-rwxrwxr-x 1 oracle oinstall 2.1K Jun  7 17:00 plugDatabase.log
-rwxrwxr-x 1 oracle oinstall  428 Jun  7 17:01 dbmssml_catcon_12271.lst
-rwxrwxr-x 1 oracle oinstall 3.5K Jun  7 17:01 dbmssml0.log
-rwxrwxr-x 1 oracle oinstall  396 Jun  7 17:01 postScripts.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:01 lockAccount.log
-rwxrwxr-x 1 oracle oinstall  442 Jun  7 17:01 catbundleapply_catcon_12348.lst
-rwxrwxr-x 1 oracle oinstall 3.9K Jun  7 17:01 catbundleapply0.log
-rwxrwxr-x 1 oracle oinstall  424 Jun  7 17:01 utlrp_catcon_12416.lst
-rwxrwxr-x 1 oracle oinstall 9.2K Jun  7 17:02 utlrp0.log
-rwxrwxr-x 1 oracle oinstall  964 Jun  7 17:02 postDBCreation.log
-rwxrwxr-x 1 oracle oinstall  737 Jun  7 17:02 OraGI12Home1__mgmtdb_creation_checkpoint.xml
-rwxrwxr-x 1 oracle oinstall  877 Jun  7 17:02 _mgmtdb.log
-rwxrwxr-x 1 oracle oinstall 1.1M Jun  7 17:02 trace.log
-rwxrwxr-x 1 oracle oinstall 1.3K Jun  7 17:02 DetectOption.log
drwxrwxr-x 2 oracle oinstall 4.0K Jun  7 17:03 vbox_rac_dc1

[oracle@racnode-dc1-1 _mgmtdb]$ tail _mgmtdb.log
Completing Database Creation
DBCA_PROGRESS : 68%
DBCA_PROGRESS : 79%
DBCA_PROGRESS : 89%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb.
Database Information:
Global Database Name:_mgmtdb
System Identifier(SID):-MGMTDB
[oracle@racnode-dc1-1 _mgmtdb]$

dbca hawk

[oracle@racnode-dc1-1 hawk]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/hawk
total 34M
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 trace.log.lck
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 rmanUtil
-rw-r----- 1 oracle oinstall  18M Jun  7 17:17 tempControl.ctl
-rw-r----- 1 oracle oinstall  384 Jun  7 17:17 CloneRmanRestore.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 17:20 cloneDBCreation.log
-rw-r----- 1 oracle oinstall    8 Jun  7 17:20 postScripts.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:21 CreateClustDBViews.log
-rw-r----- 1 oracle oinstall    6 Jun  7 17:21 lockAccount.log
-rw-r----- 1 oracle oinstall 1.3K Jun  7 17:22 postDBCreation.log
-rw-r----- 1 oracle oinstall  511 Jun  7 17:23 OraDB12Home1_hawk_creation_checkpoint.xml
-rw-r----- 1 oracle oinstall  24K Jun  7 17:23 hawk.log
-rw-r----- 1 oracle oinstall  16M Jun  7 17:23 trace.log

[oracle@racnode-dc1-1 hawk]$ tail hawk.log
DBCA_PROGRESS : 73%
DBCA_PROGRESS : 76%
DBCA_PROGRESS : 85%
DBCA_PROGRESS : 94%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/hawk.
Database Information:
Global Database Name:hawk
System Identifier(SID) Prefix:hawk
[oracle@racnode-dc1-1 hawk]$
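
A quick way to confirm completion for every database created by DBCA on this node (a sketch assuming the default cfgtoollogs location shown above):

# Report DBCA completion status from each database creation log
grep -H "Database creation complete" /u01/app/oracle/cfgtoollogs/dbca/*/*.log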

June 6, 2019

Local Install rlwrap for OEL 7.6

Filed under: linux,OEL7 — mdinh @ 12:13 am

Installing rlwrap on OEL 7.6 requires a local install of python34.

yum install rlwrap

[root@SLC02PNY ~]# yum install rlwrap
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package rlwrap.x86_64 0:0.43-2.el7 will be installed
--> Processing Dependency: perl(Data::Dumper) for package: rlwrap-0.43-2.el7.x86_64
--> Processing Dependency: /usr/bin/python3.4 for package: rlwrap-0.43-2.el7.x86_64

****************************************************************************************************
Package python34 is obsoleted by python36, but obsoleting package does not provide for requirements
****************************************************************************************************

--> Running transaction check
---> Package perl-Data-Dumper.x86_64 0:2.145-3.el7 will be installed
---> Package rlwrap.x86_64 0:0.43-2.el7 will be installed
--> Processing Dependency: /usr/bin/python3.4 for package: rlwrap-0.43-2.el7.x86_64
Package python34 is obsoleted by python36, but obsoleting package does not provide for requirements
--> Processing Dependency: /usr/bin/python3.4 for package: rlwrap-0.43-2.el7.x86_64
Package python34 is obsoleted by python36, but obsoleting package does not provide for requirements
--> Finished Dependency Resolution

yum install python34

[root@SLC02PNY ~]# yum install python34
Loaded plugins: ulninfo

****************************************************************************************************
Package python34 is obsoleted by python36, trying to install python36-3.6.8-1.el7.x86_64 instead
****************************************************************************************************

Resolving Dependencies
--> Running transaction check
---> Package python36.x86_64 0:3.6.8-1.el7 will be installed
--> Processing Dependency: python36-libs(x86-64) = 3.6.8-1.el7 for package: python36-3.6.8-1.el7.x86_64
--> Processing Dependency: libpython3.6m.so.1.0()(64bit) for package: python36-3.6.8-1.el7.x86_64
--> Running transaction check
---> Package python36-libs.x86_64 0:3.6.8-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                         Arch                     Version                        Repository                            Size
====================================================================================================================================
Installing:
 python36                        x86_64                   3.6.8-1.el7                    ol7_developer_EPEL                    66 k
Installing for dependencies:
 python36-libs                   x86_64                   3.6.8-1.el7                    ol7_developer_EPEL                   8.6 M

Transaction Summary
====================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 8.6 M
Installed size: 36 M
Is this ok [y/d/N]: n

cat /etc/system-release

[root@ADC6160274 ~]# cat /etc/system-release
Oracle Linux Server release 7.6
[root@ADC6160274 ~]#

yumdownloader python34-3.4.5-4.el7.x86_64

[root@ADC6160274 ~]# yumdownloader python34-3.4.5-4.el7.x86_64
python34-3.4.5-4.el7.x86_64.rpm                                                                              |  50 kB  00:00:00

yumdownloader python34-libs-3.4.5-4.el7.x86_64
[root@ADC6160274 ~]# yumdownloader python34-libs-3.4.5-4.el7.x86_64
python34-libs-3.4.5-4.el7.x86_64.rpm                                                                         | 8.2 MB  00:00:01

yum localinstall python34-libs-3.4.5-4.el7.x86_64.rpm python34-3.4.5-4.el7.x86_64.rpm

[root@ADC6160274 ~]# yum localinstall python34-libs-3.4.5-4.el7.x86_64.rpm python34-3.4.5-4.el7.x86_64.rpm
Loaded plugins: ulninfo
Examining python34-libs-3.4.5-4.el7.x86_64.rpm: python34-libs-3.4.5-4.el7.x86_64
Marking python34-libs-3.4.5-4.el7.x86_64.rpm to be installed
Examining python34-3.4.5-4.el7.x86_64.rpm: python34-3.4.5-4.el7.x86_64
Marking python34-3.4.5-4.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package python34.x86_64 0:3.4.5-4.el7 will be installed
---> Package python34-libs.x86_64 0:3.4.5-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                     Arch                 Version                     Repository                                       Size
====================================================================================================================================
Installing:
 python34                    x86_64               3.4.5-4.el7                 /python34-3.4.5-4.el7.x86_64                     36 k
 python34-libs               x86_64               3.4.5-4.el7                 /python34-libs-3.4.5-4.el7.x86_64                29 M

Transaction Summary
====================================================================================================================================
Install  2 Packages

Total size: 29 M
Installed size: 29 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python34-libs-3.4.5-4.el7.x86_64                                                                                 1/2
  Installing : python34-3.4.5-4.el7.x86_64                                                                                      2/2
  Verifying  : python34-3.4.5-4.el7.x86_64                                                                                      1/2
  Verifying  : python34-libs-3.4.5-4.el7.x86_64                                                                                 2/2

Installed:
  python34.x86_64 0:3.4.5-4.el7                                  python34-libs.x86_64 0:3.4.5-4.el7

Complete!

yum install rlwrap

[root@ADC6160274 ~]# yum install rlwrap
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package rlwrap.x86_64 0:0.43-2.el7 will be installed
--> Processing Dependency: perl(Data::Dumper) for package: rlwrap-0.43-2.el7.x86_64
--> Running transaction check
---> Package perl-Data-Dumper.x86_64 0:2.145-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                           Arch                    Version                        Repository                           Size
====================================================================================================================================
Installing:
 rlwrap                            x86_64                  0.43-2.el7                     ol7_developer_EPEL                  118 k
Installing for dependencies:
 perl-Data-Dumper                  x86_64                  2.145-3.el7                    ol7_latest                           47 k

Transaction Summary
====================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 165 k
Installed size: 378 k
Is this ok [y/d/N]: y
Downloading packages:
(1/2): perl-Data-Dumper-2.145-3.el7.x86_64.rpm                                                               |  47 kB  00:00:00
(2/2): rlwrap-0.43-2.el7.x86_64.rpm                                                                          | 118 kB  00:00:00
------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                               311 kB/s | 165 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : perl-Data-Dumper-2.145-3.el7.x86_64                                                                              1/2
  Installing : rlwrap-0.43-2.el7.x86_64                                                                                         2/2
  Verifying  : perl-Data-Dumper-2.145-3.el7.x86_64                                                                              1/2
  Verifying  : rlwrap-0.43-2.el7.x86_64                                                                                         2/2

Installed:
  rlwrap.x86_64 0:0.43-2.el7

Dependency Installed:
  perl-Data-Dumper.x86_64 0:2.145-3.el7

Complete!
[root@ADC6160274 ~]#
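
Putting the workaround together, a consolidated sketch (package versions are assumed to match the repository used above): download the python34 RPMs that rlwrap still depends on, install them locally, then install rlwrap.

# python34 is obsoleted by python36, but rlwrap requires /usr/bin/python3.4,
# so stage the python34 RPMs and install them locally first.
yumdownloader python34-3.4.5-4.el7.x86_64 python34-libs-3.4.5-4.el7.x86_64
yum -y localinstall python34-libs-3.4.5-4.el7.x86_64.rpm python34-3.4.5-4.el7.x86_64.rpm
yum -y install rlwrap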

May 27, 2019

Choiceology with Katy Milkman

Filed under: Uncategorized — mdinh @ 8:15 pm

Good listening that I thought I would share with you.

RSS Feed:
Choiceology with Katy Milkman – Exposing the psychological traps that lead to expensive mistakes

You can listen and subscribe to Choiceology with Katy Milkman for free in any podcast player—such as Apple Podcasts, Google Podcasts or Spotify.
How Do I Listen to Choiceology?

May 19, 2019

Shocking opatchauto resume works after auto-logout

Filed under: 12c,opatchauto,RAC — mdinh @ 5:36 pm

WARNING: Please don’t try this at home or in a production environment.

That being said, this patching was for DR production.

Oracle Interim Patch Installer version 12.2.0.1.16

Patching a 2-node RAC cluster; node1 completed successfully.

The rationale for using -norestart: there was an issue at one time where datapatch was applied on node1.

Active Data Guard is not implemented, and the database Start options are set to mount:

# crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      2        ONLINE  INTERMEDIATE node2              Mounted (Closed),STABLE
ora.dbproddr.dbdr.svc
      2        ONLINE  OFFLINE                                          STABLE
--------------------------------------------------------------------------------

$ srvctl status database -d dbproddr -v
Instance dbproddr1 is running on node node1 with online services dbdr. Instance status: Open,Readonly.
Instance dbproddr2 is running on node node2. Instance status: Mounted (Closed).
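
To confirm the configured start option behind the mounted state on node2, a minimal sketch (output formatting varies by version):

# Show the configured start option for the database (the post notes Start options: mount)
srvctl config database -d dbproddr | grep -i "start options"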

Run opatchauto, then Ctrl-C because the session is stuck.

node2 ~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019

node2 ~ # $GRID_HOME/OPatch/opatchauto apply $PATCH_TOP_DIR/28833531 -norestart

OPatchauto session is initiated at Thu May 16 20:20:24 2019

System initialization log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-05-16_08-20-26PM.log.

Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_08-20-47PM.log
The id for this session is K43Y

Executing OPatch prereq operations to verify patch applicability on home /u02/app/12.1.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.1.0/db
Patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db

Patch applicability verified successfully on home /u02/app/12.1.0/grid


Verifying SQL patch applicability on home /u01/app/oracle/product/12.1.0/db
"/bin/sh -c 'cd /u01/app/oracle/product/12.1.0/db; ORACLE_HOME=/u01/app/oracle/product/12.1.0/db ORACLE_SID=dbproddr2 /u01/app/oracle/product/12.1.0/db/OPatch/datapatch -prereq -verbose'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be analyzed by manually retrying the same command.

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db


Preparing to bring down database service on home /u01/app/oracle/product/12.1.0/db
Successfully prepared home /u01/app/oracle/product/12.1.0/db to bring down database service


Bringing down CRS service on home /u02/app/12.1.0/grid
Prepatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
CRS service brought down successfully on home /u02/app/12.1.0/grid


Performing prepatch operation on home /u01/app/oracle/product/12.1.0/db
Perpatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db


Start applying binary patch on home /u01/app/oracle/product/12.1.0/db
Binary patch applied successfully on home /u01/app/oracle/product/12.1.0/db


Performing postpatch operation on home /u01/app/oracle/product/12.1.0/db
Postpatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db


Start applying binary patch on home /u02/app/12.1.0/grid

Binary patch applied successfully on home /u02/app/12.1.0/grid


Starting CRS service on home /u02/app/12.1.0/grid





*** Ctrl-C as shown below ***
^C
OPatchauto session completed at Thu May 16 21:41:58 2019
*** Time taken to complete the session 81 minutes, 34 seconds ***

opatchauto failed with error code 130

This is not good, as the session disconnected while I was troubleshooting in another session.

node2 ~ # timed out waiting for input: auto-logout

Even though the opatchauto session was terminated, the cluster upgrade state is [NORMAL] rather than [ROLLING PATCH].

node2 ~ # crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [323461694].

node2 ~ # crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
node2 ~ # crsctl stat res -t -w 'TYPE = ora.database.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      1        ONLINE  ONLINE       node1              Open,Readonly,STABLE
      2        ONLINE  ONLINE       node2              Open,Readonly,STABLE
--------------------------------------------------------------------------------

At this point, I was not sure what to do since everything looked good and online.

A colleague helping me with troubleshooting stated that the patch completed successfully, and the main question was whether we needed to try “opatchauto resume”.

However, I was not comfortable with the outcome, so I tried opatchauto resume, and it worked like magic.

Reconnect and opatchauto resume

mdinh@node2 ~ $ sudo su - 
~ # . /home/oracle/working/dinh/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM4
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u02/app/12.1.0/grid
ORACLE_HOME=/u02/app/12.1.0/grid
Oracle Instance alive for sid "+ASM4"
~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019/
~ # $GRID_HOME/OPatch/opatchauto resume

OPatchauto session is initiated at Thu May 16 22:03:09 2019
Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_10-03-10PM.log
Resuming existing session with id K43Y

Starting CRS service on home /u02/app/12.1.0/grid
Postpatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
CRS service started successfully on home /u02/app/12.1.0/grid


Preparing home /u01/app/oracle/product/12.1.0/db after database service restarted

OPatchauto is running in norestart mode. PDB instances will not be checked for database on the current node.
No step execution required.........
 

Trying to apply SQL patch on home /u01/app/oracle/product/12.1.0/db
SQL patch applied successfully on home /u01/app/oracle/product/12.1.0/db

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:node2
RAC Home:/u01/app/oracle/product/12.1.0/db
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/patches/Jan2019/28833531/28729220
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log


Host:node2
CRS Home:/u02/app/12.1.0/grid
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28729220
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log


Patching session reported following warning(s): 
_________________________________________________

[WARNING] The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host'node2' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.

[WARNING] The database instances will not be brought up under the 'norestart' option. The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host'node2' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.


OPatchauto session completed at Thu May 16 22:10:01 2019
Time taken to complete the session 6 minutes, 52 seconds
~ # 
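
After the resume completes, a quick verification pass (a sketch reusing commands shown elsewhere in this post) confirms the cluster state and the patches applied to the Grid home:

# Verify cluster state and applied patches after opatchauto resume
. /home/oracle/working/dinh/gi.env                                # GI environment, as sourced above
crsctl query crs activeversion -f                                 # expect cluster upgrade state [NORMAL]
crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE))'
$GRID_HOME/OPatch/opatch lspatches                                # patches applied to the Grid home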

Logs:

oracle@node2:/u02/app/12.1.0/grid/cfgtoollogs/crsconfig
> ls -alrt
total 508
drwxr-x--- 2 oracle oinstall   4096 Nov 23 02:15 oracle
-rwxrwxr-x 1 oracle oinstall 167579 Nov 23 02:15 rootcrs_node2_2018-11-23_02-07-58AM.log
drwxrwxr-x 9 oracle oinstall   4096 Apr 10 12:05 ..

opatchauto apply - Prepatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  33020 May 16 20:22 crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================

Mysterious log file - it is unknown where this log came from, since it was not part of my terminal output.
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  86983 May 16 21:42 crspatch_node2_2019-05-16_08-27-35PM.log
====================================================================================================

-rwxrwxr-x 1 oracle oinstall  56540 May 16 22:06 srvmcfg1.log
-rwxrwxr-x 1 oracle oinstall  26836 May 16 22:06 srvmcfg2.log
-rwxrwxr-x 1 oracle oinstall  21059 May 16 22:06 srvmcfg3.log
-rwxrwxr-x 1 oracle oinstall  23032 May 16 22:08 srvmcfg4.log

opatchauto resume - Postpatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  64381 May 16 22:09 crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================

Prepatch operation log file.

> tail -20 crspatch_node2_2019-05-16_08-21-16PM.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS '
2019-05-16 20:22:04: Removing file /tmp/fileTChFoS
2019-05-16 20:22:04: Successfully removed file: /tmp/fileTChFoS
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed

2019-05-16 20:22:04: checkpoint ROOTCRS_POSTPATCH_OOP_REQSTEPS does not exist
2019-05-16 20:22:04: Done - Performing pre-pathching steps required for GI stack
2019-05-16 20:22:04: Resetting cluutil_trc_suff_pp to 0
2019-05-16 20:22:04: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS"
2019-05-16 20:22:04: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil0.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS '
2019-05-16 20:22:04: Removing file /tmp/fileDoYyQA
2019-05-16 20:22:04: Successfully removed file: /tmp/fileDoYyQA
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed

*** 2019-05-16 20:22:04: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:SUCCESS ***

Mysterious log file – crspatch_node2_2019-05-16_08-27-35PM.log

2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL
2019-05-16 21:42:00: ###### Begin DIE Stack Trace ######
2019-05-16 21:42:00:     Package         File                 Line Calling   
2019-05-16 21:42:00:     --------------- -------------------- ---- ----------
2019-05-16 21:42:00:  1: main            rootcrs.pl            267 crsutils::dietrap
2019-05-16 21:42:00:  2: crsutils        crsutils.pm          1631 main::__ANON__
2019-05-16 21:42:00:  3: crsutils        crsutils.pm          1586 crsutils::system_cmd_capture_noprint
2019-05-16 21:42:00:  4: crsutils        crsutils.pm          9098 crsutils::system_cmd_capture
2019-05-16 21:42:00:  5: crspatch        crspatch.pm           988 crsutils::startFullStack
2019-05-16 21:42:00:  6: crspatch        crspatch.pm          1121 crspatch::performPostPatch
2019-05-16 21:42:00:  7: crspatch        crspatch.pm           212 crspatch::crsPostPatch
2019-05-16 21:42:00:  8: main            rootcrs.pl            276 crspatch::new
2019-05-16 21:42:00: ####### End DIE Stack Trace #######

2019-05-16 21:42:00: ROOTCRS_POSTPATCH checkpoint has failed
2019-05-16 21:42:00:      ckpt: -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil4.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH '
2019-05-16 21:42:00: Removing file /tmp/filewniUim
2019-05-16 21:42:00: Successfully removed file: /tmp/filewniUim
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil5.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status '
2019-05-16 21:42:00: Removing file /tmp/fileK1Tyw6
2019-05-16 21:42:00: Successfully removed file: /tmp/fileK1Tyw6
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: The 'ROOTCRS_POSTPATCH' status is FAILED
2019-05-16 21:42:00: ROOTCRS_POSTPATCH state is FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil6.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL '
2019-05-16 21:42:00: Removing file /tmp/filej20epR
2019-05-16 21:42:00: Successfully removed file: /tmp/filej20epR
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL '
2019-05-16 21:42:01: Removing file /tmp/filely834C
2019-05-16 21:42:01: Successfully removed file: /tmp/filely834C
2019-05-16 21:42:01: pipe exit code: 0
2019-05-16 21:42:01: /bin/su successfully executed

*** 2019-05-16 21:42:01: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL ***

Postpatch operation log file.

> tail -20 crspatch_node2_2019-05-16_10-03-17PM.log
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START '
2019-05-16 22:09:59: Removing file /tmp/file0IogVl
2019-05-16 22:09:59: Successfully removed file: /tmp/file0IogVl
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed

2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:START
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil8.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS '
2019-05-16 22:09:59: Removing file /tmp/fileXDCkuM
2019-05-16 22:09:59: Successfully removed file: /tmp/fileXDCkuM
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed

*** 2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:SUCCESS ***

Happy patching, and hopefully the upcoming patching of primary will be just as seamless.

May 12, 2019

EM13.3 Directory Structures

Filed under: Grid Control — mdinh @ 2:41 pm

Currently, I am preparing a POC to migrate OMS 13.3 from OEL6 to OEL7 and wanted a high-level overview of the installation.

[oracle@MGOEM ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

export PATH=$PATH:$HOME/bin
export DISPLAY=127.0.0.1:10.0

export ORACLE_BASE=/u01/app/oracle
export AGENT_BASE=$ORACLE_BASE/agent

export AGENT_HOME=$AGENT_BASE/agent_13.3.0.0.0
export EM_INSTANCE_BASE=$ORACLE_BASE/gc_inst
export OMS_INSTANCE_BASE=$EM_INSTANCE_BASE
export OHS=$EM_INSTANCE_BASE/user_projects/domains/GCDomain/servers/ohs1

### Starting from 13cR1, Oracle home (or OMS home) refers to the Middleware home.
export ORACLE_HOME=$ORACLE_BASE/middleware
export MW_HOME=$ORACLE_HOME
export OMS_HOME=$ORACLE_HOME
[oracle@MGOEM ~]$

Overview of the Directories Created for OMS Installation.
The OMS instance base directory (typically, gc_inst) is maintained outside the middleware home.

[oracle@MGOEM ~]$ cd $MW_HOME; pwd; ls
/u01/app/oracle/middleware
allroot.sh   common               embip          ldap           OMSPatcher     plsql                root.sh     ucp
asr          create_header.log    gccompliance   lib            OPatch         plugins              slax        user_projects
bi           crs                  has            logs           oracle_common  plugins_common       soa         webgate
bin          css                  install        network        oracore        postjava_header.log  sqlplus     wlserver
bmp          disc                 instantclient  nls            oraInst.loc    precomp              srvm        xdk
cfgtoollogs  doc                  inventory      ocm            ord            rdbms                stage
clone        domain-registry.xml  jdbc           ohs            oui            relnotes             sysman
coherence    em                   jlib           omscarespfile  perl           response             thirdparty
[oracle@MGOEM middleware]$

Overview of the Directories Created for Management Agent Installation (Central Agent).
Agent base directory for the central agent (Management Agent installed with the OMS).

[oracle@MGOEM middleware]$ cd $AGENT_BASE; pwd; ls
/u01/app/oracle/agent
agent_13.3.0.0.0  agent_inst  agentInstall.rsp
[oracle@MGOEM agent]$

Agent home that is within the agent base directory.

[oracle@MGOEM agent]$ cd $AGENT_HOME; pwd; ls
/u01/app/oracle/agent/agent_13.3.0.0.0
agent.rsp    EMStage        jdbc  jythonLib  OPatch         perl     replacebins.sh           sbin    xsds
bin          install        jdk   ldap       oracle_common  plugins  replacebins.sh.template  stage
cfgtoollogs  instantclient  jlib  lib        oraInst.loc    prereqs  root.sh                  sysman
config       inventory      js    ocm        oui            rda      root.sh.template         ucp
[oracle@MGOEM agent_13.3.0.0.0]$

The OMS instance base directory (typically, gc_inst) is maintained outside the middleware home.

[oracle@MGOEM agent_13.3.0.0.0]$ cd $OMS_INSTANCE_BASE; pwd; ls
/u01/app/oracle/gc_inst
em  user_projects
[oracle@MGOEM gc_inst]$

ORACLE_BASE

[oracle@MGOEM gc_inst]$  cd $ORACLE_BASE; pwd; ls
/u01/app/oracle
agent  bip  gc_inst  middleware  swlib
[oracle@MGOEM oracle]$

Inventory and Patches:

[oracle@MGOEM ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2015, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>13.8.0.0.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="oms13c1" LOC="/u01/app/oracle/middleware" TYPE="O" IDX="1"/>
<HOME NAME="agent13c1" LOC="/u01/app/oracle/agent/agent_13.3.0.0.0" TYPE="O" IDX="2"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@MGOEM ~]$

[oracle@MGOEM ~]$ $AGENT_HOME/OPatch/opatch lspatches
27839641;One-off
27369653;One-off
27244723;One-off
27074880;OPSS Bundle Patch 12.1.3.0.171124
26933408;One-off
25832897;One-off
25412962;
23519804;One-off
20882747;One-off
20442348;One-off
19982906;One-off
19345252;One-off
18814458;One-off
28042003;One-off
27419391;WLS PATCH SET UPDATE 12.1.3.0.180417
23527146;One-off
20741228;JDBC 12.1.3.1 BP1

OPatch succeeded.
[oracle@MGOEM ~]$

[oracle@MGOEM ~]$ $ORACLE_HOME/OPatch/opatch lspatches
27839641;One-off
27369653;One-off
27244723;One-off
27074880;OPSS Bundle Patch 12.1.3.0.171124
26933408;One-off
25832897;One-off
25412962;
23519804;One-off
20882747;One-off
20442348;One-off
19982906;One-off
19345252;One-off
18814458;One-off
28042003;One-off
27419391;WLS PATCH SET UPDATE 12.1.3.0.180417
23527146;One-off
20741228;JDBC 12.1.3.1 BP1

OPatch succeeded.
[oracle@MGOEM ~]$

[oracle@MGOEM ~]$ $ORACLE_HOME/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 13.8.0.0.0
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/middleware
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/middleware/oraInst.loc
OPatch version    : 13.8.0.0.0
OUI version       : 13.8.0.0.0
Log file location : /u01/app/oracle/middleware/cfgtoollogs/opatch/opatch2019-05-12_16-34-38PM_1.log


OPatch detects the Middleware Home as "/u01/app/oracle/middleware"

Lsinventory Output file location : /u01/app/oracle/middleware/cfgtoollogs/opatch/lsinv/lsinventory2019-05-12_16-34-38PM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: MGOEM
ARU platform id: 226
ARU platform description:: Linux_AMD64

[oracle@MGOEM ~]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
[oracle@MGOEM ~]$

[oracle@MGOEM ~]$ cat /u01/app/oracle/middleware/oraInst.loc
#Oracle Installer Location File Location
#Fri May 10 16:53:18 CEST 2019
inst_group=oinstall
inventory_loc=/u01/app/oraInventory
[oracle@MGOEM ~]$
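
For a quick operational check of this layout, the standard emctl status commands can be run from the homes defined in .bash_profile above (a minimal sketch):

# Check OMS and central agent status using the homes from .bash_profile
$OMS_HOME/bin/emctl status oms
$AGENT_HOME/bin/emctl status agent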

Reference:
DIRECTORY STRUCTURE AND LOCATIONS OF IMPORTANT TRACE AND LOG FILES OF ENTERPRISE MANAGER CLOUD CONTROL 13C

Overview of the Directories Created for an Enterprise Manager System

May 11, 2019

Create Mount Filesystem for Vagrant VirtualBox

Filed under: Uncategorized — mdinh @ 2:41 pm

Once again, I am using oravirt boxes.

If you just want to create the machine and not run the provisioning step, run this:

vagrant up

Since I don’t know ansible, it was much simpler to do the work manually.

Oracle Linux Server release 7.3

Review disks:

[root@MGOEM ~]# fdisk -l /dev/sd*

Disk /dev/sda: 52.4 GB, 52428800000 bytes, 102400000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000979b6

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   102399999    50150400   8e  Linux LVM

Disk /dev/sda1: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda2: 51.4 GB, 51354009600 bytes, 100300800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

### Disk /dev/sdb is not yet partitioned: it has no partition entries and no filesystem
Disk /dev/sdb: 187.9 GB, 187904819200 bytes, 367001600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@MGOEM ~]#

Create partition:

[root@MGOEM ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x37a8a8de.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-367001599, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-367001599, default 367001599):
Using default value 367001599
Partition 1 of type Linux and of size 175 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@MGOEM ~]#

Review disk: Linux System

[root@MGOEM ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 187.9 GB, 187904819200 bytes, 367001600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x37a8a8de

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   367001599   183499776   83  Linux
[root@MGOEM ~]#

Create Filesystem:

[root@MGOEM ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
11468800 inodes, 45874944 blocks
2293747 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2193620992
1400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@MGOEM ~]#

Create and mount /u01:

[root@MGOEM ~]# mkdir -p /u01
[root@MGOEM ~]# mount /dev/sdb1 /u01
[root@MGOEM ~]#
[root@MGOEM ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs              2.8G     0  2.8G   0% /dev
tmpfs                 2.8G     0  2.8G   0% /dev/shm
tmpfs                 2.8G  8.4M  2.8G   1% /run
tmpfs                 2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root    46G  2.1G   44G   5% /
/dev/sda1            1014M  167M  848M  17% /boot
vagrant               932G  283G  650G  31% /vagrant
sf_working            420G  139G  281G  33% /sf_working
media_patch           3.7T  513G  3.2T  14% /media/patch
media_swrepo          3.7T  513G  3.2T  14% /media/swrepo
sf_OracleSoftware     3.7T  513G  3.2T  14% /sf_OracleSoftware
media_shared_storage  932G  283G  650G  31% /media/shared_storage
tmpfs                 571M     0  571M   0% /run/user/1000
/dev/sdb1             173G   61M  164G   1% /u01
[root@MGOEM ~]#

Update /etc/fstab:

[root@MGOEM ~]# tail /etc/fstab
# Created by anaconda on Tue Apr 18 08:50:14 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol-root     /                       xfs     defaults        0 0
UUID=ed2996e5-e077-4e23-83a5-10418226a725 /boot                   xfs     defaults        0 0
/dev/mapper/ol-swap     swap                    swap    defaults        0 0
/swapfile1              swap                    swap    defaults        0 0
/dev/sdb1               /u01                    ext4    defaults        1 1
[root@MGOEM ~]#
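
To confirm the new entry works before the next reboot, the filesystem can be remounted straight from /etc/fstab; a minimal check, assuming /u01 is not in use:

# Append the entry only if it is not already present.
grep -q '^/dev/sdb1' /etc/fstab || echo '/dev/sdb1  /u01  ext4  defaults  1 1' >> /etc/fstab
# Remount from fstab and verify.
umount /u01
mount -a
df -h /u01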

May 7, 2019

Remove GRID Home After Upgrade

Filed under: 12c,Grid Infrastructure,RAC — mdinh @ 9:53 pm

The environment started with a GRID 12.1.0.1 installation, was upgraded to 18.3.0.0, and then patched out-of-place (OOP) to 18.6.0.0.

As a result, there are three GRID homes, and the 12.1.0.1 home is to be removed.

This demonstration is for the last node of the cluster; however, the actions performed are the same for all nodes.

Review existing patches for the Grid and Database homes:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/lspatches.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$
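
Based on the traced commands above, /media/patch/lspatches.sh is essentially the following; this is a reconstruction from the set -x output, not the exact script:

#!/bin/sh
set -x
# Grid home: source the environment and list installed patches.
. /media/patch/gi.env
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lspatches
# Database home: source the environment and list installed patches.
. /media/patch/hawk.env
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lspatches
exit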

Notice that the GRID home is /u01/18.3.0.0/grid_2 because this was the location suggested by the OOP process.
Based on experience, it might be better to name the GRID home after the actual version, i.e. /u01/18.6.0.0/grid.

Verify cluster state is [NORMAL]:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/crs_Query.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2056778364].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2056778364] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 29301631 29301643 29302264 ] have been applied on the local node. The release patch string is [18.6.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2056778364].
+ exit
[oracle@racnode-dc1-1 ~]$
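
Likewise, /media/patch/crs_Query.sh appears to be little more than the crsctl queries shown in the trace; a minimal reconstruction:

#!/bin/sh
set -x
# Source the grid environment so crsctl resolves against the active GRID home.
. /media/patch/gi.env
crsctl query crs releaseversion
crsctl query crs softwareversion
crsctl query crs softwarepatch
crsctl query crs releasepatch
crsctl query crs activeversion -f
exit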

Check Oracle Inventory:

[oracle@racnode-dc1-2 ~]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>

### GRID home (/u01/app/12.1.0.1/grid) to be removed.
========================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
========================================================================================

<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove the GRID home (/u01/app/12.1.0.1/grid) from the inventory. Use the -local flag to detach on the local node only and avoid any bug issues.

[oracle@racnode-dc1-2 ~]$ export ORACLE_HOME=/u01/app/12.1.0.1/grid
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16040 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
[oracle@racnode-dc1-2 ~]$

Verify GRID home was removed:

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>

### GRID home (/u01/app/12.1.0.1/grid) removed.
================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1" REMOVED="T"/>
================================================================================

</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove 12.1.0.1 directory:

[oracle@racnode-dc1-2 ~]$ sudo su -
Last login: Thu May  2 23:38:22 CEST 2019
[root@racnode-dc1-2 ~]# cd /u01/app/
[root@racnode-dc1-2 app]# ll
total 12
drwxr-xr-x  3 root   oinstall 4096 Apr 17 23:36 12.1.0.1
drwxrwxr-x 12 oracle oinstall 4096 Apr 30 18:05 oracle
drwxrwx---  5 oracle oinstall 4096 May  2 23:54 oraInventory
[root@racnode-dc1-2 app]# rm -rf 12.1.0.1/
[root@racnode-dc1-2 app]#
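
Since the same actions apply to every node, the detach and cleanup are simply repeated on the remaining node; a sketch, assuming the same paths on racnode-dc1-1:

# On racnode-dc1-1, as oracle: detach the 12.1.0.1 GRID home locally.
export ORACLE_HOME=/u01/app/12.1.0.1/grid
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
# Then, as root: delete the directory.
rm -rf /u01/app/12.1.0.1/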

Check the cluster:

[root@racnode-dc1-2 app]# logout
[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racnode-dc1-2 ~]$

Later, /u01/18.3.0.0/grid will be removed, too, if there are no issues with the most recent patch.
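
When that time comes, the removal should follow the same pattern as above; a sketch, assuming /u01/18.3.0.0/grid is no longer the active home on any node:

# On each node, as oracle: detach the old home from the inventory.
export ORACLE_HOME=/u01/18.3.0.0/grid
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
# Then, as root: delete the directory.
rm -rf /u01/18.3.0.0/grid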

May 6, 2019

Notes: Troubleshooting GRID opatchauto and Sample GRID OOP Log

Filed under: opatchauto — mdinh @ 9:44 pm

Troubleshooting opatchauto Issues in Grid Infrastructure Environment (Doc ID 2467266.1)

Failure during pre-patch execution
If there is any failure in the pre-patch step, verify the logs under <oracle base>/crsdata/<hostname>/crsconfig/crspatch_<hostname>_<timestamp>.log

Failure during patching execution
If there is any failure during the patching execution, review the opatch execution logs under the corresponding <oracle home>/cfgtoollogs/opatchauto location

Failure during post-patch execution
If there is any failure in the post-patch execution, review the logs under <oracle base>/crsdata/<hostname>/crsconfig/crspatch_<hostname>_<timestamp>.log.

Generally, the error is seen while starting the Clusterware. In that situation, troubleshoot Grid Infrastructure issues by referring to Doc ID 1050908.1.
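
A quick way to pull up the most recent of these logs on a node; a sketch, assuming ORACLE_BASE and ORACLE_HOME are set by the environment scripts used above and that the short hostname matches the crsdata directory name:

# Most recent pre/post-patch (crspatch) log for this host.
ls -t $ORACLE_BASE/crsdata/$(hostname -s)/crsconfig/crspatch_*.log | head -1
# Most recent opatchauto session log under the home being patched.
ls -t $ORACLE_HOME/cfgtoollogs/opatchauto/*.log | head -1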

How to debug opatchauto failures?
# export OPATCH_DEBUG=true
# opatchauto apply <patch location> -loglevel FINEST
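
Putting the two together, a debug run might look like the following; the patch location here is only a placeholder:

# Run as root. OPATCH_DEBUG plus -loglevel FINEST produces the most verbose output.
export OPATCH_DEBUG=true
export PATCH_HOME=/u01/stage/patch/Apr2019/29301687   # placeholder patch location
$GRID_HOME/OPatch/opatchauto apply $PATCH_HOME -loglevel FINEST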

ACTUAL: Grid_Infrastructure_Out_of_Place_12.2 (GI/DB SAME version)

[root@racnode-dc2-1 ~]# export PATCH_HOME=/u01/stage/patch/Apr2019/29301687
[root@racnode-dc2-1 ~]# $GRID_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.17

[root@racnode-dc2-1 ~]# $GRID_HOME/OPatch/opatchauto apply $PATCH_HOME -prepare-clone -logLevel FINEST

System initialization log file is /u01/app/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-04-29_05-24-36PM.log.
Session log file is /u01/app/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-04-29_05-26-26PM.log
Prepatch operation log file location: /u01/app/oracle/crsdata/racnode-dc2-1/crsconfig/crspatch_racnode-dc2-1_2019-04-29_05-33-30PM.log

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc2-1
RAC Home:/u01/app/oracle/12.2.0.1/db1
Version:12.2.0.1.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/stage/patch/Apr2019/29301687/29301676
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/stage/patch/Apr2019/29301687/26839277
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/stage/patch/Apr2019/29301687/28566910
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /u01/stage/patch/Apr2019/29301687/29314339
Log: /u01/app/oracle/12.2.0.1/db1_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-54PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29314424
Log: /u01/app/oracle/12.2.0.1/db1_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-54PM_1.log


Host:racnode-dc2-1
CRS Home:/u01/app/12.2.0.1/grid
Version:12.2.0.1.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/stage/patch/Apr2019/29301687/26839277
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/28566910
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29301676
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29314339
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29314424
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log


Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc2-1
Actual Home : /u01/app/oracle/12.2.0.1/db1
Version:12.2.0.1.0
Clone Home Path : /u01/app/oracle/12.2.0.1/db1_2

Host : racnode-dc2-1
Actual Home : /u01/app/12.2.0.1/grid
Version:12.2.0.1.0
Clone Home Path : /u01/app/12.2.0.1/grid_2


OPatchauto session completed at Mon Apr 29 18:03:48 2019
Time taken to complete the session 39 minutes, 16 seconds
[root@racnode-dc2-1 ~]#

ACTUAL: Grid_Infrastructure_Out_of_Place_18.6 (GI/DB DIFFERENT version)

+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

[root@racnode-dc1-1 ~]# export PATCH_HOME=/u01/patch/Apr2019/29301682
[root@racnode-dc1-1 ~]# $GRID_HOME/OPatch/opatchauto apply $PATCH_HOME -prepare-clone -logLevel FINEST

System initialization log file is /u01/18.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-04-30_05-06-34PM.log.
Session log file is /u01/18.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2019-04-30_05-08-04PM.log

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-1
CRS Home:/u01/18.3.0.0/grid
Version:18.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/patch/Apr2019/29301682/28435192
Reason: This patch is already been applied, so not going to apply again.

Patch: /u01/patch/Apr2019/29301682/28547619
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY applied:

Patch: /u01/patch/Apr2019/29301682/29301631
Log: /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-30_17-17-53PM_1.log

Patch: /u01/patch/Apr2019/29301682/29301643
Log: /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-30_17-17-53PM_1.log

Patch: /u01/patch/Apr2019/29301682/29302264
Log: /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-30_17-17-53PM_1.log


Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-1
Actual Home : /u01/18.3.0.0/grid
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid_2


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1

OPatchauto session completed at Tue Apr 30 17:27:21 2019
Time taken to complete the session 20 minutes, 52 seconds
[root@racnode-dc1-1 ~]#

May 5, 2019

What’s My Cluster Configuration

Filed under: 18c,Grid Infrastructure,RAC — mdinh @ 2:15 pm
[grid@ol7-183-node1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[grid@ol7-183-node1 ~]$ crsctl get cluster configuration
Name                : ol7-183-cluster
Configuration       : Cluster
Class               : Standalone Cluster
Type                : flex
The cluster is not extended.
--------------------------------------------------------------------------------
        MEMBER CLUSTER INFORMATION

      Name       Version        GUID                       Deployed Deconfigured
================================================================================
================================================================================

[grid@ol7-183-node1 ~]$ olsnodes -s -a -t
ol7-183-node1   Active  Hub     Unpinned
ol7-183-node2   Active  Hub     Unpinned

[grid@ol7-183-node1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [70732493] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28090564 28256701 ] have been applied on the local node. The release patch string is [18.3.0.0.0].

[grid@ol7-183-node1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [70732493].
[grid@ol7-183-node1 ~]$
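
The same checks can be bundled into a small script; a sketch, assuming /media/patch/gi.env as used above:

#!/bin/sh
set -x
# Source the grid environment, then report the cluster configuration and patch levels.
. /media/patch/gi.env
crsctl get cluster configuration
olsnodes -s -a -t
crsctl query crs releasepatch
crsctl query crs activeversion -f
exit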