Thinking Out Loud

January 10, 2022

HOW TO LOAD BALANCE RMAN RAC DATABASE BACKUP

Filed under: awk_sed_grep,RAC,RMAN — mdinh @ 11:49 pm

First, I will share the incorrect method, where the instance connections are hard-coded.

CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/passwd@inst1';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'sys/passwd@inst1';
CONFIGURE CHANNEL 3 DEVICE TYPE DISK CONNECT 'sys/passwd@inst2';
CONFIGURE CHANNEL 4 DEVICE TYPE DISK CONNECT 'sys/passwd@inst2';

The goal is to configure the RMAN backup with parallelism 4 and have the channels load-balance across the instances.

CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK CONNECT 'sys/***@DB_UNIQUE_NAME';
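
This works because the connect string no longer pins a channel to a specific instance; as long as the alias resolves through the SCAN (or any address list the listeners load-balance), each channel lands on whichever instance the listener hands out. A minimal tnsnames.ora sketch, with hypothetical host and domain names:

# Hypothetical tnsnames.ora entry - connecting through the SCAN lets the listeners spread the channels
DB_UNIQUE_NAME =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DB_UNIQUE_NAME)
    )
  )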

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jan 10 17:24:15 2022

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DB_NAME (DBID=453022715)

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name DB_UNIQUE_NAME are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/mnt/backups/DB_NAME/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK CONNECT '*';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DB_NAME_DATA/DB_UNIQUE_NAME/controlfile/snapcf_DB_NAME.f';

RMAN>

It’s that easy. Changing the parallelism will automatically load-balance the channels across all nodes.

Here is an example where parallelism is configured but the backup is not load-balanced.

All the channels are allocated to node1.

[oracle@host01 log]$ grep 'channel ORA_DISK_[1-9]: SID' backup_HAWK_level1_202201010300_Sat.log

channel ORA_DISK_1: SID=760 instance=HAWK1 device type=DISK
channel ORA_DISK_2: SID=956 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=1139 instance=HAWK1 device type=DISK
channel ORA_DISK_4: SID=1331 instance=HAWK1 device type=DISK

channel ORA_DISK_1: SID=760 instance=HAWK1 device type=DISK
channel ORA_DISK_2: SID=956 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=1139 instance=HAWK1 device type=DISK
channel ORA_DISK_4: SID=1331 instance=HAWK1 device type=DISK

channel ORA_DISK_1: SID=760 instance=HAWK1 device type=DISK
channel ORA_DISK_2: SID=956 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=1139 instance=HAWK1 device type=DISK
channel ORA_DISK_4: SID=1331 instance=HAWK1 device type=DISK

[oracle@host01 log]$

Here is the correct way, letting the database determine the node for each channel.

[oracle@host01 log]$ grep 'channel ORA_DISK_[1-9]: SID' backup_HAWK_level1_202201101400_Mon.log

channel ORA_DISK_1: SID=199 instance=HAWK2 device type=DISK
channel ORA_DISK_2: SID=2469 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=196 instance=HAWK1 device type=DISK
channel ORA_DISK_4: SID=1139 instance=HAWK2 device type=DISK

channel ORA_DISK_1: SID=2469 instance=HAWK1 device type=DISK
channel ORA_DISK_2: SID=196 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=1139 instance=HAWK2 device type=DISK
channel ORA_DISK_4: SID=199 instance=HAWK2 device type=DISK

channel ORA_DISK_1: SID=2469 instance=HAWK1 device type=DISK
channel ORA_DISK_2: SID=196 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=1139 instance=HAWK2 device type=DISK
channel ORA_DISK_4: SID=199 instance=HAWK2 device type=DISK

[oracle@host01 log]$
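
To quickly tally how many channels landed on each instance, the same grep can be piped through sort and uniq; a small sketch, assuming the same log naming as above:

[oracle@host01 log]$ grep -o 'instance=HAWK[12]' backup_HAWK_level1_202201101400_Mon.log | sort | uniq -c

Roughly equal counts per instance means the channels are being spread across the nodes.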

August 14, 2021

Validating RMAN Tape Backup

Filed under: RAC,Recovery,RMAN — mdinh @ 2:12 pm

Lately, I have been validating a lot of database backups to tape.

The validation is made easy because channels are configured in RMAN.

CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' FORMAT '%d_%U' PARMS 'SBT_LIBRARY=/var/opt/oracle/dbaas_acfs/$ORACLE_SID/opc/libopc.so, ENV=(OPC_PFILE=/var/opt/oracle/dbaas_acfs/$ORACLE_SID/opc/opc$ORACLE_SID.ora)' CONNECT '*';

CONFIGURE CHANNEL DEVICE TYPE DISK CONNECT '*';

Here is the generic RMAN script which has been used successfully for all databases so far.

Luckily, RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS is the same for all databases.

[oracle@rac01 dinh]$ rman checksyntax @restore_validate.rman

Recovery Manager: Release 12.2.0.1.0 - Production on Fri Aug 13 11:18:20 2021

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> set echo on
2> spool log to restore_validate.log
3> connect target;
4> show all;
5> restore spfile validate device type=SBT_TAPE;
6> restore controlfile validate device type=SBT_TAPE;
7> restore database until time "SYSDATE" validate device type=SBT_TAPE;
8> restore archivelog from time="SYSDATE-14" validate device type=SBT_TAPE;
9> exit
[oracle@rac01 dinh]$
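
Only the checksyntax run is shown above. Since the script spools its own log, the actual run can simply be backgrounded; a minimal sketch of how I would invoke it (an assumption, not taken from the output above), with nohup so an SSH timeout does not kill the hour-long validation:

[oracle@rac01 dinh]$ nohup rman @restore_validate.rman > /dev/null 2>&1 &
[oracle@rac01 dinh]$ tail -f restore_validate.log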

The database size is 2.4 TB.
There are 1,495 backup pieces.
Restore validate completed in ~1 hr.

--- Verify retention policy.
[oracle@rac01 dinh]$ grep -i "policy" restore_validate.log
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
[oracle@rac01 dinh]$ 

--- Check restore timing.
[oracle@rac01 dinh]$ grep "restore at" restore_validate.log
Starting restore at 2021-AUG-13 11:20:04
Finished restore at 2021-AUG-13 11:20:21
Starting restore at 2021-AUG-13 11:20:21
Finished restore at 2021-AUG-13 11:20:35
Starting restore at 2021-AUG-13 11:20:36
Finished restore at 2021-AUG-13 11:34:01
Starting restore at 2021-AUG-13 11:34:02
Finished restore at 2021-AUG-13 12:13:51
[oracle@rac01 dinh]$

--- Still unclear why archived logs from DISK are scanned.
[oracle@rac01 dinh]$ grep "scanning archived log" restore_validate.log|grep ORA
channel ORA_SBT_TAPE_1: scanning archived log +RECOC1/db_unique_name/ARCHIVELOG/2021_08_13/thread_1_seq_68216.70749.1080469891
channel ORA_SBT_TAPE_4: scanning archived log +RECOC1/db_unique_name/ARCHIVELOG/2021_08_13/thread_1_seq_68217.64086.1080473423
channel ORA_SBT_TAPE_2: scanning archived log +RECOC1/db_unique_name/ARCHIVELOG/2021_08_13/thread_1_seq_68218.10749.1080473497
channel ORA_SBT_TAPE_3: scanning archived log +RECOC1/db_unique_name/ARCHIVELOG/2021_08_13/thread_2_seq_70946.42152.1080473423
channel ORA_SBT_TAPE_2: scanning archived log +RECOC1/db_unique_name/ARCHIVELOG/2021_08_13/thread_2_seq_70947.12624.1080473499
[oracle@rac01 dinh]$

--- Verify All backup pieces are from TAPE.
[oracle@rac01 dinh]$ grep "piece handle" restore_validate.log|grep -v TAPE

--- There are 1495 backup pieces - this needs improvement.
[oracle@rac01 dinh]$ grep -c "piece handle" restore_validate.log
1495
[oracle@rac01 dinh]$

--- Examples of backup pieces.
[oracle@rac01 dinh]$ grep "piece handle" restore_validate.log|head
channel ORA_SBT_TAPE_1: piece handle=c-2010814236-20210813-16 tag=TAG20210813T103149
channel ORA_SBT_TAPE_1: piece handle=c-2010814236-20210813-16 tag=TAG20210813T103149
channel ORA_SBT_TAPE_1: piece handle=$ORACLE_SID_sa05a48k_1_1 tag=BKP_$ORACLE_SID1_202107310200
channel ORA_SBT_TAPE_3: piece handle=$ORACLE_SID_iu05si7j_1_1 tag=BKP_$ORACLE_SID1_202108070200
channel ORA_SBT_TAPE_1: piece handle=$ORACLE_SID_j105si7k_1_1 tag=BKP_$ORACLE_SID1_202108070200
channel ORA_SBT_TAPE_2: piece handle=$ORACLE_SID_sr05a496_1_1 tag=BKP_$ORACLE_SID1_202107310200
channel ORA_SBT_TAPE_3: piece handle=$ORACLE_SID_j305si7k_1_1 tag=BKP_$ORACLE_SID1_202108070200
channel ORA_SBT_TAPE_1: piece handle=$ORACLE_SID_j205si7k_1_1 tag=BKP_$ORACLE_SID1_202108070200
channel ORA_SBT_TAPE_4: piece handle=$ORACLE_SID_s905a48k_1_1 tag=BKP_$ORACLE_SID1_202107310200
channel ORA_SBT_TAPE_2: piece handle=$ORACLE_SID_j505si7k_1_1 tag=BKP_$ORACLE_SID1_202108070200
[oracle@rac01 dinh]$

[oracle@rac01 dinh]$ grep "piece handle" restore_validate.log|tail
channel ORA_SBT_TAPE_4: piece handle=$ORACLE_SID_5206cvrd_1_1 tag=BKP_$ORACLE_SID1_202108130730
channel ORA_SBT_TAPE_4: piece handle=$ORACLE_SID_5n06d6sm_1_1 tag=BKP_$ORACLE_SID1_202108130930
channel ORA_SBT_TAPE_2: piece handle=$ORACLE_SID_5e06d3bu_1_1 tag=BKP_$ORACLE_SID1_202108130830
channel ORA_SBT_TAPE_3: piece handle=$ORACLE_SID_5d06d3bu_1_1 tag=BKP_$ORACLE_SID1_202108130830
channel ORA_SBT_TAPE_3: piece handle=$ORACLE_SID_6106dac5_1_1 tag=BKP_$ORACLE_SID1_202108131030
channel ORA_SBT_TAPE_1: piece handle=$ORACLE_SID_5c06d3bu_1_1 tag=BKP_$ORACLE_SID1_202108130830
channel ORA_SBT_TAPE_4: piece handle=$ORACLE_SID_5m06d6sm_1_1 tag=BKP_$ORACLE_SID1_202108130930
channel ORA_SBT_TAPE_3: piece handle=$ORACLE_SID_5v06dac5_1_1 tag=BKP_$ORACLE_SID1_202108131030
channel ORA_SBT_TAPE_2: piece handle=$ORACLE_SID_5l06d6sm_1_1 tag=BKP_$ORACLE_SID1_202108130930
channel ORA_SBT_TAPE_1: piece handle=$ORACLE_SID_5u06dac5_1_1 tag=BKP_$ORACLE_SID1_202108131030
[oracle@rac01 dinh]$

July 15, 2020

Create 19c RAC Standby Using RMAN

Filed under: 19c,Dataguard,RAC — mdinh @ 11:55 pm

See RAC_19c_rman_duplicate_standby_same_sid.log

Confirmed!

*** Oracle Data Guard Broker and Static Service Registration (Doc ID 1387859.1)
Note: Static “_DGMGRL” entries are no longer needed as of Oracle Database 12.1.0.2 in Oracle Data Guard Broker configurations
that are managed by Oracle Restart, RAC One Node or RAC as the Broker will use the clusterware to restart an instance.
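
For reference, this is the kind of static listener.ora registration that is no longer required with Broker-managed restarts (hypothetical names; shown only to illustrate what can be dropped):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STBY_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
      (SID_NAME = STBY1)
    )
  )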

March 4, 2020

Mining gridSetupActions Log

Filed under: 19c,Grid Infrastructure,RAC,upgrade — mdinh @ 4:22 am

After completing a GI upgrade, what’s the most efficient way to mine the results?

Upgrade GI to 19.6: typical information provided from terminal

[oracle@ol7-122-rac1 ~]$ /u01/app/19.6.0.0/grid/gridSetup.sh -applyRU /u01/app/oracle/patch/30501910

Preparing the home to patch...

Applying the patch /u01/app/oracle/patch/30501910...
Successfully applied the patch.

The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2020-03-04_00-24-53AM/installerPatchActions_2020-03-04_00-24-53AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.6.0.0/grid/install/response/grid_2020-03-04_00-24-53AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2020-03-04_00-24-53AM/gridSetupActions2020-03-04_00-24-53AM.log

[oracle@ol7-122-rac1 ~]$

Example response file from 12.2 install:

[oracle@ol7-122-rac1 response]$ pwd
/u01/app/12.2.0.1/grid/install/response

[oracle@ol7-122-rac1 response]$ sdiff -iEZbWBs -w 150 gridsetup.rsp grid_*.rsp
INVENTORY_LOCATION=                                                       |     INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |     oracle.install.option=CRS_CONFIG
ORACLE_BASE=                                                              |     ORACLE_BASE=/u01/app/oracle
oracle.install.asm.OSDBA=                                                 |     oracle.install.asm.OSDBA=dba
oracle.install.asm.OSASM=                                                 |     oracle.install.asm.OSASM=dba
oracle.install.crs.config.gpnp.scanName=                                  |     oracle.install.crs.config.gpnp.scanName=ol7-122-scan
oracle.install.crs.config.gpnp.scanPort=                                  |     oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterConfiguration=                           |     oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |     oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=                                    |     oracle.install.crs.config.clusterName=ol7-122-cluster
oracle.install.crs.config.gpnp.configureGNS=                              |     oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |     oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=                                 |     oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=                                   |     oracle.install.crs.config.clusterNodes=ol7-122-rac1.localdomain:ol7-12
oracle.install.crs.config.networkInterfaceList=                           |     oracle.install.crs.config.networkInterfaceList=eth1:192.168.56.0:1,eth
oracle.install.asm.configureGIMRDataDG=                                   |     oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.useIPMI=                                        |     oracle.install.crs.config.useIPMI=false
oracle.install.asm.storageOption=                                         |     oracle.install.asm.storageOption=ASM
oracle.install.asmOnNAS.configureGIMRDataDG=                              |     oracle.install.asmOnNAS.configureGIMRDataDG=false
oracle.install.asm.diskGroup.name=                                        |     oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.redundancy=                                  |     oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.AUSize=                                      |     oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.disksWithFailureGroupNames=                  |     oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/oracleasm
oracle.install.asm.diskGroup.disks=                                       |     oracle.install.asm.diskGroup.disks=/dev/oracleasm/asm-disk3,/dev/oracl
oracle.install.asm.diskGroup.diskDiscoveryString=                         |     oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/*
oracle.install.asm.gimrDG.AUSize=                                         |     oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=                                          |     oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |     oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |     oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |     oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=                                            |     oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=                          |     oracle.install.crs.rootconfig.executeRootScript=false
[oracle@ol7-122-rac1 response]$

Review response file: compare original response file versus the one used for upgrade (grid_2020-03-04_00-24-53AM.rsp)

[oracle@ol7-122-rac1 response]$ pwd
/u01/app/19.6.0.0/grid/install/response

[oracle@ol7-122-rac1 response]$ ls -l
total 76
-rw-r--r--. 1 oracle oinstall 36450 Mar  4 00:38 grid_2020-03-04_00-24-53AM.rsp
-rw-r-----. 1 oracle oinstall 36221 Jan 19  2019 gridsetup.rsp
-rw-r-----. 1 oracle oinstall  1541 May 21  2016 sample.ccf

[oracle@ol7-122-rac1 response]$ sdiff -iEZbWBs -w 150 gridsetup.rsp grid_*.rsp
INVENTORY_LOCATION=                                                       |     INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |     oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |     ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |     oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |     oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |     oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=                                    |     oracle.install.crs.config.clusterName=ol7-122-cluster
oracle.install.crs.config.gpnp.configureGNS=                              |     oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |     oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=                                 |     oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=                                   |     oracle.install.crs.config.clusterNodes=ol7-122-rac2:,ol7-122-rac1:
oracle.install.crs.configureGIMR=                                         |     oracle.install.crs.configureGIMR=true
oracle.install.asm.configureGIMRDataDG=                                   |     oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=                                  |     oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.useIPMI=                                        |     oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=                                        |     oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.AUSize=                                      |     oracle.install.asm.diskGroup.AUSize=0
oracle.install.asm.gimrDG.AUSize=                                         |     oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=                                          |     oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |     oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |     oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |     oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=                                            |     oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=                          |     oracle.install.crs.rootconfig.executeRootScript=false
[oracle@ol7-122-rac1 response]$

Review log directory:

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ pwd
/u01/app/oraInventory/logs/GridSetupActions2020-03-04_00-24-53AM

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ ls -alrt
total 17988
-rw-r-----.  1 oracle oinstall   11578 Mar  4 00:31 installerPatchActions_2020-03-04_00-24-53AM.log
-rw-r-----.  1 oracle oinstall       0 Mar  4 00:31 gridSetupActions2020-03-04_00-24-53AM.err
drwxrwx---.  3 oracle oinstall      21 Mar  4 00:31 temp_ob
-rw-r-----.  1 oracle oinstall       0 Mar  4 00:38 oraInstall2020-03-04_00-24-53AM.err
-rw-r-----.  1 oracle oinstall     157 Mar  4 00:38 oraInstall2020-03-04_00-24-53AM.out
-rw-r-----.  1 oracle oinstall 9728749 Mar  4 00:39 gridSetupActions2020-03-04_00-24-53AM.out
-rw-r-----.  1 oracle oinstall       0 Mar  4 00:44 oraInstall2020-03-04_00-24-53AM.err.ol7-122-rac2
-rw-r-----.  1 oracle oinstall     142 Mar  4 00:44 oraInstall2020-03-04_00-24-53AM.out.ol7-122-rac2
-rw-r-----.  1 oracle oinstall   29328 Mar  4 02:05 time2020-03-04_00-24-53AM.log
-rw-r-----.  1 oracle oinstall 8624226 Mar  4 02:05 gridSetupActions2020-03-04_00-24-53AM.log
drwxrwx---. 12 oracle oinstall    4096 Mar  4 02:18 ..
drwxrwx---.  3 oracle oinstall    4096 Mar  4 03:20 .

Review .err files: a 0-byte file is good

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ ls -l *.err
-rw-r-----. 1 oracle oinstall 0 Mar  4 00:31 gridSetupActions2020-03-04_00-24-53AM.err
-rw-r-----. 1 oracle oinstall 0 Mar  4 00:38 oraInstall2020-03-04_00-24-53AM.err

Review grid action: for verification, grep the logs to compare the install option when the grid was configured versus when it was upgraded

[oracle@ol7-122-rac1 GridSetupActions2020-03-03_01-26-02AM]$ grep -i getInstallOption gridSetupActions*.log
INFO:  [Mar 3, 2020 1:26:05 AM] getInstallOption: CRS_CONFIG
[oracle@ol7-122-rac1 GridSetupActions2020-03-03_01-26-02AM]$

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ grep -i getInstallOption gridSetupActions*.log
INFO:  [Mar 4, 2020 12:32:07 AM] getInstallOption: UPGRADE
[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$

Check for distinct keywords:

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ grep -e '[[:upper:]]: ' gridSetupActions*.log | cut -d ":" -f1 | sort -u
   ACTION
          APPLICATION_ERROR
   CAUSE
INFO
Output
TaskUsersWithSameID
WARNING
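
A small variation on the same grep gives a rough count per keyword instead of just the distinct list (a sketch, same log directory assumed):

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ grep -e '[[:upper:]]: ' gridSetupActions*.log | cut -d ":" -f1 | sort | uniq -c | sort -rn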

Check APPLICATION_ERROR:

[oracle@ol7-122-rac1 GridSetupActions2020-03-04_00-24-53AM]$ grep -B3 -A1 APPLICATION_ERROR gridSetupActions*.log
INFO:  [Mar 4, 2020 12:35:27 AM] INFO: [Task.perform:873]
TaskCheckRPMPackageManager:RPM Package Manager database[TASKCHECKRPMPACKAGEMANAGER]:TASK_SUMMARY:FAILED:INFORMATION:INFORMATION:Total time taken []
          ERRORMSG(GLOBAL): PRVG-11250 : The check "RPM Package Manager database" was not performed because it needs 'root' user privileges.
          APPLICATION_ERROR: NodeResultsUnavailableException thrown when hasNodeResults() returns true
INFO:  [Mar 4, 2020 12:35:27 AM] INFO: [Task.perform:799]

Did you notice that I used a wildcard for the search?

It does not matter, since the logs for each task are typically in different directories.

This is one thing I noticed Oracle did correctly, as it’s much easier to run the same commands in any environment.

October 22, 2019

srvctl config all

Filed under: 18c,19c,RAC,srvctl — mdinh @ 1:19 pm

Learned something new today, and I am not sure if it’s a new feature.

Seems a lot easier to gather clusterware configuration using one command.

Works with srvctl version: 18.0.0.0.0 or higher.
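
For comparison, on older versions roughly the same picture takes several commands; a sketch, not an exhaustive list:

$ srvctl config database -d <db_unique_name>
$ srvctl config asm
$ srvctl config scan
$ srvctl config scan_listener
$ srvctl config network
$ srvctl config nodeapps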

19c

[oracle@ol7-19-rac2 ~]$ echo $ORACLE_HOME
/u01/app/19.0.0/grid

[oracle@ol7-19-rac2 ~]$ srvctl -version
srvctl version: 19.0.0.0.0

[oracle@ol7-19-rac2 ~]$ srvctl config all

Oracle Clusterware configuration details
========================================

Oracle Clusterware basic information
------------------------------------
  Operating system          Linux
  Name                      ol7-19-cluster
  Class                     STANDALONE
  Cluster nodes             ol7-19-rac1, ol7-19-rac2
  Version                   19.0.0.0.0
  Groups                    SYSOPER: SYSASM:dba SYSRAC:dba SYSDBA:dba
  OCR locations             +DATA
  Voting disk locations     DATA
  Voting disk file paths    /dev/oracleasm/asm-disk3

Cluster network configuration details
-------------------------------------
  Interface name  Type  Subnet           Classification
  eth1            IPV4  192.168.56.0/24  PUBLIC
  eth2            IPV4  192.168.1.0/24   PRIVATE, ASM

SCAN configuration details
--------------------------

SCAN "ol7-19-scan" details
++++++++++++++++++++++++++
  Name                ol7-19-scan
  IPv4 subnet         192.168.56.0/24
  DHCP server type    static
  End points          TCP:1521

  SCAN listeners
  --------------
  Name              VIP address
  LISTENER_SCAN1    192.168.56.105
  LISTENER_SCAN2    192.168.56.106
  LISTENER_SCAN3    192.168.56.107


ASM configuration details
-------------------------
  Mode             remote
  Password file    +DATA
  SPFILE           +DATA

  ASM disk group details
  ++++++++++++++++++++++
  Name  Redundancy
  DATA  EXTERN

Database configuration details
==============================

Database "ora.cdbrac.db" details
--------------------------------
  Name                ora.cdbrac.db
  Type                RAC
  Version             19.0.0.0.0
  Role                PRIMARY
  Management          AUTOMATIC
  policy
  SPFILE              +DATA
  Password file       +DATA
  Groups              OSDBA:dba OSOPER:oper OSBACKUP:dba OSDG:dba OSKM:dba OSRAC:dba
  Oracle home         /u01/app/oracle/product/19.0.0/dbhome_1
[oracle@ol7-19-rac2 ~]$

18c

[oracle@rac1 Desktop]$ srvctl -version
srvctl version: 18.0.0.0.0

[oracle@rac1 Desktop]$ srvctl config all

Oracle Clusterware configuration details                                        
========================================                                        

Oracle Clusterware basic information                                            
------------------------------------                                            
  Operating system         Linux                                           
  Name                     scan                                            
  Class                    STANDALONE                                      
  Cluster nodes            rac1, rac2                                      
  Version                  18.0.0.0.0                                      
  Groups                   SYSOPER:dba SYSASM:dba SYSRAC:dba SYSDBA:dba    
  Cluster home             /u01/app/18.0.0/grid                            
  OCR locations            +CRS                                            
  Voting disk locations    /dev/asm-disk8, /dev/asm-disk9, /dev/asm-disk7  

Cluster network configuration details                                           
-------------------------------------                                           
  Interface name  Type  Subnet           Classification  
  eth1            IPV4  10.1.1.0/24      PRIVATE, ASM    
  eth0            IPV4  192.168.11.0/24  PUBLIC          

SCAN configuration details                                                      
--------------------------                                                      

SCAN "scan.localdomain" details                                                 
+++++++++++++++++++++++++++++++                                                 
  Name                scan.localdomain  
  IPv4 subnet         192.168.11.0/24   
  DHCP server type    static            
  End points          TCP:1521          

  SCAN listeners                                                                
  --------------                                                                
  Name        VIP address    
  LISTENER    192.168.11.60  


ASM configuration details                                                       
-------------------------                                                       
  Mode             remote  
  Password file    +RAC    
  SPFILE           +RAC    

  ASM disk group details                                                        
  ++++++++++++++++++++++                                                        
  Name  Redundancy  
  CRS   NORMAL      
  DATA  EXTERN      
  FRA   EXTERN      
  RAC   EXTERN      

Database configuration details                                                  
==============================                                                  

Database "ora.uptst.db" details                                                 
-------------------------------                                                 
  Name                ora.uptst.db                                                   
  Type                RAC                                                            
  Version             18.0.0.0.0                                                     
  Role                PRIMARY                                                        
  Management          AUTOMATIC                                                      
  policy                                                                             
  SPFILE              +DATA                                                          
  Password file       +DATA                                                          
  Groups              OSDBA:dba OSOPER:dba OSBACKUP:dba OSDG:dba OSKM:dba OSRAC:dba  
  Oracle home         /u01/app/oracle/product/18.0.0/db_home1                        

Database "ora.uptst2.db" details                                                
--------------------------------                                                
  Name                 ora.uptst2.db                                        
  Type                 RAC                                                  
  Version              12.1.0.2.0                                           
  Role                 PRIMARY                                              
  Management policy    AUTOMATIC                                            
  SPFILE               +DATA                                                
  Password file        +DATA                                                
  Groups               OSDBA:dba OSOPER:dba OSBACKUP:dba OSDG:dba OSKM:dba  
  Oracle home          /u01/app/oracle/product/12.1.0.2_1                   
[oracle@rac1 Desktop]$ 

July 23, 2019

Check Cluster Resources Where Target != State

Filed under: 12.2,RAC — mdinh @ 3:32 pm

Current version.

[oracle@racnode-dc2-1 patch]$ cat /etc/oratab
#Backup file is  /u01/app/12.2.0.1/grid/srvm/admin/oratab.bak.racnode-dc2-1 line added by Agent
-MGMTDB:/u01/app/12.2.0.1/grid:N
hawk1:/u01/app/oracle/12.2.0.1/db1:N
+ASM1:/u01/app/12.2.0.1/grid:N          # line added by Agent
[oracle@racnode-dc2-1 patch]$

Kill database instance process.

[oracle@racnode-dc2-1 patch]$ ps -ef|grep pmon
oracle   13542     1  0 16:09 ?        00:00:00 asm_pmon_+ASM1
oracle   27663     1  0 16:39 ?        00:00:00 ora_pmon_hawk1
oracle   29401 18930  0 16:40 pts/0    00:00:00 grep --color=auto pmon
[oracle@racnode-dc2-1 patch]$
[oracle@racnode-dc2-1 patch]$ kill -9 27663
[oracle@racnode-dc2-1 patch]$

Check cluster resource – close but no cigar (false positive)

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.proxy_advm
               OFFLINE OFFLINE      racnode-dc2-1            STABLE
               OFFLINE OFFLINE      racnode-dc2-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      3        OFFLINE OFFLINE                               STABLE
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$

Check cluster resource – BINGO!

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$

Another example:

[oracle@racnode-dc2-1 ~]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
ora.DATA.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
ora.FRA.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 ~]$

Learned something here.

[oracle@racnode-dc2-1 ~]$ crsctl stat res -v -w 'TYPE = ora.database.type'
NAME=ora.hawk.db
TYPE=ora.database.type
LAST_SERVER=racnode-dc2-1
STATE=ONLINE on racnode-dc2-1
TARGET=ONLINE
CARDINALITY_ID=1
OXR_SECTION=0
RESTART_COUNT=0
***** FAILURE_COUNT=1
***** FAILURE_HISTORY=1564015051:racnode-dc2-1
ID=ora.hawk.db 1 1
INCARNATION=4
***** LAST_RESTART=07/25/2019 02:39:38
***** LAST_STATE_CHANGE=07/25/2019 02:39:51
STATE_DETAILS=Open,HOME=/u01/app/oracle/12.2.0.1/db1
INTERNAL_STATE=STABLE
TARGET_SERVER=racnode-dc2-1
RESOURCE_GROUP=
INSTANCE_COUNT=2

LAST_SERVER=racnode-dc2-2
STATE=ONLINE on racnode-dc2-2
TARGET=ONLINE
CARDINALITY_ID=2
OXR_SECTION=0
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.hawk.db 2 1
INCARNATION=1
LAST_RESTART=07/25/2019 02:21:45
LAST_STATE_CHANGE=07/25/2019 02:21:45
STATE_DETAILS=Open,HOME=/u01/app/oracle/12.2.0.1/db1
INTERNAL_STATE=STABLE
TARGET_SERVER=racnode-dc2-2
RESOURCE_GROUP=
INSTANCE_COUNT=2

[oracle@racnode-dc2-1 ~]$
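
To pull just the failure-related fields out of that verbose output, the same command can be filtered with grep; a quick sketch:

[oracle@racnode-dc2-1 ~]$ crsctl stat res -v -w 'TYPE = ora.database.type' | grep -E 'NAME=|LAST_SERVER=|FAILURE_COUNT=|FAILURE_HISTORY=|LAST_RESTART=|LAST_STATE_CHANGE='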

Check cluster resource – sanity check.

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
[oracle@racnode-dc2-1 patch]$
[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w 'TYPE = ora.database.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc2-1            Open,HOME=/u01/app/o
                                                             racle/12.2.0.1/db1,S
                                                             TABLE
      2        ONLINE  ONLINE       racnode-dc2-2            Open,HOME=/u01/app/o
                                                             racle/12.2.0.1/db1,S
                                                             TABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$
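
Because the filter prints nothing when everything is healthy, it drops neatly into a scheduled check. A minimal sketch (the script name, GRID_HOME handling, and mail address are my assumptions):

#!/bin/bash
# check_crs.sh - hypothetical wrapper: alert when any resource has TARGET=ONLINE but STATE!=ONLINE
# assumes GRID_HOME is already set in the environment (e.g. via a gi.env style profile)
OUT=$($GRID_HOME/bin/crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)')
if [ -n "$OUT" ]; then
  echo "$OUT" | mailx -s "CRS resources not ONLINE on $(hostname)" dba@example.com
fi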

June 7, 2019

RAC Installation Logs

Filed under: 12c,RAC — mdinh @ 5:24 pm

Note to self on the locations of the installation and DB creation logs for a 2-node RAC.

Oracle Universal Installer logs for GI/DB:

[oracle@racnode-dc1-1 logs]$ pwd; ls -lhrt
/u01/app/oraInventory/logs
total 2.3M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-13-PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-35-PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.8K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-13-PM.log
-rw-r----- 1 oracle oinstall  153 Jun  7 17:06 oraInstall2019-06-07_04-57-13-PM.out
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
-rw-r----- 1 oracle oinstall   47 Jun  7 17:09 time2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:09 oraInstall2019-06-07_05-09-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:13 oraInstall2019-06-07_05-09-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall   29 Jun  7 17:14 oraInstall2019-06-07_05-09-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:14 AttachHome2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  507 Jun  7 17:14 silentInstall2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  14K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 9.5K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  496 Jun  7 17:15 oraInstall2019-06-07_05-09-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 17:15 installActions2019-06-07_05-09-01PM.log
[oracle@racnode-dc1-1 logs]$

silentInstall*.log

[oracle@racnode-dc1-1 logs]$ grep successful silent*.log

silentInstall2019-06-07_04-39-01PM.log:The installation of Oracle Grid Infrastructure 12c was successful.

silentInstall2019-06-07_05-09-01PM.log:The installation of Oracle Database 12c was successful.

[oracle@racnode-dc1-1 logs]$

installActions*.log

[oracle@racnode-dc1-1 logs]$ grep "Using paramFile" install*.log

installActions2019-06-07_04-39-01PM.log:INFO: Using paramFile: /u01/stage/12.1.0.2/grid/install/oraparam.ini

installActions2019-06-07_05-09-01PM.log:Using paramFile: /u01/stage/12.1.0.2/database/install/oraparam.ini

[oracle@racnode-dc1-1 logs]$
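
Beyond the success messages, it is worth scanning the same logs for anything that went wrong; a hedged sketch (the patterns are a guess at what matters, not taken from these logs):

[oracle@racnode-dc1-1 logs]$ grep -ciE 'error|fatal|severe' installActions*.log
[oracle@racnode-dc1-1 logs]$ grep -iE 'fatal|severe' installActions*.log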

Run root script after installation:
$GRID_HOME/root.sh

[oracle@racnode-dc1-1 install]$ pwd; ls -lhrt root*.log
/u01/app/12.1.0.2/grid/install
-rw------- 1 oracle oinstall 7.4K Jun  7 16:51 root_racnode-dc1-1_2019-06-07_16-44-37.log
[oracle@racnode-dc1-1 install]$

Run configToolAllCommands:
$GRID_HOME/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/u01/stage/rsp/configtoolallcommands.rsp

[oracle@racnode-dc1-1 oui]$ pwd; ls -lhrt
/u01/app/12.1.0.2/grid/cfgtoollogs/oui
total 1.2M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r--r-- 1 oracle oinstall    0 Jun  7 16:57 configActions2019-06-07_04-57-10-PM.err
-rw-r--r-- 1 oracle oinstall  13K Jun  7 17:06 configActions2019-06-07_04-57-10-PM.log
-rw------- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw------- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
[oracle@racnode-dc1-1 oui]$

dbca

[oracle@racnode-dc1-1 dbca]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca
total 116K
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:02 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM.lck
drwxrwxr-x 3 oracle oinstall 4.0K Jun  7 17:02 _mgmtdb
-rwxrwxr-x 1 oracle oinstall 105K Jun  7 17:03 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM
drwxr-x--- 2 oracle oinstall 4.0K Jun  7 17:23 hawk
[oracle@racnode-dc1-1 dbca]$

dbca _mgmtdb

[oracle@racnode-dc1-1 _mgmtdb]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb
total 19M
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 16:58 trace.log.lck
-rwxrwxr-x 1 oracle oinstall  18M Jun  7 16:59 tempControl.ctl
-rwxrwxr-x 1 oracle oinstall  349 Jun  7 16:59 CloneRmanRestore.log
-rwxrwxr-x 1 oracle oinstall  596 Jun  7 16:59 cloneDBCreation.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:00 rmanUtil
-rwxrwxr-x 1 oracle oinstall 2.1K Jun  7 17:00 plugDatabase.log
-rwxrwxr-x 1 oracle oinstall  428 Jun  7 17:01 dbmssml_catcon_12271.lst
-rwxrwxr-x 1 oracle oinstall 3.5K Jun  7 17:01 dbmssml0.log
-rwxrwxr-x 1 oracle oinstall  396 Jun  7 17:01 postScripts.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:01 lockAccount.log
-rwxrwxr-x 1 oracle oinstall  442 Jun  7 17:01 catbundleapply_catcon_12348.lst
-rwxrwxr-x 1 oracle oinstall 3.9K Jun  7 17:01 catbundleapply0.log
-rwxrwxr-x 1 oracle oinstall  424 Jun  7 17:01 utlrp_catcon_12416.lst
-rwxrwxr-x 1 oracle oinstall 9.2K Jun  7 17:02 utlrp0.log
-rwxrwxr-x 1 oracle oinstall  964 Jun  7 17:02 postDBCreation.log
-rwxrwxr-x 1 oracle oinstall  737 Jun  7 17:02 OraGI12Home1__mgmtdb_creation_checkpoint.xml
-rwxrwxr-x 1 oracle oinstall  877 Jun  7 17:02 _mgmtdb.log
-rwxrwxr-x 1 oracle oinstall 1.1M Jun  7 17:02 trace.log
-rwxrwxr-x 1 oracle oinstall 1.3K Jun  7 17:02 DetectOption.log
drwxrwxr-x 2 oracle oinstall 4.0K Jun  7 17:03 vbox_rac_dc1

[oracle@racnode-dc1-1 _mgmtdb]$ tail _mgmtdb.log
Completing Database Creation
DBCA_PROGRESS : 68%
DBCA_PROGRESS : 79%
DBCA_PROGRESS : 89%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb.
Database Information:
Global Database Name:_mgmtdb
System Identifier(SID):-MGMTDB
[oracle@racnode-dc1-1 _mgmtdb]$

dbca hawk

[oracle@racnode-dc1-1 hawk]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/hawk
total 34M
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 trace.log.lck
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 rmanUtil
-rw-r----- 1 oracle oinstall  18M Jun  7 17:17 tempControl.ctl
-rw-r----- 1 oracle oinstall  384 Jun  7 17:17 CloneRmanRestore.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 17:20 cloneDBCreation.log
-rw-r----- 1 oracle oinstall    8 Jun  7 17:20 postScripts.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:21 CreateClustDBViews.log
-rw-r----- 1 oracle oinstall    6 Jun  7 17:21 lockAccount.log
-rw-r----- 1 oracle oinstall 1.3K Jun  7 17:22 postDBCreation.log
-rw-r----- 1 oracle oinstall  511 Jun  7 17:23 OraDB12Home1_hawk_creation_checkpoint.xml
-rw-r----- 1 oracle oinstall  24K Jun  7 17:23 hawk.log
-rw-r----- 1 oracle oinstall  16M Jun  7 17:23 trace.log

[oracle@racnode-dc1-1 hawk]$ tail hawk.log
DBCA_PROGRESS : 73%
DBCA_PROGRESS : 76%
DBCA_PROGRESS : 85%
DBCA_PROGRESS : 94%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/hawk.
Database Information:
Global Database Name:hawk
System Identifier(SID) Prefix:hawk
[oracle@racnode-dc1-1 hawk]$

May 19, 2019

Shocking opatchauto resume works after auto-logout

Filed under: 12c,opatchauto,RAC — mdinh @ 5:36 pm

WARNING: Please don’t try this at home or in a production environment.

With that being said, patching was for DR production.

Oracle Interim Patch Installer version 12.2.0.1.16

Patching a 2-node RAC cluster; node1 completed successfully.

The rationale for using -norestart is that there was an issue at one time where datapatch was applied on node1.

Don’t implement Active Data Guard and have the database Start options set to mount:

# crsctl stat res -t -w '(TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      2        ONLINE  INTERMEDIATE node2              Mounted (Closed),STABLE
ora.dbproddr.dbdr.svc
      2        ONLINE  OFFLINE                                          STABLE
--------------------------------------------------------------------------------

$ srvctl status database -d dbproddr -v
Instance dbproddr1 is running on node node1 with online services dbdr. Instance status: Open,Readonly.
Instance dbproddr2 is running on node node2. Instance status: Mounted (Closed).

Run opatchauto and Ctrl-C because the session is stuck.

node2 ~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019

node2 ~ # $GRID_HOME/OPatch/opatchauto apply $PATCH_TOP_DIR/28833531 -norestart

OPatchauto session is initiated at Thu May 16 20:20:24 2019

System initialization log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-05-16_08-20-26PM.log.

Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_08-20-47PM.log
The id for this session is K43Y

Executing OPatch prereq operations to verify patch applicability on home /u02/app/12.1.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.1.0/db
Patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db

Patch applicability verified successfully on home /u02/app/12.1.0/grid


Verifying SQL patch applicability on home /u01/app/oracle/product/12.1.0/db
"/bin/sh -c 'cd /u01/app/oracle/product/12.1.0/db; ORACLE_HOME=/u01/app/oracle/product/12.1.0/db ORACLE_SID=dbproddr2 /u01/app/oracle/product/12.1.0/db/OPatch/datapatch -prereq -verbose'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be analyzed by manually retrying the same command.

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db


Preparing to bring down database service on home /u01/app/oracle/product/12.1.0/db
Successfully prepared home /u01/app/oracle/product/12.1.0/db to bring down database service


Bringing down CRS service on home /u02/app/12.1.0/grid
Prepatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
CRS service brought down successfully on home /u02/app/12.1.0/grid


Performing prepatch operation on home /u01/app/oracle/product/12.1.0/db
Prepatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db


Start applying binary patch on home /u01/app/oracle/product/12.1.0/db
Binary patch applied successfully on home /u01/app/oracle/product/12.1.0/db


Performing postpatch operation on home /u01/app/oracle/product/12.1.0/db
Postpatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db


Start applying binary patch on home /u02/app/12.1.0/grid

Binary patch applied successfully on home /u02/app/12.1.0/grid


Starting CRS service on home /u02/app/12.1.0/grid





*** Ctrl-C as shown below ***
^C
OPatchauto session completed at Thu May 16 21:41:58 2019
*** Time taken to complete the session 81 minutes, 34 seconds ***

opatchauto failed with error code 130

This is not good, as the session disconnected while I was troubleshooting in another session.

node2 ~ # timed out waiting for input: auto-logout

Even though the opatchauto session was terminated, the cluster upgrade state is [NORMAL] rather than [ROLLING PATCH].

node2 ~ # crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [323461694].

node2 ~ # crsctl stat res -t -w '(TARGET != ONLINE) or (STATE != ONLINE)'
node2 ~ # crsctl stat res -t -w 'TYPE = ora.database.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      1        ONLINE  ONLINE       node1              Open,Readonly,STABLE
      2        ONLINE  ONLINE       node2              Open,Readonly,STABLE
--------------------------------------------------------------------------------

At this point, I was not sure what to do since everything looked good and online.

A colleague helping me with the troubleshooting stated the patch completed successfully, and the main question was whether we needed to try “opatchauto resume”.

However, I was not comfortable with the outcome, so I tried opatchauto resume, and it worked like magic.

Reconnect and opatchauto resume

mdinh@node2 ~ $ sudo su - 
~ # . /home/oracle/working/dinh/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM4
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u02/app/12.1.0/grid
ORACLE_HOME=/u02/app/12.1.0/grid
Oracle Instance alive for sid "+ASM4"
~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019/
~ # $GRID_HOME/OPatch/opatchauto resume

OPatchauto session is initiated at Thu May 16 22:03:09 2019
Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_10-03-10PM.log
Resuming existing session with id K43Y

Starting CRS service on home /u02/app/12.1.0/grid
Postpatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
CRS service started successfully on home /u02/app/12.1.0/grid


Preparing home /u01/app/oracle/product/12.1.0/db after database service restarted

OPatchauto is running in norestart mode. PDB instances will not be checked for database on the current node.
No step execution required.........
 

Trying to apply SQL patch on home /u01/app/oracle/product/12.1.0/db
SQL patch applied successfully on home /u01/app/oracle/product/12.1.0/db

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:node2
RAC Home:/u01/app/oracle/product/12.1.0/db
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/patches/Jan2019/28833531/28729220
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log


Host:node2
CRS Home:/u02/app/12.1.0/grid
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28729220
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log


Patching session reported following warning(s): 
_________________________________________________

[WARNING] The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host'node2' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.

[WARNING] The database instances will not be brought up under the 'norestart' option. The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host'node2' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.


OPatchauto session completed at Thu May 16 22:10:01 2019
Time taken to complete the session 6 minutes, 52 seconds
~ # 
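
To double-check what actually landed, a sketch of the verification I would run afterwards (assumes the OPatch locations shown above and the standard 12.1 dba_registry_sqlpatch view; not taken from the session output):

$ /u02/app/12.1.0/grid/OPatch/opatch lspatches
$ /u01/app/oracle/product/12.1.0/db/OPatch/opatch lspatches
$ # from the database home environment on an open instance (assumption), confirm the SQL side:
$ echo "select patch_id, status, action_time from dba_registry_sqlpatch;" | sqlplus -s "/ as sysdba"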

Logs:

oracle@node2:/u02/app/12.1.0/grid/cfgtoollogs/crsconfig
> ls -alrt
total 508
drwxr-x--- 2 oracle oinstall   4096 Nov 23 02:15 oracle
-rwxrwxr-x 1 oracle oinstall 167579 Nov 23 02:15 rootcrs_node2_2018-11-23_02-07-58AM.log
drwxrwxr-x 9 oracle oinstall   4096 Apr 10 12:05 ..

opatchauto apply - Prepatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  33020 May 16 20:22 crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================

Mysterious log file - unknown where this log came from, as it was not part of my terminal output.
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  86983 May 16 21:42 crspatch_node2_2019-05-16_08-27-35PM.log
====================================================================================================

-rwxrwxr-x 1 oracle oinstall  56540 May 16 22:06 srvmcfg1.log
-rwxrwxr-x 1 oracle oinstall  26836 May 16 22:06 srvmcfg2.log
-rwxrwxr-x 1 oracle oinstall  21059 May 16 22:06 srvmcfg3.log
-rwxrwxr-x 1 oracle oinstall  23032 May 16 22:08 srvmcfg4.log

opatchauto resume - Postpatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  64381 May 16 22:09 crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================

Prepatch operation log file.

> tail -20 crspatch_node2_2019-05-16_08-21-16PM.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS '
2019-05-16 20:22:04: Removing file /tmp/fileTChFoS
2019-05-16 20:22:04: Successfully removed file: /tmp/fileTChFoS
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed

2019-05-16 20:22:04: checkpoint ROOTCRS_POSTPATCH_OOP_REQSTEPS does not exist
2019-05-16 20:22:04: Done - Performing pre-pathching steps required for GI stack
2019-05-16 20:22:04: Resetting cluutil_trc_suff_pp to 0
2019-05-16 20:22:04: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS"
2019-05-16 20:22:04: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil0.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS '
2019-05-16 20:22:04: Removing file /tmp/fileDoYyQA
2019-05-16 20:22:04: Successfully removed file: /tmp/fileDoYyQA
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed

*** 2019-05-16 20:22:04: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:SUCCESS ***
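
The same checkpoint can also be queried on demand using the cluutil call the log itself uses, e.g. to confirm ROOTCRS_PREPATCH really is at SUCCESS (run as oracle, adjusting the paths to your environment):

> /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_PREPATCH -status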

Mysterious log file – crspatch_node2_2019-05-16_08-27-35PM.log

2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL
2019-05-16 21:42:00: ###### Begin DIE Stack Trace ######
2019-05-16 21:42:00:     Package         File                 Line Calling   
2019-05-16 21:42:00:     --------------- -------------------- ---- ----------
2019-05-16 21:42:00:  1: main            rootcrs.pl            267 crsutils::dietrap
2019-05-16 21:42:00:  2: crsutils        crsutils.pm          1631 main::__ANON__
2019-05-16 21:42:00:  3: crsutils        crsutils.pm          1586 crsutils::system_cmd_capture_noprint
2019-05-16 21:42:00:  4: crsutils        crsutils.pm          9098 crsutils::system_cmd_capture
2019-05-16 21:42:00:  5: crspatch        crspatch.pm           988 crsutils::startFullStack
2019-05-16 21:42:00:  6: crspatch        crspatch.pm          1121 crspatch::performPostPatch
2019-05-16 21:42:00:  7: crspatch        crspatch.pm           212 crspatch::crsPostPatch
2019-05-16 21:42:00:  8: main            rootcrs.pl            276 crspatch::new
2019-05-16 21:42:00: ####### End DIE Stack Trace #######

2019-05-16 21:42:00: ROOTCRS_POSTPATCH checkpoint has failed
2019-05-16 21:42:00:      ckpt: -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil4.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH '
2019-05-16 21:42:00: Removing file /tmp/filewniUim
2019-05-16 21:42:00: Successfully removed file: /tmp/filewniUim
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil5.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status '
2019-05-16 21:42:00: Removing file /tmp/fileK1Tyw6
2019-05-16 21:42:00: Successfully removed file: /tmp/fileK1Tyw6
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: The 'ROOTCRS_POSTPATCH' status is FAILED
2019-05-16 21:42:00: ROOTCRS_POSTPATCH state is FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil6.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL '
2019-05-16 21:42:00: Removing file /tmp/filej20epR
2019-05-16 21:42:00: Successfully removed file: /tmp/filej20epR
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL '
2019-05-16 21:42:01: Removing file /tmp/filely834C
2019-05-16 21:42:01: Successfully removed file: /tmp/filely834C
2019-05-16 21:42:01: pipe exit code: 0
2019-05-16 21:42:01: /bin/su successfully executed

*** 2019-05-16 21:42:01: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL ***
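
These FAIL checkpoints are presumably what the subsequent opatchauto resume picked up from. The resume itself, per the postpatch log file location referenced earlier, is simply run as root (a sketch of the call):

~ # /u02/app/12.1.0/grid/OPatch/opatchauto resume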

Postpatch operation log file.

> tail -20 crspatch_node2_2019-05-16_10-03-17PM.log
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START '
2019-05-16 22:09:59: Removing file /tmp/file0IogVl
2019-05-16 22:09:59: Successfully removed file: /tmp/file0IogVl
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed

2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:START
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil8.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS '
2019-05-16 22:09:59: Removing file /tmp/fileXDCkuM
2019-05-16 22:09:59: Successfully removed file: /tmp/fileXDCkuM
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed

*** 2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:SUCCESS ***
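
With both ROOTCRS_PREPATCH and ROOTCRS_POSTPATCH back at SUCCESS, a quick sanity check of the patch level on the node does not hurt (the same crsctl queries used elsewhere in this blog):

> /u02/app/12.1.0/grid/bin/crsctl query crs softwarepatch
> /u02/app/12.1.0/grid/bin/crsctl query crs activeversion -f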

Happy patching, and hopefully the upcoming patching of the primary will be just as seamless.

May 7, 2019

Remove GRID Home After Upgrade

Filed under: 12c,Grid Infrastructure,RAC — mdinh @ 9:53 pm

The environment started with a GRID 12.1.0.1 installation, was upgraded to 18.3.0.0, and was then patched out-of-place (OOP) to 18.6.0.0.

As a result, there are three GRID homes, and the 12.1.0.1 home is the one to remove.

This demonstration covers the last node of the cluster; however, the actions performed are the same for all nodes.

Review existing patch for Grid and Database homes:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/lspatches.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$

Notice that the GRID home is /u01/18.3.0.0/grid_2 because that was the location suggested by the OOP process.
Based on experience, it might be better to name the GRID home after the actual version, i.e. /u01/18.6.0.0/grid.
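
For readers without the helper scripts, lspatches.sh used above is likely just the two environment files plus opatch calls, along these lines (a sketch only; gi.env and hawk.env are assumed to merely set the grid and database environments, respectively):

#!/bin/bash -x
# lspatches.sh (sketch) - list installed patches for the grid and database homes
. /media/patch/gi.env
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lspatches
. /media/patch/hawk.env
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lspatches
exit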

Verify cluster state is [NORMAL]:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/crs_Query.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2056778364].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2056778364] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 29301631 29301643 29302264 ] have been applied on the local node. The release patch string is [18.6.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2056778364].
+ exit
[oracle@racnode-dc1-1 ~]$
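
Similarly, crs_Query.sh is likely little more than the grid environment file followed by the crsctl queries shown in its output (a sketch, assuming gi.env only sets ORACLE_SID, GRID_HOME/ORACLE_HOME and PATH):

#!/bin/bash -x
# crs_Query.sh (sketch) - report clusterware version and patch levels on the local node
. /media/patch/gi.env
crsctl query crs releaseversion
crsctl query crs softwareversion
crsctl query crs softwarepatch
crsctl query crs releasepatch
crsctl query crs activeversion -f
exit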

Check Oracle Inventory:

[oracle@racnode-dc1-2 ~]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>

### GRID home (/u01/app/12.1.0.1/grid) to be removed.
========================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
========================================================================================

<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$
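
Before detaching anything, it is worth double-checking which home is registered as the active CRS home; a quick grep against the inventory confirms it is the OOP home and not the one about to be removed:

[oracle@racnode-dc1-2 ~]$ grep 'CRS="true"' /u01/app/oraInventory/ContentsXML/inventory.xml
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>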

Remove the GRID home (/u01/app/12.1.0.1/grid). Use the -local flag, which limits the operation to the local node, to avoid any bug issues.

[oracle@racnode-dc1-2 ~]$ export ORACLE_HOME=/u01/app/12.1.0.1/grid
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16040 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
[oracle@racnode-dc1-2 ~]$

Verify GRID home was removed:

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>

### GRID home (/u01/app/12.1.0.1/grid) removed.
================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1" REMOVED="T"/>
================================================================================

</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove 12.1.0.1 directory:

[oracle@racnode-dc1-2 ~]$ sudo su -
Last login: Thu May  2 23:38:22 CEST 2019
[root@racnode-dc1-2 ~]# cd /u01/app/
[root@racnode-dc1-2 app]# ll
total 12
drwxr-xr-x  3 root   oinstall 4096 Apr 17 23:36 12.1.0.1
drwxrwxr-x 12 oracle oinstall 4096 Apr 30 18:05 oracle
drwxrwx---  5 oracle oinstall 4096 May  2 23:54 oraInventory
[root@racnode-dc1-2 app]# rm -rf 12.1.0.1/
[root@racnode-dc1-2 app]#

Check the cluster:

[root@racnode-dc1-2 app]# logout
[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racnode-dc1-2 ~]$

Later, /u01/18.3.0.0/grid will be removed, too, if there are no issues with the most recent patch.
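
When that time comes, the steps should mirror the ones above, just pointed at the 18.3.0.0 home (a sketch only, to be run on each node and only after confirming nothing still references that home):

[oracle@racnode-dc1-2 ~]$ export ORACLE_HOME=/u01/18.3.0.0/grid
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
[oracle@racnode-dc1-2 ~]$ sudo rm -rf /u01/18.3.0.0/grid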

May 5, 2019

What’s My Cluster Configuration

Filed under: 18c,Grid Infrastructure,RAC — mdinh @ 2:15 pm
[grid@ol7-183-node1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[grid@ol7-183-node1 ~]$ crsctl get cluster configuration
Name                : ol7-183-cluster
Configuration       : Cluster
Class               : Standalone Cluster
Type                : flex
The cluster is not extended.
--------------------------------------------------------------------------------
        MEMBER CLUSTER INFORMATION

      Name       Version        GUID                       Deployed Deconfigured
================================================================================
================================================================================

[grid@ol7-183-node1 ~]$ olsnodes -s -a -t
ol7-183-node1   Active  Hub     Unpinned
ol7-183-node2   Active  Hub     Unpinned

[grid@ol7-183-node1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [70732493] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28090564 28256701 ] have been applied on the local node. The release patch string is [18.3.0.0.0].

[grid@ol7-183-node1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [70732493].
[grid@ol7-183-node1 ~]$
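
The handful of queries above roll up nicely into a small helper in the same style as the scripts used in earlier posts (a sketch; the script name is hypothetical and gi.env is assumed to set the grid environment as shown):

#!/bin/bash -x
# cluster_config.sh (hypothetical name) - summarize cluster configuration, node roles, and patch level
. /media/patch/gi.env
crsctl get cluster configuration
olsnodes -s -a -t
crsctl query crs releasepatch
crsctl query crs activeversion -f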