Thinking Out Loud

March 2, 2022

Validating RMAN Backup For Restore

Filed under: awk_sed_grep,RMAN — mdinh @ 11:01 pm

A backup is only good if it can be used to restore.

Lately, I have been performing a lot of RMAN backups and validations.

In summary:

Restore validate completed in 0:24:17 (h:m:s),
comprising 39 ARCH, 1 LEVEL0, 3 LEVEL1, and 2 TAG20220302T121110 (controlfile autobackup) pieces.
--- The only reason I am providing host info is because grep -A does not work on AIX!
Host: AIX dbhost01 1 7 00C7DE504B00

--- RMAN restore script:
restore_validate.rman
spool log to restore_validate.log
set echo on
connect target;
show all;
restore spfile validate;
restore controlfile validate;
restore database until time "SYSDATE" check logical validate;
restore archivelog from time "SYSDATE-1" check logical validate;
report schema;
exit

--- RMAN configuration:
Recovery Manager: Release 11.2.0.4.0 - Production on Wed Mar 2 15:59:34 2022

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: BANANA (DBID=2937483440)

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name BANANA are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE SBT_TAPE PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/opt/dpsapps/dbappagent/lib/lib64/libddboostora.so,SBT_PARMS=(CONFIG_FILE=/home/oracle/idpa_ddbea.config)';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/product/11.2.0/11.2.0.4/dbs/snapcf_BANANA.f'; # default

RMAN>

--- Run RMAN restore_validate:
oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ export NLS_DATE_FORMAT='YYYY-MON-DD HH24:MI:SS'

oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ nohup rman @restore_validate.rman > /tmp/restore_validate.rman_$ORACLE_SID.log 2>&1 &
[1] 2359590

oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ jobs
[1]+  Running                 nohup rman @restore_validate.rman > /tmp/restore_validate.rman_$ORACLE_SID.log 2>&1 &

oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $
[1]+  Done                    nohup rman @restore_validate.rman > /tmp/restore_validate.rman_$ORACLE_SID.log 2>&1

--- Check policy:
oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ grep -i "policy" restore_validate.log
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';

--- Check restore timing:
oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ grep "restore at" restore_validate.log
Starting restore at 2022-MAR-02 14:57:58
Finished restore at 2022-MAR-02 14:58:01
Starting restore at 2022-MAR-02 14:58:01
Finished restore at 2022-MAR-02 14:58:04
Starting restore at 2022-MAR-02 14:58:04
Finished restore at 2022-MAR-02 15:21:11
Starting restore at 2022-MAR-02 15:21:11
Finished restore at 2022-MAR-02 15:22:15
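
The 0:24:17 in the summary is the span from the first start to the last finish above. A minimal sketch to compute it, assuming the run does not cross midnight (pure awk arithmetic, so no GNU date is needed on AIX):

--- Compute elapsed (h:m:s) from first start and last finish:
s=$(grep 'Starting restore at' restore_validate.log|head -1|awk '{print $NF}')
f=$(grep 'Finished restore at' restore_validate.log|tail -1|awk '{print $NF}')
d=$(( $(echo $f|awk -F: '{print $1*3600+$2*60+$3}') - $(echo $s|awk -F: '{print $1*3600+$2*60+$3}') ))
printf '%d:%02d:%02d (h:m:s)\n' $((d/3600)) $((d%3600/60)) $((d%60))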

--- Check number of backup pieces:
oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ grep -c "piece handle" restore_validate.log
45

--- Backup pieces are tagged LEVEL0, LEVEL1, or ARCH.
--- Check number of backup pieces per tag (the 275 below counts lines without 'tag='):
oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ awk -F 'tag=' '{print $2}' restore_validate.log|sort|uniq -c
 275
  39 ARCH
   1 LEVEL0
   3 LEVEL1
   2 TAG20220302T121110

oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ grep -c TAG20220302T121110 restore_validate.log
2

oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ grep -c 'LEVEL0$' restore_validate.log
1

oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ grep -c 'LEVEL1$' restore_validate.log
3

oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ grep -c 'ARCH$' restore_validate.log
39

--- Without anchoring the pattern to end of line ('ARCH$'), the results will be incorrect:
grep 'ARCH' restore_validate.log|head
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';

--- This awk one-liner is a replacement for grep -A, which is not available on AIX.
oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $ awk 'c-->0;$0~s{if(b)for(c=b+1;c>1;c--)print r[(NR-c+1)%b];print;c=a}b{r[NR%b]=$0}' b=0 a=30 s="schema for database with db_unique_name" restore_validate.log

Report of database schema for database with db_unique_name BANANA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    2565     SYSTEM               ***     /oradata/BANANA/datafile/system_01.dbf
2    4667     SYSAUX               ***     /oradata/BANANA/datafile/sysaux_01.dbf
3    1300     UNDOTBS1             ***     /oradata/BANANA/datafile/undotbs1_01.dbf
4    50       EMSPROD_TS           ***     /oradata/BANANA/datafile/emsprod_ts_01.dbf
5    1650     MODPROD_TS           ***     /oradata/BANANA/datafile/modprod_ts_01.dbf
6    2039     AVAIL                ***     /oradata/BANANA/datafile/avail_01.dbf
7    32767    PROD001_TS           ***     /oradata/BANANA/datafile/PROD001_ts_05.dbf
8    2548     AUDIT_TBS            ***     /oradata/BANANA/datafile/audit_tbs_01.dbf
9    512      USERS                ***     /oradata/BANANA/datafile/users_01.dbf
10   30720    PROD001_TS           ***     /oradata/BANANA/datafile/PROD001_ts_04.dbf
11   30720    PROD001_TS           ***     /oradata/BANANA/datafile/PROD001_ts_03.dbf
12   32767    PROD001_TS           ***     /oradata/BANANA/datafile/PROD001_ts_02.dbf
13   30720    PROD001_TS           ***     /oradata/BANANA/datafile/PROD001_ts_01.dbf
14   25536    PROD001_TS           ***     /oradata/BANANA/datafile/PROD001_ts_06.dbf
15   25472    PROD001_TS           ***     /oradata/BANANA/datafile/PROD001_ts_07.dbf
16   25920    PROD001_TS           ***     /oradata/BANANA/datafile/PROD001_ts_08.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
3    4807     TEMP                 30720       /oradata/BANANA/datafile/temp_01.db

Recovery Manager complete.
oracle@dbhost01 ~/working/dinh/rman_restore (BANANA) $
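
When only trailing context is needed, sed can also stand in for grep -A on AIX. A simpler sketch that prints the first matching line plus the next 30 lines:

sed -n '/schema for database with db_unique_name/,$p' restore_validate.log | head -31

This prints from the first match to end of file and lets head cap the output, so unlike the awk one-liner it only handles a single match.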

January 22, 2022

Simplify A Complicated Process Using sed

Filed under: automation,awk_sed_grep,shell scripting — mdinh @ 4:08 am

For every PDB, there is a Perl script used to report tablespace free space for that PDB.

While I am not able to change how the process was implemented, I can make it easier.

Here is the current process.

Edit the script "tablespace_free_PDB_NAME.pl".

Change the PDB name at the below SQL command:
"alter session set container=<Your PDB name_1>"
 
Rename the script to match your PDB name 
like "tablespace_free_<Your PDB name>.pl".

The above instructions work but are time-consuming, not scalable, and error-prone.

Here’s a demo of how to simplify the process.

1. Create template temp_tablespace_free_PDB.pl:

[oracle@ol7-19-dg1 ~]$ cat temp_tablespace_free_PDB.pl
alter session set container=vPDB;

2. Export variable PDB with <Your PDB name>

[oracle@ol7-19-dg1 ~]$ export PDB=SOAP

3. Create tablespace_free_<Your PDB name>.pl

[oracle@ol7-19-dg1 ~]$ ls tablespace_free_$PDB.pl
ls: cannot access tablespace_free_SOAP.pl: No such file or directory

[oracle@ol7-19-dg1 ~]$ sed "s/vPDB/$PDB/g" temp_tablespace_free_PDB.pl > tablespace_free_$PDB.pl

[oracle@ol7-19-dg1 ~]$ ls tablespace_free_$PDB.pl
tablespace_free_SOAP.pl
[oracle@ol7-19-dg1 ~]$

[oracle@ol7-19-dg1 ~]$ cat tablespace_free_$PDB.pl
alter session set container=SOAP;
[oracle@ol7-19-dg1 ~]$

The above solution is better but far from perfect.

If there are a dozen PDBs to implement, then the manual work will have to be done a dozen times.

Here is an example using an array and a for loop.

There are 2 PDBs: SOAP and SCUM.

Here is the template:

[oracle@ol7-19-dg1 ~]$ cat temp_tablespace_free_PDB.pl
alter session set container=vPDB;
select sysdate from dual;
[oracle@ol7-19-dg1 ~]$

1. Create script to loop through list of PDBs:

[oracle@ol7-19-dg1 ~]$ cat create_tablespace_free_PDB.sh
#!/bin/bash
array=( SOAP SCUM )
for i in "${array[@]}"
do
  echo "$i"
  export PDB=$i
  sed "s/vPDB/$PDB/g" temp_tablespace_free_PDB.pl > tablespace_free_$PDB.pl
  ls -l tablespace_free_$PDB.pl
done
exit
[oracle@ol7-19-dg1 ~]$

2. Run create_tablespace_free_PDB.sh:

[oracle@ol7-19-dg1 ~]$ ./create_tablespace_free_PDB.sh
SOAP
-rw-r--r--. 1 oracle oinstall 60 Jan 22 03:52 tablespace_free_SOAP.pl
SCUM
-rw-r--r--. 1 oracle oinstall 60 Jan 22 03:52 tablespace_free_SCUM.pl
[oracle@ol7-19-dg1 ~]$

3. Review results:

[oracle@ol7-19-dg1 ~]$ cat tablespace_free_SOAP.pl
alter session set container=SOAP;
select sysdate from dual;
[oracle@ol7-19-dg1 ~]$

[oracle@ol7-19-dg1 ~]$ cat tablespace_free_SCUM.pl
alter session set container=SCUM;
select sysdate from dual;
[oracle@ol7-19-dg1 ~]$
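
To avoid editing the script whenever the PDB list changes, the array can be replaced with positional parameters. A sketch (hypothetical variant, not part of the original demo):

#!/bin/bash
# Hypothetical variant: pass PDB names as arguments instead of editing the array.
# Usage: ./create_tablespace_free_PDB.sh SOAP SCUM ...
for PDB in "$@"
do
  echo "$PDB"
  sed "s/vPDB/$PDB/g" temp_tablespace_free_PDB.pl > tablespace_free_$PDB.pl
  ls -l tablespace_free_$PDB.pl
done
exit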

January 13, 2022

Complicated Commingled Database Environment

Filed under: awk_sed_grep,linux,RMAN,shell scripting — mdinh @ 11:16 pm

I have been reviewing RMAN RAC backups for an environment with a total of 15 non-production and production databases on the same host, excluding APX and MGMTDB.

That’s not a big deal, as I once had to manage 28 databases residing on the same host, right?

I am just too lazy, and it is too tedious, to change the RMAN configuration one database at a time.

Luckily, there is a naming convention: non-production instances end with T1 and production instances end with P1.

This allows me to make the same changes to non-production and production in 2 steps.

The goal is to configure RMAN PARALLELISM 2 for NON-PROD, PARALLELISM 4 for PROD, and a consistent RECOVERY WINDOW OF 14 DAYS for both.

### Current configuration is inconsistent across databases:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;

CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 2;
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 4;

====================
### NON-PROD: 
====================

--- RMAN commands: cat configure.rman:
set echo on
connect target;
show all;
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 2;
show all;
exit

--- Let's make sure the instances are correct:
$ ps -ef|grep [p]mon|egrep -v 'ASM|APX|MGMTDB'|cut -d_ -f3|grep -Ev '\P1'|sort
DB01T1
DB02T1
DB03T1

--- Make the change:
$ for db in $(ps -ef|grep [p]mon|egrep -v 'ASM|APX|MGMTDB'|cut -d_ -f3|grep -Ev '\P1'|sort); do echo 'RMAN configure' $db; . oraenv <<< $db; rman @configure.rman; done;

====================
### PROD:
====================

--- RMAN commands: cat configure.rman:
set echo on
connect target;
show all;
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 4;
show all;
exit

--- Let's make sure the instances are correct:
$ ps -ef|grep [p]mon|egrep -v 'ASM|APX|MGMTDB'|cut -d_ -f3|grep -E '\P1'|sort
DB01P1
DB02P1
DB03P1

--- Make the change:
$ for db in $(ps -ef|grep [p]mon|egrep -v 'ASM|APX|MGMTDB'|cut -d_ -f3|grep -E '\P1'|sort); do echo 'RMAN configure' $db; . oraenv <<< $db; rman @configure.rman; done;
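
--- Verify the changes afterwards (a sketch reusing the same instance loop with read-only RMAN commands):
$ for db in $(ps -ef|grep [p]mon|egrep -v 'ASM|APX|MGMTDB'|cut -d_ -f3|sort); do echo '###' $db; . oraenv <<< $db; rman target / <<< 'show retention policy; show device type;' | grep CONFIGURE; done;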

January 10, 2022

How To Load Balance RMAN RAC Database Backup

Filed under: awk_sed_grep,RAC,RMAN — mdinh @ 11:49 pm

First, I will share the incorrect method, since it is hard-coded.

CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/passwd@inst1';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'sys/passwd@inst1';
CONFIGURE CHANNEL 3 DEVICE TYPE DISK CONNECT 'sys/passwd@inst2';
CONFIGURE CHANNEL 4 DEVICE TYPE DISK CONNECT 'sys/passwd@inst2';

The goal is to configure RMAN backup with parallelism 4 and load balancing.

CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK CONNECT 'sys/***@DB_UNIQUE_NAME';

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jan 10 17:24:15 2022

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DB_NAME (DBID=453022715)

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name DB_UNIQUE_NAME are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/mnt/backups/DB_NAME/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK CONNECT '*';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DB_NAME_DATA/DB_UNIQUE_NAME/controlfile/snapcf_DB_NAME.f';

RMAN>

It’s that easy. Changing parallelism will automatically load balance across all nodes.

Here is an example where parallelism is configured but the backup is not load balanced.

All the channels are allocated to node1 (instance HAWK1).

[oracle@host01 log]$ grep 'channel ORA_DISK_[1-9]: SID' backup_HAWK_level1_202201010300_Sat.log

channel ORA_DISK_1: SID=760 instance=HAWK1 device type=DISK
channel ORA_DISK_2: SID=956 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=1139 instance=HAWK1 device type=DISK
channel ORA_DISK_4: SID=1331 instance=HAWK1 device type=DISK

channel ORA_DISK_1: SID=760 instance=HAWK1 device type=DISK
channel ORA_DISK_2: SID=956 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=1139 instance=HAWK1 device type=DISK
channel ORA_DISK_4: SID=1331 instance=HAWK1 device type=DISK

channel ORA_DISK_1: SID=760 instance=HAWK1 device type=DISK
channel ORA_DISK_2: SID=956 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=1139 instance=HAWK1 device type=DISK
channel ORA_DISK_4: SID=1331 instance=HAWK1 device type=DISK

[oracle@host01 log]$

Here is the correct way, letting the database determine the node.

[oracle@host01 log]$ grep 'channel ORA_DISK_[1-9]: SID' backup_HAWK_level1_202201101400_Mon.log

channel ORA_DISK_1: SID=199 instance=HAWK2 device type=DISK
channel ORA_DISK_2: SID=2469 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=196 instance=HAWK1 device type=DISK
channel ORA_DISK_4: SID=1139 instance=HAWK2 device type=DISK

channel ORA_DISK_1: SID=2469 instance=HAWK1 device type=DISK
channel ORA_DISK_2: SID=196 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=1139 instance=HAWK2 device type=DISK
channel ORA_DISK_4: SID=199 instance=HAWK2 device type=DISK

channel ORA_DISK_1: SID=2469 instance=HAWK1 device type=DISK
channel ORA_DISK_2: SID=196 instance=HAWK1 device type=DISK
channel ORA_DISK_3: SID=1139 instance=HAWK2 device type=DISK
channel ORA_DISK_4: SID=199 instance=HAWK2 device type=DISK

[oracle@host01 log]$
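
A quick way to confirm the spread is to count allocated channels per instance. A sketch against the same log (counts derived from the excerpt above):

$ grep 'channel ORA_DISK_[1-9]: SID' backup_HAWK_level1_202201101400_Mon.log | awk -F'instance=' '{print $2}' | awk '{print $1}' | sort | uniq -c
      6 HAWK1
      6 HAWK2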

April 6, 2021

Detect Linux Host Restart

Filed under: awk_sed_grep,linux,shell scripting — mdinh @ 3:15 am

Some time ago, I blogged about Monitor Linux Host Restart.

The simple solution: How to email admins automatically after a Linux server starts?

Here is the example from root’s cron:

# crontab -l
@reboot su oracle -c '/home/oracle/scripts/host_restart_alert.sh' > /tmp/host_restart_alert.out 2>&1

A shell script is used because mail cannot be sent from the local host and needs to be sent from a remote host.

#!/bin/bash -x
MAILFROM=
MAILTO=
SUBJECT="Node reboot detected for $(hostname)"
EMAILMESSAGE="$(hostname) was restarted `uptime -p| awk -F'up' '{print $2}'` ago at `uptime -s`"

# uptime -p reports in minutes, so sleep for at least 60s after host restart
sleep 63

ssh oracle@remotehost /bin/bash <<EOF
/home/oracle/scripts/send_email.sh "$EMAILMESSAGE" "$SUBJECT" "$MAILFROM" "$MAILTO"
EOF

exit
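
send_email.sh on the remote host is site-specific and not shown. A hypothetical minimal version, assuming mailx is installed and configured there:

#!/bin/bash
# Hypothetical send_email.sh: $1=message, $2=subject, $3=from, $4=to
echo "$1" | mailx -s "$2" -r "$3" "$4"
exit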

Why is there a need to detect host restarts, and isn’t there already monitoring for the host?

This is Oracle Exadata Cloud@Customer (ExaCC) environment.

When Oracle support performs patching, they do not provide any sort of communication or status, and monitoring is disabled for all hosts beforehand.

From OPatchAuto to Patch a GI/RAC Environment:

After the patching is complete and your servers are restarted, you should check your product software to verify that the issue has been resolved.

This is why there is a need to detect and be notified of server restarts.

March 27, 2021

Cleanup Trace Files For Multiple Oracle Homes

Filed under: adrci,awk_sed_grep,linux,oracle — mdinh @ 4:36 pm

I know what you are probably thinking. What’s the big deal and how many homes can there be?

For Exadata Cloud, I recalled seeing as many as 18 database homes.

As shown below, there are 5 database homes with version 12.2 and 1 database home with version 19.0.

# dbaascli dbhome info
DBAAS CLI version 21.1.1.0.1
Executing command dbhome info
Enter a homename or just press enter if you want details of all homes

1.HOME_NAME=OraHome101
  HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_4
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0.200714
  DBs installed=
   OH Backup=NOT Configured 

2.HOME_NAME=OraHome100
  HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_7
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0.200714
  DBs installed=*****
   Agent DB IDs=d21b07df-20f2-439e-bc40-78a9597af362
 OH Backup=NOT Configured

3.HOME_NAME=OraHome105_12201_dbru200714_0
  HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_6
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0.200714
  DBs installed=******
   Agent DB IDs=f7d46615-a223-4002-9270-fa69465a7f2a
 OH Backup=NOT Configured

4.HOME_NAME=OraHome102_12201_dbru200714_0
  HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_3
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0.200714
  DBs installed=*****
   Agent DB IDs=dceed071-9655-4c84-bef4-74b20180c99b
 OH Backup=NOT Configured

5.HOME_NAME=OraHome101_12201_dbru200714_0
  HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_2
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0.200714
  DBs installed=*******
   Agent DB IDs=b2a5220d-844b-49b6-9351-7c72cf3c9d9b
 OH Backup=NOT Configured

6.HOME_NAME=OraHome100_19800_dbru200714_0
  HOME_LOC=/u02/app/oracle/product/19.0.0.0/dbhome_2
  VERSION=19.8.0.0
  PATCH_LEVEL=19.8.0.0
  DBs installed=********
   Agent DB IDs=feedb0e0-2d10-4db7-997a-a78e4ab083ef

Checking oratab for Oracle Homes

$ sort -u -t : -k 2,2 /etc/oratab | grep -v "^#" | awk -F ":" '{print $2}'
/u01/app/19.0.0.0/grid
/u02/app/oracle/product/12.2.0/dbhome_2
/u02/app/oracle/product/12.2.0/dbhome_3
/u02/app/oracle/product/12.2.0/dbhome_4
/u02/app/oracle/product/12.2.0/dbhome_6
/u02/app/oracle/product/12.2.0/dbhome_7
/u02/app/oracle/product/19.0.0.0/dbhome_2

Here is the crontab schedule:

00 01 * * * find /u01/app/grid/diag/crs/*/crs/trace -name "*.tr?" -mtime +30 -exec rm -f {} \;
00 01 * * * find /u02/app/oracle/product/*/*/rdbms/audit -name "*.aud" -mtime +366 -exec rm -f {} \;
00 01 * * * find /u02/app/oracle/product/*/*/rdbms/log -name "*.tr?" -mtime +200 -exec rm -f {} \;
00 01 * * * find /u02/app/oracle/product/*/*/rdbms/log -name "cdmp*" -mtime +200 -exec rm -rf {} \;
00 04 * * * find /u02/app/oracle/diag/rdbms/*/*/cdump -name "core*" -mtime +200 -exec rm -rf {} \;
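
Before scheduling deletions like these, it is worth previewing the candidates; a sketch using the same find criteria with -print instead of -exec rm:

$ find /u02/app/oracle/product/*/*/rdbms/log -name "*.tr?" -mtime +200 -print | head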

Here is the explanation for what (*) represents, with examples:

00 01 * * * find /u01/app/grid/diag/crs/*/crs/trace -name "*.tr?" -mtime +30 -exec rm -f {} \;

ls -ld /u01/app/grid/diag/crs/*/crs/trace
* = hostname

Example:
$ ls -ld /u01/app/grid/diag/crs/*/crs/trace
drwxrwxr-x 2 grid oinstall 135168 Mar 26 18:40 /u01/app/grid/diag/crs/hostname/crs/trace

==============================

00 01 * * * find /u02/app/oracle/product/*/*/rdbms/audit -name "*.aud" -mtime +366 -exec rm -f {} \;

ls -ld /u02/app/oracle/product/*/*/rdbms/audit
*/* = version/dbhome

Example:
$ ls -ld /u02/app/oracle/product/*/*/rdbms/audit
drwxr-xr-x 9 oracle oinstall  614400 Mar 26 18:32 /u02/app/oracle/product/12.2.0/dbhome_2/rdbms/audit
drwxr-xr-x 2 oracle oinstall  253952 Mar 26 18:40 /u02/app/oracle/product/12.2.0/dbhome_3/rdbms/audit
drwxr-xr-x 2 oracle oinstall  294912 Mar 26 18:32 /u02/app/oracle/product/12.2.0/dbhome_4/rdbms/audit
drwxr-xr-x 4 oracle oinstall   94208 Mar 26 18:32 /u02/app/oracle/product/12.2.0/dbhome_6/rdbms/audit
drwxr-xr-x 2 oracle oinstall    4096 Mar  1 02:31 /u02/app/oracle/product/12.2.0/dbhome_7/rdbms/audit
drwxr-xr-x 3 oracle oinstall 5783552 Mar 26 18:32 /u02/app/oracle/product/19.0.0.0/dbhome_2/rdbms/audit

==============================

00 01 * * * find /u02/app/oracle/product/*/*/rdbms/log -name "*.tr?" -mtime +200 -exec rm -f {} \;

ls -l /u02/app/oracle/product/*/*/rdbms/log/*.tr?
*/* = version/dbhome

Example:
$ ls -l /u02/app/oracle/product/*/*/rdbms/log/*.tr?
-rw-r----- 1 oracle asmadmin 868 Feb 19 17:41 /u02/app/oracle/product/12.2.0/dbhome_2/rdbms/log/*******2_ora_57506.trc
-rw-r----- 1 oracle asmadmin 868 Dec  4 18:06 /u02/app/oracle/product/12.2.0/dbhome_2/rdbms/log/*******2_ora_66404.trc
-rw-r----- 1 oracle asmadmin 862 Mar 24 19:38 /u02/app/oracle/product/12.2.0/dbhome_3/rdbms/log/*****2_ora_217755.trc
-rw-r----- 1 oracle asmadmin 869 Feb 18 21:51 /u02/app/oracle/product/12.2.0/dbhome_4/rdbms/log/*****2_ora_351349.trc
-rw-r----- 1 oracle asmadmin 867 Feb 19 17:41 /u02/app/oracle/product/12.2.0/dbhome_4/rdbms/log/*****2_ora_57519.trc
-rw-r----- 1 oracle asmadmin 866 Mar  1 20:01 /u02/app/oracle/product/12.2.0/dbhome_6/rdbms/log/******2_ora_167170.trc
-rw-r----- 1 oracle asmadmin 831 Mar  1 02:31 /u02/app/oracle/product/12.2.0/dbhome_7/rdbms/log/*****2_ora_314160.trc

==============================

00 04 * * * find /u02/app/oracle/diag/rdbms/*/*/cdump -name "core*" -mtime +200 -exec rm -rf {} \;

ls -ld /u02/app/oracle/diag/rdbms/*/*/cdump
*/* = db_unique_name/db_name

Example:
$ ls -ld /u02/app/oracle/diag/rdbms/*/*/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep  3  2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep  2  2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep 21  2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Feb 17 02:35 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep 21  2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Feb 18 21:51 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep 25 07:13 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump

It’s also possible to use adrci to configure SHORTP_POLICY and LONGP_POLICY.

If new homes are created, then would SHORTP_POLICY and LONGP_POLICY need to be updated for the new homes?
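
Here is a sketch of setting both policies across every ADR home with adrci (the hour values below are only examples):

$ adrci exec="show homes" | grep -v 'ADR Homes:' | while read adr_home
do
  adrci exec="set home $adr_home; set control (SHORTP_POLICY = 720); set control (LONGP_POLICY = 4320)"
done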

Alternatively, download and use purgeLogs: Cleanup traces, logs in one command (Doc ID 2081655.1).

February 16, 2021

Monitor Linux Host Restart

Filed under: awk_sed_grep,shell scripting — mdinh @ 8:13 pm

The application is not RAC-aware and cannot handle ORA-3113, ORA-25402, or ORA-25409 properly.

Hence, there is a requirement to notify the application team to restart the application when the database server is restarted.

The initial implementation to monitor reboots used a cron job from oracle running every 5 minutes to detect server restarts.

While the implementation is effective, it’s not efficient. This was my first attempt.

The script detects whether the server was restarted within the last X seconds by checking /proc/uptime.

If uptime is less than X seconds, then a notification is sent that the server was restarted.

Here is a high-level example:

### Script accepts a parameter with a value in seconds
$ /home/oracle/scripts/last_reboot.sh
/home/oracle/scripts/last_reboot.sh: line 10: 1: ---> USAGE: /home/oracle/scripts/last_reboot.sh [in seconds]

### The heart of the script is to check /proc/uptime in seconds
$ egrep -o '^[0-9]+' /proc/uptime
2132607

### Scheduled crontab to run every 5 minutes to determine if server uptime is less than 540 seconds and send notification.
$ crontab -l|grep reboot
##### monitor node reboot #####
*/5 * * * * /home/oracle/scripts/last_reboot.sh 540 > /tmp/last_reboot.cron 2>&1
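
last_reboot.sh itself is not shown; here is a hypothetical reconstruction of its core logic:

#!/bin/bash
# Hypothetical last_reboot.sh: notify if the server restarted less than $1 seconds ago
threshold=${1:?"---> USAGE: $0 [in seconds]"}
uptime_s=$(egrep -o '^[0-9]+' /proc/uptime)
if [ "$uptime_s" -lt "$threshold" ]; then
  echo "$(hostname) was restarted $uptime_s seconds ago"   # send notification here
fi
exit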

A more efficient implementation is to run a cron job automatically after the server restarts.

Here is a high-level example:

### When server is restarted, host_restart_alert.sh will be executed
[root@oracle-12201-vagrant ~]# crontab -l
@reboot su oracle -c '/home/oracle/host_restart_alert.sh' > /tmp/host_restart_alert.out 2>&1

### Here is host_restart_alert.sh
[oracle@oracle-12201-vagrant ~]$ cat host_restart_alert.sh
#!/bin/bash -x
# Script is being called from root's crontab
# uptime -p reports in minutes, so sleep for at least 60s after host restart
sleep 63
EMAILMESSAGE="$(hostname) was restarted `uptime -p| awk -F'up' '{print $2}'` ago at `uptime -s`"
echo $EMAILMESSAGE > /tmp/restart_$HOSTNAME.log
exit

### Comment from colleague:
### From a bash syntax perspective, it’s not wrong. It’s not great style (don’t use backticks)
printf -v EMAILMESSAGE '%s was restarted %s ago at %s' \
"$(hostname)" \
"$(uptime -p| awk -F'up' '{print $2}')" \
"$(uptime -s)"
echo $EMAILMESSAGE > /tmp/restart_$HOSTNAME.log

### Deconstructing uptime commands:
[oracle@oracle-12201-vagrant ~]$ uptime -p
up 17 hours, 28 minutes

[oracle@oracle-12201-vagrant ~]$ uptime -s
2021-02-15 18:00:51

### Deconstructing message sent:
[oracle@oracle-12201-vagrant ~]$ echo "$HOSTNAME was restarted `uptime -p| awk -F'up' '{print $2}'` ago at `uptime -s`"
oracle-12201-vagrant was restarted  17 hours, 28 minutes ago at 2021-02-15 18:00:51

### Demo:
[root@oracle-12201-vagrant ~]# date
 Tue Feb 16 14:51:18 -05 2021

[root@oracle-12201-vagrant ~]# uptime
  14:51:22 up 1 min,  1 user,  load average: 0.58, 0.23, 0.08

[root@oracle-12201-vagrant ~]# ls -l /tmp/*restart*
 -rw-r--r--. 1 root   root     271 Feb 16 14:51 /tmp/host_restart_alert.out
 -rw-r--r--. 1 oracle oinstall  71 Feb 16 14:51 /tmp/restart_oracle-12201-vagrant.log

[root@oracle-12201-vagrant ~]# cat /tmp/host_restart_alert.out
 sleep 63
 ++ hostname
 ++ uptime -p
 ++ awk -Fup '{print $2}'
 ++ uptime -s
 printf -v EMAILMESSAGE '%s was restarted %s ago at %s' oracle-12201-vagrant ' 1 minute' '2021-02-16 14:50:02'
 echo oracle-12201-vagrant was restarted 1 minute ago at 2021-02-16 14:50:02
 exit 

[root@oracle-12201-vagrant ~]# cat /tmp/restart_oracle-12201-vagrant.log
 oracle-12201-vagrant was restarted 1 minute ago at 2021-02-16 14:50:02
[root@oracle-12201-vagrant ~]#

Scripts were tested on Oracle Linux Server release 7.8 and 7.9.

February 5, 2021

Using sed To Search And Replace

Filed under: awk_sed_grep — mdinh @ 3:27 am

The goal is to replace me@gmail.com with dba@gmail.com for all shell scripts.

Fortunately, all shell scripts are located in one directory; otherwise, I would need to find all locations.

Check crontab to find possible directory location for shell scripts.

[vagrant@oracle-12201-vagrant ~]$ crontab -l
5 4 * * * /home/vagrant/scripts/test.sh something > /tmp/test.out 2>&1
[vagrant@oracle-12201-vagrant ~]$

[vagrant@oracle-12201-vagrant ~]$ crontab -l|grep -v '#'|grep sh|awk '{print $6}'|sort -u
/home/vagrant/scripts/test.sh
[vagrant@oracle-12201-vagrant ~]$

Check directory for shell scripts.

[vagrant@oracle-12201-vagrant scripts]$ ls -l
total 12
-rwxrwxr-x. 1 vagrant vagrant  25 Feb  4 21:15 dt.sh
-rwxrwxr-x. 1 vagrant vagrant  20 Feb  4 21:14 test.sh
[vagrant@oracle-12201-vagrant scripts]$

Check shell scripts containing emails to modify.

[vagrant@oracle-12201-vagrant scripts]$ grep 'me@gmail.com' *.sh|grep sh|awk -F':' '{print $1}'|sort -u|grep -v edit_email.sh
dt.sh
test.sh
[vagrant@oracle-12201-vagrant scripts]$

Create edit_email.sh to modify email.

[vagrant@oracle-12201-vagrant scripts]$ cat edit_email.sh
for infile in $(grep 'me@gmail.com' *.sh|grep sh|awk -F':' '{print $1}'|sort -u|grep -v `basename $0`)
do
  echo $infile
  sed 's/\bme@gmail.com\b/dba@gmail.com/g' $infile > tmp.$$
  mv tmp.$$ $infile
  chmod 755 $infile
  grep 'gmail.com' $infile
done
[vagrant@oracle-12201-vagrant scripts]$

Run edit_email.sh and verify results.

[vagrant@oracle-12201-vagrant scripts]$ ./edit_email.sh
dt.sh
echo dba@gmail.com
test.sh
export PAGER_EMAIL="dba@gmail.com"
[vagrant@oracle-12201-vagrant scripts]$

[vagrant@oracle-12201-vagrant scripts]$ grep 'me@gmail.com' *.sh|grep sh|awk -F':' '{print $1}'|sort -u|grep -v edit_email.sh

Here is an improvement to the code, thanks to Jared Still:

Filter basename before sort.

Use grep -il.

[vagrant@oracle-12201-vagrant scripts]$ cat e.sh
for infile in $(grep 'me@gmail.com' *.sh|grep sh|awk -F':' '{print $1}'|grep -v $(basename $0)|sort -u)
do
echo $infile
done
for infile in $(grep -il 'me@gmail.com' *.sh 2>/dev/null | grep -v $(basename $0) | sort -u )
do
echo $infile
done
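
On Linux, GNU sed's -i option can edit the files in place and skip the temp-file shuffle entirely. A sketch (relies on GNU sed and xargs -r, so it is not portable to every Unix):

#!/bin/bash
# In-place variant of edit_email.sh using GNU sed -i (Linux only)
grep -il 'me@gmail.com' *.sh | grep -v $(basename $0) | xargs -r sed -i 's/\bme@gmail.com\b/dba@gmail.com/g'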

August 6, 2019

19c Grid Dry-Run Upgrade

Filed under: 19c,awk_sed_grep,Grid Infrastructure,upgrade — mdinh @ 12:42 pm

First test using GUI.

[oracle@racnode-dc2-1 grid]$ /u01/app/19.3.0.0/grid/gridSetup.sh -dryRunForUpgrade
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_00-20-31AM/gridSetupActions2019-08-06_00-20-31AM.log
[oracle@racnode-dc2-1 grid]$

Create dryRunForUpgradegrid.rsp from grid_2019-08-06_00-20-31AM.rsp (from the GUI test above):

[oracle@racnode-dc2-1 grid]$ grep -v "^#" /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp | grep -v "=$" | awk 'NF' > /home/oracle/dryRunForUpgradegrid.rsp

[oracle@racnode-dc2-1 ~]$ cat /home/oracle/dryRunForUpgradegrid.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=vbox-rac-dc2
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=racnode-dc2-1:,racnode-dc2-2:
oracle.install.crs.configureGIMR=true
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=CRS
oracle.install.asm.diskGroup.AUSize=0
oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=false
[oracle@racnode-dc2-1 ~]$
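
Deconstructing the filter chain used to create the response file:

grep -v "^#"    ### drop comment lines
grep -v "=$"    ### drop parameters with no value assigned
awk 'NF'        ### drop blank lines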

Create the grid home directory on all nodes:

[root@racnode-dc2-1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54318(asmdba),54322(dba),54323(backupdba),54324(oper),54325(dgdba),54326(kmdba)

[root@racnode-dc2-1 ~]# mkdir -p /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chown oracle:oinstall /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chmod 775 /u01/app/19.3.0.0/grid

[root@racnode-dc2-1 ~]# ll /u01/app/19.3.0.0/
total 4
drwxrwxr-x 2 oracle oinstall 4096 Aug  6 02:07 grid
[root@racnode-dc2-1 ~]#

Extract grid software for node1 ONLY:

[oracle@racnode-dc2-1 ~]$ unzip -qo /media/swrepo/LINUX.X64_193000_grid_home.zip -d /u01/app/19.3.0.0/grid/

[oracle@racnode-dc2-1 ~]$ ls /u01/app/19.3.0.0/grid/
addnode     clone  dbjava     diagnostics  gpnp          install        jdbc  lib      OPatch   ords  perl     qos       rhp            rootupgrade.sh  sqlpatch  tomcat  welcome.html  xdk
assistants  crs    dbs        dmu          gridSetup.sh  instantclient  jdk   md       opmn     oss   plsql    racg      root.sh        runcluvfy.sh    sqlplus   ucp     wlm
bin         css    deinstall  env.ora      has           inventory      jlib  network  oracore  oui   precomp  rdbms     root.sh.old    sdk             srvm      usm     wwg
cha         cv     demo       evm          hs            javavm         ldap  nls      ord      owm   QOpatch  relnotes  root.sh.old.1  slax            suptools  utl     xag

[oracle@racnode-dc2-1 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.0G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-1 ~]$

Run gridSetup.sh -silent -dryRunForUpgrade:

[oracle@racnode-dc2-1 ~]$ env|grep -i ora
USER=oracle
MAIL=/var/spool/mail/oracle
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin
PWD=/home/oracle
HOME=/home/oracle
LOGNAME=oracle

[oracle@racnode-dc2-1 ~]$ date
Tue Aug  6 02:35:47 CEST 2019

[oracle@racnode-dc2-1 ~]$ /u01/app/19.3.0.0/grid/gridSetup.sh -silent -dryRunForUpgrade -responseFile /home/oracle/dryRunForUpgradegrid.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_02-35-52AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log


As a root user, execute the following script(s):
        1. /u01/app/19.3.0.0/grid/rootupgrade.sh

Execute /u01/app/19.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc2-1]

Run the script on the local node.

Successfully Setup Software with warning(s).
[oracle@racnode-dc2-1 ~]$

Run rootupgrade.sh for node1 ONLY and review the log:

[root@racnode-dc2-1 ~]# /u01/app/19.3.0.0/grid/rootupgrade.sh
Check /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log for the output of root script

[root@racnode-dc2-1 ~]# cat /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.3.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Performing Dry run of the Grid Infrastructure upgrade.
Using configuration parameter file: /u01/app/19.3.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/racnode-dc2-1/crsconfig/rootcrs_racnode-dc2-1_2019-08-06_02-45-31AM.log
2019/08/06 02:45:44 CLSRSC-464: Starting retrieval of the cluster configuration data
2019/08/06 02:45:52 CLSRSC-729: Checking whether CRS entities are ready for upgrade, cluster upgrade will not be attempted now. This operation may take a few minutes.
2019/08/06 02:47:56 CLSRSC-693: CRS entities validation completed successfully.
[root@racnode-dc2-1 ~]#

Check grid home for node2:

[oracle@racnode-dc2-2 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.6G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-2 ~]$

Check oraInventory for ALL nodes:

[oracle@racnode-dc2-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.7.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.2.0.1/grid" TYPE="O" IDX="1" CRS="true"/>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.2.0.1/db1" TYPE="O" IDX="2"/>
==========================================================================================
<HOME NAME="OraGI19Home1" LOC="/u01/app/19.3.0.0/grid" TYPE="O" IDX="3"/>
==========================================================================================
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc2-2 ~]$

Check crs activeversion: 12.2.0.1.0

[oracle@racnode-dc2-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc2-1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [927320293].
[oracle@racnode-dc2-1 ~]$

Check log location:

[oracle@racnode-dc2-1 ~]$ cd /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/

[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$ ls -alrt
total 17420
-rw-r-----  1 oracle oinstall     129 Aug  6 02:35 installerPatchActions_2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall       0 Aug  6 02:35 gridSetupActions2019-08-06_02-35-52AM.err
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:35 temp_ob
-rw-r-----  1 oracle oinstall       0 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.err
drwxrwx--- 17 oracle oinstall    4096 Aug  6 02:39 ..
-rw-r-----  1 oracle oinstall     157 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall       0 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.err.racnode-dc2-2
-rw-r-----  1 oracle oinstall     142 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.out.racnode-dc2-2
-rw-r-----  1 oracle oinstall 9341920 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall   13419 Aug  6 02:43 time2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall 8443087 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.log
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:56 .
[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$

After dryRunForUpgrade, detach the 19.3.0.0 grid home and remove the directory (19.3.0.0/grid) from all nodes.

export ORACLE_HOME=/u01/app/19.3.0.0/grid
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$ORACLE_HOME
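
Then remove the directory on each node; a sketch using this demo's node names:

for node in racnode-dc2-1 racnode-dc2-2
do
  ssh oracle@$node 'rm -rf /u01/app/19.3.0.0/grid'
done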

April 14, 2019

Update Override OPatch

Filed under: awk_sed_grep — mdinh @ 1:47 pm

A framework to source the GI/DB RAC environment, stored on a shared volume.

[oracle@racnode-dc1-2 patch]$ df -h |grep patch
media_patch              3.7T  442G  3.3T  12% /media/patch

[oracle@racnode-dc1-2 patch]$ ps -ef|grep pmon
oracle    3268  2216  0 15:37 pts/0    00:00:00 grep --color=auto pmon
oracle   11254     1  0 06:33 ?        00:00:02 ora_pmon_hawk2
oracle   19995     1  0 05:52 ?        00:00:02 asm_pmon_+ASM2

[oracle@racnode-dc1-2 patch]$ cat /etc/oratab
+ASM2:/u01/app/12.1.0.1/grid:N
hawk2:/u01/app/oracle/12.1.0.1/db1:N

[oracle@racnode-dc1-2 patch]$ cat gi.env
### Michael Dinh : Mar 26, 2019
### Source RAC GI environment
### Prerequisites for hostname: last char from hostname must be digit
### Allow: prodhost01, racnode-dc1-1
### DisAllow: prod01host
set +x
unset ORACLE_UNQNAME
ORAENV_ASK=NO
h=$(hostname -s)
### Extract last character from hostname to create ORACLE_SID
export ORACLE_SID=+ASM${h:${#h} - 1}
. oraenv <<< $ORACLE_SID
export GRID_HOME=$ORACLE_HOME
env|egrep 'ORA|GRID'
sysresv|tail -1

[oracle@racnode-dc1-2 patch]$ cat hawk.env
### Michael Dinh : Mar 26, 2019
### Source RAC DB environment
### Prerequisites for hostname: last char from hostname must be digit
### Allow: prodhost01, racnode-dc1-1
### DisAllow: prod01host
set +x
unset GRID_HOME
h=$(hostname -s)
### Extract filename without extension (.env)
ORAENV_ASK=NO
export ORACLE_UNQNAME=$(basename $BASH_SOURCE .env)
### Extract last character from hostname to create ORACLE_SID
export ORACLE_SID=$ORACLE_UNQNAME${h:${#h} - 1}
. oraenv <<< $ORACLE_SID
env|egrep 'ORA|GRID'
sysresv|tail -1
[oracle@racnode-dc1-2 patch]$
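
Deconstructing the ORACLE_SID derivation used in both env files (on racnode-dc1-2, for example):

[oracle@racnode-dc1-2 patch]$ h=$(hostname -s)
[oracle@racnode-dc1-2 patch]$ echo ${h:${#h} - 1}   ### last character of the short hostname
2
[oracle@racnode-dc1-2 patch]$ echo +ASM${h:${#h} - 1}
+ASM2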

update_opatch.sh

#!/bin/sh -x
update_opatch()
{
set -ex
cd $ORACLE_HOME
$ORACLE_HOME/OPatch/opatch version
unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip ; echo $?
$ORACLE_HOME/OPatch/opatch version
}
ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
. /media/patch/gi.env
update_opatch
. /media/patch/hawk.env
update_opatch
exit

Run update_opatch.sh

[oracle@racnode-dc1-1 patch]$ ./update_opatch.sh
+ ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
-rwxrwxrwx 1 vagrant vagrant 107M Feb  1 22:08 /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ . /media/patch/gi.env
++ set +x
The Oracle base has been changed from hawk1 to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"
+ cd /u01/app/12.1.0.1/grid
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ cd /u01/app/oracle/12.1.0.1/db1
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 patch]$


[oracle@racnode-dc1-2 patch]$ ./update_opatch.sh
+ ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
-rwxrwxrwx 1 vagrant vagrant 107M Feb  1 22:08 /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"
+ cd /u01/app/12.1.0.1/grid
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ cd /u01/app/oracle/12.1.0.1/db1
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ exit
[oracle@racnode-dc1-2 patch]$