Thinking Out Loud

July 22, 2018

Cluster Resource To Check When Patching RAC DBFS OGG

Filed under: GoldenGate,Grid Infrastructure,RAC — mdinh @ 2:41 pm

crsctl stat res|grep -i type|sort -u

TYPE=app.appvipx.type
TYPE=local_resource
TYPE=ora.asm.type
TYPE=ora.cluster_vip_net1.type
TYPE=ora.cvu.type
TYPE=ora.database.type
TYPE=ora.diskgroup.type
TYPE=ora.listener.type
TYPE=ora.mgmtdb.type
TYPE=ora.mgmtlsnr.type
TYPE=ora.network.type
TYPE=ora.oc4j.type
TYPE=ora.ons.type
TYPE=ora.scan_listener.type
TYPE=ora.scan_vip.type
TYPE=xag.goldengate.type


crsctl stat res -p -w 'TYPE = ora.database.type' | egrep '^NAME|AUTO_START'

crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE))'

crsctl stat res -t -w 'TYPE = xag.goldengate.type' -- OGG Resource
crsctl stat res -t -w 'TYPE = app.appvipx.type'    -- OGG VIP
crsctl stat res -t -w 'TYPE = local_resource'      -- DBFS Mount
crsctl stat res -t -w 'TYPE = ora.database.type'   -- DB resource (including DBFS)

You might ask, why not use crsctl stat res -t?

For this specific environment, there are 190 lines of output, and I needed to focus on what’s important.
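
A minimal sketch of rolling these checks into one pre-patch script (resource types and filters taken from the commands above; adjust for your own environment):

#!/bin/sh
# Anything not fully online
crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE))'
# OGG resource, OGG VIP, DBFS mount, and database resources (including DBFS)
for t in xag.goldengate.type app.appvipx.type local_resource ora.database.type
do
  echo "===== TYPE = $t"
  crsctl stat res -t -w "TYPE = $t"
done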


July 20, 2018

Patching GoldenGate with DBFS

Filed under: GoldenGate,Grid Infrastructure,RAC — mdinh @ 11:41 pm

There seems to be no consistency as to which directories should be on DBFS when GoldenGate is implemented with RAC.

Here I will share my thoughts based on issues encountered.

oracle@test1:/opt/oracle/12.2.0/ggs01$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.170221 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_170123.1033_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Jan 23 2017 21:54:15
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.



GGSCI (test1) 1> create subdirs

Creating subdirectories under current directory /oracle/12.2.0/ggs01

Parameter files                /oracle/12.2.0/ggs01/dirprm: created
Report files                   /oracle/12.2.0/ggs01/dirrpt: created
Checkpoint files               /oracle/12.2.0/ggs01/dirchk: created
Process status files           /oracle/12.2.0/ggs01/dirpcs: created
SQL script files               /oracle/12.2.0/ggs01/dirsql: created
Database definitions files     /oracle/12.2.0/ggs01/dirdef: created
Extract data files             /oracle/12.2.0/ggs01/dirdat: created
Temporary files                /oracle/12.2.0/ggs01/dirtmp: created
Credential store files         /oracle/12.2.0/ggs01/dircrd: created
Masterkey wallet files         /oracle/12.2.0/ggs01/dirwlt: created
Dump files                     /oracle/12.2.0/ggs01/dirdmp: created


GGSCI (test1) 2> 


$ ls -ld dir*
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirchk -> /dbfs_client/ggs01/dirchk
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dircrd -> /dbfs_client/ggs01/dircrd
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirdat -> /dbfs_client/ggs01/dirdat
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirdef -> /dbfs_client/ggs01/dirdef
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirdmp -> /dbfs_client/ggs01/dirdmp
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirout -> /dbfs_client/ggs01/dirout
drwxr-x--- 2 ggsuser oinstall 4096 Mar 20  2017 dirpcs
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirprm -> /dbfs_client/ggs01/dirprm
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirrpt -> /dbfs_client/ggs01/dirrpt
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirsql -> /dbfs_client/ggs01/dirsql

GoldenGate maintains data that it swaps to disk in dirtmp.
With all the issues DBFS can have, it might be better kept on local storage.
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirtmp -> /dbfs_client/ggs01/dirtmp

lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirwlt -> /dbfs_client/ggs01/dirwlt
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirwww -> /dbfs_client/ggs01/dirwww
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 BR -> /dbfs_client/ggs01/BR

Here are the errors encountered when applying a GoldenGate patchset.

The errors were due to the stack being down after running opatchauto apply -norestart, which leaves DBFS offline for the instance.

The errors can be avoided if the directories are local, as they should be.

The following actions have failed:
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirout
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
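
To avoid this, a rough sketch of moving dirtmp, dirout, and dirwww off DBFS and back to local directories (paths from this environment; stop the GoldenGate processes first and copy over anything still needed from the DBFS copies):

cd /oracle/12.2.0/ggs01
for d in dirtmp dirout dirwww
do
  rm $d          # removes only the symlink pointing to /dbfs_client/ggs01/$d
  mkdir -p $d    # recreate as a local directory owned by the GoldenGate OS user
done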

Use an Oracle RAC database as a baseline: are alert logs, trace files, etc… kept on a shared volume when the Oracle software is installed locally?

July 19, 2018

Playing With Service Relocation 12c

Filed under: 12c,RAC — mdinh @ 2:14 pm
With 12c, use the verbose option (-v) to display the services that are running.
[oracle@racnode-dc1-1 rac_relocate]$ srvctl -V
srvctl version: 12.1.0.2.0

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status instance -d hawk -i hawk1 -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status instance -d hawk -i hawk2 -v
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

There is an option to provide a comma-delimited list of services to check their status.
Unfortunately, the same option is not available for relocation, which I fail to understand.
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status service -d hawk -s "p11,p12,p13,p14"
Service p11 is running on instance(s) hawk1
Service p12 is running on instance(s) hawk1
Service p13 is running on instance(s) hawk1
Service p14 is running on instance(s) hawk1

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status service -d hawk -s "p21,p22,p23,p24,p25"
Service p21 is running on instance(s) hawk2
Service p22 is running on instance(s) hawk2
Service p23 is running on instance(s) hawk2
Service p24 is running on instance(s) hawk2
Service p25 is running on instance(s) hawk2

I am puzzled that service status accepts a delimited list whereas relocation does not.
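
A quick workaround sketch, splitting a comma-delimited list by hand (the scripts later in this post formalize the same idea):

svc="p11,p12,p13,p14"
IFS=","
for s in ${svc}
do
  srvctl relocate service -d hawk -service ${s} -oldinst hawk1 -newinst hawk2
done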

I have blogged about new features for service failover: 12.1 Improved Service Failover

Another test shows that it works as it should.

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 ~]$ srvctl stop instance -d hawk -instance hawk1 -failover

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$


[root@racnode-dc1-1 ~]# crsctl stop crs
[root@racnode-dc1-1 ~]# crsctl start crs


[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$

[oracle@racnode-dc1-1 ~]$ srvctl start database -d hawk

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$

However, the requirement is to relocate services rather than fail them over.

Here are scripts and a demo for that.

The scripts will only work for a 2-node RAC where each service runs on one instance only.

[oracle@racnode-dc1-1 ~]$ srvctl config service -d hawk |egrep 'Service name|instances'
Service name: p11
Preferred instances: hawk1
Available instances: hawk2
Service name: p12
Preferred instances: hawk1
Available instances: hawk2
Service name: p13
Preferred instances: hawk1
Available instances: hawk2
Service name: p14
Preferred instances: hawk1
Available instances: hawk2
Service name: p21
Preferred instances: hawk2
Available instances: hawk1
Service name: p22
Preferred instances: hawk2
Available instances: hawk1
Service name: p23
Preferred instances: hawk2
Available instances: hawk1
Service name: p24
Preferred instances: hawk2
Available instances: hawk1
Service name: p25
Preferred instances: hawk2
Available instances: hawk1
[oracle@racnode-dc1-1 ~]$

DEMO:

[oracle@racnode-dc1-1 rac_relocate]$ ls *relocate*.sh
relocate_service.sh  validate_relocate_service.sh

[oracle@racnode-dc1-1 rac_relocate]$ ls *restore*.sh
restore_service_instance1.sh  restore_service_instance2.sh
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ SAVE SERVICES LOCATION AND PREVENT ACCIDENTAL OVERWRITE
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v > /tmp/service.org

[oracle@racnode-dc1-1 rac_relocate]$ chmod 400 /tmp/service.org; ll /tmp/service.org; cat /tmp/service.org
-r-------- 1 oracle oinstall 222 Jul 18 14:54 /tmp/service.org
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v > /tmp/service.org
-bash: /tmp/service.org: Permission denied
[oracle@racnode-dc1-1 rac_relocate]$

	
========================================================================
+++++++ RELOCATE SERVICES FROM INSTANCE 1 TO 2

Validate is similar to RMAN validate.
No relocation is performed; only the syntax is printed for verification.
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ ./validate_relocate_service.sh
./validate_relocate_service.sh: line 4: 1: ---> USAGE: ./validate_relocate_service.sh -db_unique_name -oldinst# -newinst#

[oracle@racnode-dc1-1 rac_relocate]$ ./validate_relocate_service.sh hawk 1 2
+ OUTF=/tmp/service_1.conf
+ srvctl status instance -d hawk -instance hawk1 -v
+ ls -l /tmp/service_1.conf
-rw-r--r-- 1 oracle oinstall 109 Jul 18 14:59 /tmp/service_1.conf
+ cat /tmp/service_1.conf
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
+ set +x

**************************************
***** SERVICES THAT WILL BE RELOCATED:
**************************************
srvctl relocate service -d hawk -service p11 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p12 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p13 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p14 -oldinst hawk1 -newinst hawk2


[oracle@racnode-dc1-1 rac_relocate]$ ./relocate_service.sh hawk 1 2
-rw-r--r-- 1 oracle oinstall 109 Jul 18 15:00 /tmp/service_1.conf
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
+ srvctl relocate service -d hawk -service p11 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p12 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p13 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p14 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl status instance -d hawk -instance hawk1 -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
+ srvctl status instance -d hawk -instance hawk2 -v
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ set +x
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ RELOCATE SERVICES FROM INSTANCE 2 TO 1
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ ./relocate_service.sh hawk 2 1
-rw-r--r-- 1 oracle oinstall 129 Jul 18 15:02 /tmp/service_2.conf
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ srvctl relocate service -d hawk -service p11 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p12 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p13 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p14 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p21 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p22 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p23 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p24 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p25 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl status instance -d hawk -instance hawk2 -v
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.
+ srvctl status instance -d hawk -instance hawk1 -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ set +x
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ RESTORE SERVICES FOR INSTANCE
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ ./restore_service_instance2.sh
./restore_service_instance2.sh: line 4: 1: ---> USAGE: ./restore_service_instance2.sh -db_unique_name

[oracle@racnode-dc1-1 rac_relocate]$ ./restore_service_instance2.sh hawk
+ srvctl relocate service -d hawk -service p21 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p22 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p23 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p24 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p25 -oldinst hawk1 -newinst hawk2
+ set +x

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 rac_relocate]$

CODE:


========================================================================
+++++++ validate_relocate_service.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OLD=${2:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
NEW=${3:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
set -x
# Save the source instance's current service list, e.g. /tmp/service_1.conf
OUTF=/tmp/service_${OLD}.conf
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v > $OUTF
ls -l $OUTF;cat $OUTF
set +x
# Field 11 of the srvctl output is the comma-delimited service list; the second awk strips the trailing period.
export svc=`tail -1 $OUTF | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
echo
echo "**************************************"
echo "***** SERVICES THAT WILL BE RELOCATED:"
echo "**************************************"
# Print (do not execute) the relocate command for each service.
for s in ${svc}
do
echo "srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}${OLD} -newinst ${DB}${NEW}"
done
exit

========================================================================
+++++++ relocate_service.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OLD=${2:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
NEW=${3:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OUTF=/tmp/service_${OLD}.conf
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v > $OUTF
ls -l $OUTF;cat $OUTF
# Extract the comma-delimited service list (field 11) and strip the trailing period.
export svc=`tail -1 $OUTF | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
# Relocate each service from the old instance to the new instance.
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}${OLD} -newinst ${DB}${NEW}
set +x
done
set -x
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v
srvctl status instance -d ${DB} -instance ${DB}${NEW} -v
set +x
exit

========================================================================
+++++++ restore_service_instance1.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name"}
# Instance 1's original services are on the first line of the saved /tmp/service.org; move them back from instance 2.
export svc=`head -1 /tmp/service.org | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}2 -newinst ${DB}1
set +x
done
exit

========================================================================
+++++++ restore_service_instance2.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name"}
# Instance 2's original services are on the last line of the saved /tmp/service.org; move them back from instance 1.
export svc=`tail -1 /tmp/service.org | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
set -x
srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}1 -newinst ${DB}2
set +x
done
exit

November 23, 2017

CRS-2674: Start of dbfs_mount failed

Filed under: 12c,GoldenGate,oracle,RAC — mdinh @ 1:04 am

$ crsctl start resource dbfs_mount
CRS-2672: Attempting to start 'dbfs_mount' on 'node2'
CRS-2672: Attempting to start 'dbfs_mount' on 'node1'
CRS-2674: Start of 'dbfs_mount' on 'node1' failed
CRS-2679: Attempting to clean 'dbfs_mount' on 'node1'
CRS-2674: Start of 'dbfs_mount' on 'node2' failed
CRS-2679: Attempting to clean 'dbfs_mount' on 'node2'
CRS-2681: Clean of 'dbfs_mount' on 'node1' succeeded
CRS-2681: Clean of 'dbfs_mount' on 'node2' succeeded
CRS-4000: Command Start failed, or completed with errors.

Check to make sure DBFS_USER password is not expired.
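
A minimal check sketch (assuming the DBFS repository schema is named DBFS_USER; substitute the actual schema owner):

sqlplus -s / as sysdba <<EOF
-- EXPIRED or EXPIRED(GRACE) here means dbfs_client can no longer authenticate
select username, account_status, expiry_date
from   dba_users
where  username = 'DBFS_USER';
EOF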

November 5, 2017

Relocate Services Back To Instance Before Patching

Filed under: 12c,RAC — mdinh @ 1:16 pm

This will only work for a 2-node RAC!

Prerequisite:
Patching starts at instance1, services fail over to instance2.
Patching completed at instance1, restart instance1.
Patching starts at instance2, services fail over to instance1.
Patching completed at instance2, restart instance2.
All services are now running at instance1.
Relocate instance2's services back to where they belong.

Save existing service configuration before patching.
[oracle@racnode-dc1-2 rac_relocate]$ ./save_service.sh

 

+ srvctl status database -d orclcdb -v
+ srvctl status database -d orclcdb -v
+ awk '-F ' '{print $2}'
+ cat /tmp/services.conf
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
+ cat /tmp/instance.conf
orclcdb1
orclcdb2
++ tail -1 /tmp/services.conf
++ awk '-F ' '{print $11}'
++ awk '{$0=substr($0,1,length($0)-1); print $0}'
+ svc=testsvc26,testsvc27,testsvc28,testsvc29
+ exit
[oracle@racnode-dc1-2 rac_relocate]$

 

Patching is completed at instance1 and now starting at instance2.
All services are running on instance1 after instance2 is stopped with failover.

 

[oracle@racnode-dc1-2 rac_relocate]$ srvctl stop instance -db orclcdb -instance orclcdb2 -failover
[oracle@racnode-dc1-2 rac_relocate]$ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is not running on node racnode-dc1-2
[oracle@racnode-dc1-2 rac_relocate]$

 

Patching is completed at instance2; start instance2. All services are still running from instance1.

[oracle@racnode-dc1-2 rac_relocate]$ srvctl start instance -db orclcdb -instance orclcdb2
[oracle@racnode-dc1-2 rac_relocate]$ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2. Instance status: Open.
[oracle@racnode-dc1-2 rac_relocate]$

Verify the service relocation will work as intended by testing first – the commands are printed but not executed.

[oracle@racnode-dc1-2 rac_relocate]$ ./test_relocate.sh
================================================================================
++++++ Saved Configuration
-rw-r--r-- 1 oracle oinstall  18 Nov  5 13:01 /tmp/instance.conf
-rw-r--r-- 1 oracle oinstall 291 Nov  5 13:01 /tmp/services.conf
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
orclcdb1
orclcdb2
================================================================================
++++++ Relocate Configuration
newinst=orclcdb2
oldinst=orclcdb1
svc=testsvc26,testsvc27,testsvc28,testsvc29
================================================================================
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2. Instance status: Open.
srvctl relocate service -db orclcdb -service testsvc26 -oldinst orclcdb1 -newinst orclcdb2
srvctl relocate service -db orclcdb -service testsvc27 -oldinst orclcdb1 -newinst orclcdb2
srvctl relocate service -db orclcdb -service testsvc28 -oldinst orclcdb1 -newinst orclcdb2
srvctl relocate service -db orclcdb -service testsvc29 -oldinst orclcdb1 -newinst orclcdb2
[oracle@racnode-dc1-2 rac_relocate]$

Relocate services to the original saved configuration.

[oracle@racnode-dc1-2 rac_relocate]$ ./relocate_service.sh
================================================================================
++++++ Saved Configuration
-rw-r--r-- 1 oracle oinstall  18 Nov  5 13:01 /tmp/instance.conf
-rw-r--r-- 1 oracle oinstall 291 Nov  5 13:01 /tmp/services.conf
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
orclcdb1
orclcdb2
================================================================================
++++++ Relocate Configuration
newinst=orclcdb2
oldinst=orclcdb1
svc=testsvc26,testsvc27,testsvc28,testsvc29
================================================================================
+ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2. Instance status: Open.
+ IFS=,
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc26 -oldinst orclcdb1 -newinst orclcdb2
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc27 -oldinst orclcdb1 -newinst orclcdb2
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc28 -oldinst orclcdb1 -newinst orclcdb2
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc29 -oldinst orclcdb1 -newinst orclcdb2
+ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
+ exit
[oracle@racnode-dc1-2 rac_relocate]$

I have ranted about hardcoding before.
YES! I hardcoded the conf file location to provide a permanent and consistent location for all environments.

I don’t like having to dig through code to find such information.
ex:
SCRIPT_DIR=/u01/app/oracle/scripts
LOG_DIR=$SCRIPT_DIR/log

save_service.sh


#!/bin/sh -x
# Requires db to be set to the db_unique_name before running, e.g. export db=orclcdb
srvctl status database -d ${db} -v > /tmp/services.conf
# One instance name per line (field 2 of the status output)
srvctl status database -d ${db} -v|awk -F" " '{print $2}' > /tmp/instance.conf
cat /tmp/services.conf
cat /tmp/instance.conf
svc=`tail -1 /tmp/services.conf | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
exit

 

test_relocate.sh


#!/bin/sh
# Requires db to be set to the db_unique_name before running, e.g. export db=orclcdb
echo "================================================================================"
echo "++++++ Saved Configuration"
ls -l /tmp/*.conf
cat /tmp/services.conf
cat /tmp/instance.conf
echo "================================================================================"
echo "++++++ Relocate Configuration"
export svc=`tail -1 /tmp/services.conf | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
export oldinst=`head -1 /tmp/instance.conf`
export newinst=`tail -1 /tmp/instance.conf`
env|egrep 'svc|inst'|sort
echo "================================================================================"
srvctl status database -d ${db} -v
IFS=","
for s in ${svc}
do
echo "srvctl relocate service -db ${db} -service ${s} -oldinst ${oldinst} -newinst ${newinst}"
done
exit

 

relocate_service.sh


#!/bin/sh
# Requires db to be set to the db_unique_name before running, e.g. export db=orclcdb
echo "================================================================================"
echo "++++++ Saved Configuration"
ls -l /tmp/*.conf
cat /tmp/services.conf
cat /tmp/instance.conf
echo "================================================================================"
echo "++++++ Relocate Configuration"
export svc=`tail -1 /tmp/services.conf | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
export oldinst=`head -1 /tmp/instance.conf`
export newinst=`tail -1 /tmp/instance.conf`
env|egrep 'svc|inst'|sort
echo "================================================================================"
set -x
srvctl status database -d ${db} -v
IFS=","
for s in ${svc}
do
srvctl relocate service -db ${db} -service ${s} -oldinst ${oldinst} -newinst ${newinst}
done
srvctl status database -d ${db} -v
exit

 

12.1 Improved Service Failover

Filed under: 12c,RAC — mdinh @ 12:51 pm

11gR2 Database Services and Instance Shutdown

The thought of having to manually relocate dozens of services was not very appealing.

As it turns out, there is no need to manually relocate services.

srvctl stop instance -db orclcdb -instance orclcdb1 -failover will do the trick.

Comparing the two commands, the 12c syntax is a lot clearer and cleaner.

12c:
srvctl add service -db orclcdb -service DBA_TEST -preferred orclcdb1 -available orclcdb2 -failovertype SELECT -tafpolicy BASIC

11g:
srvctl add service -d orclcdb -s DBA_TEST -P BASIC -e SELECT -r orclcdb1 -a orclcdb2

DEMO:

$ srvctl config service -d orclcdb -s DBA_TEST|egrep -i 'Service name|Preferred instances|Available instances|failover'

Service name: DBA_TEST
Failover type: SELECT
Failover method:
TAF failover retries:
TAF failover delay:
Preferred instances: orclcdb1
Available instances: orclcdb2

$ srvctl status database -d orclcdb

Instance orclcdb1 is running on node racnode-dc1-1
Instance orclcdb2 is running on node racnode-dc1-2

$ sqlplus mdinh/mdinh@dbatest @t.sql

SQL*Plus: Release 12.1.0.2.0 Production on Sun Nov 5 04:17:56 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sun Nov 05 2017 04:15:29 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options


   INST_ID STARTUP_TIME
---------- -----------------------------
         1 05-NOV-2017 04:12:55
         2 05-NOV-2017 04:14:49


   INST_ID FAILOVER_TYPE FAILOVER_M FAI
---------- ------------- ---------- ---
         1 NONE          NONE       NO
         1 SELECT        BASIC      NO
         2 NONE          NONE       NO

04:17:57 MDINH @ dbatest:>host
[oracle@racnode-dc1-1 ~]$ srvctl stop instance -db orclcdb -instance orclcdb1 -failover;date
Sun Nov 5 04:18:34 CET 2017
[oracle@racnode-dc1-1 ~]$ exit
exit

04:18:37 MDINH @ dbatest:>@t.sql

   INST_ID STARTUP_TIME
---------- -----------------------------
         2 05-NOV-2017 04:14:49


   INST_ID FAILOVER_TYPE FAILOVER_M FAI
---------- ------------- ---------- ---
         2 SELECT        BASIC      YES

04:18:40 MDINH @ dbatest:>
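
t.sql itself is not shown; based on the column headings above, it presumably runs something along these lines (the predicates and formatting are my guess, not the actual script):

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

select inst_id, startup_time from gv$instance order by inst_id;

select inst_id, failover_type, failover_method, failed_over
from   gv$session
where  username = 'MDINH'
order  by inst_id;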

April 1, 2017

Decipher EM Error Message for RAC

Filed under: RAC — mdinh @ 3:24 am

I am not sure if there is a way to have EM display the actual commands it uses to check for and alert on errors.

It would be nice to be able to verify using the same syntax.

Examples of errors I was paged for:

Message=ora.net2.network has 1 instances in OFFLINE State
Key Value=resource_ora.network.type_ora.net2.network
Message=ora.host01_2.vip has 1 instances in OFFLINE State
Key Value=resource_ora.cluster_vip_net2.type_ora.host01_2.vip

Of course, crsctl stat res -t can be used, but the result is 170 lines of output.

I finally figured out how to simplify the output.

Find resource type:

crsctl stat res|grep -i type|sort -u

TYPE=app.appvipx.type
TYPE=local_resource
TYPE=ora.asm.type
TYPE=ora.cluster_vip_net1.type
TYPE=ora.cluster_vip_net2.type
TYPE=ora.cvu.type
TYPE=ora.database.type
TYPE=ora.diskgroup.type
TYPE=ora.listener.type
TYPE=ora.mgmtdb.type
TYPE=ora.mgmtlsnr.type
TYPE=ora.network.type
TYPE=ora.oc4j.type
TYPE=ora.ons.type
TYPE=ora.scan_listener.type
TYPE=ora.scan_vip.type
TYPE=ora.service.type
TYPE=xag.goldengate.type

Check state for resource type:

crsctl stat res -w "TYPE = ora.network.type"

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE               , ONLINE
STATE=ONLINE on host01, ONLINE on host02

NAME=ora.net2.network
TYPE=ora.network.type
TARGET=ONLINE               , ONLINE
STATE=ONLINE on host01, ONLINE on host02

crsctl stat res -w "TYPE = ora.cluster_vip_net1.type"

NAME=ora.host01.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on host01

NAME=ora.host02.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on host02

crsctl stat res -w "TYPE = ora.cluster_vip_net2.type"

NAME=ora.host01_2.vip
TYPE=ora.cluster_vip_net2.type
TARGET=ONLINE
STATE=ONLINE on host01

NAME=ora.host02_2.vip
TYPE=ora.cluster_vip_net2.type
TARGET=ONLINE
STATE=ONLINE on host02
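
To go straight to what EM is alerting on, the type filter can be combined with a state filter (a sketch, assuming the AND conjunction behaves like the or used in the filters above):

crsctl stat res -t -w '((TYPE = ora.network.type) AND (STATE != ONLINE))'
crsctl stat res -t -w '((TYPE = ora.cluster_vip_net2.type) AND (STATE != ONLINE))'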

March 26, 2017

racattack-ansible-oracle Up and Running

Filed under: RAC,Vagrant,VirtualBox — mdinh @ 2:04 pm

From a time long ago – https://mdinh.wordpress.com/2016/12/04/toys-for-when-you-i-are-bored/

With help from oravirt, I was able to install RAC VMs.

At this point, only the VM servers have been created and GI/DB are not installed; that’s coming up at some point.

Some clarification for setup=standard vagrant provision

setup=standard (shell environment variable)

vagrant provision (executable)

This is where the confusion was at first.

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ setup=standard vagrant provision

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51

======================================================================

E:\racattack-ansible-oracle>setup=standard vagrant provision
'setup' is not recognized as an internal or external command,
operable program or batch file.

E:\racattack-ansible-oracle>

Follow https://github.com/racattack/racattack-ansible-oracle

There were some errors, but everything seems to be working fine.

Note: I used Git Bash this time around vs. Windows CMD.

One improvement I would make, if I ever get good enough on the subject, is to have the shared folders for the linuxamd64_12102*.zip binaries use existing locations.

The way most Vagrant environments are configured, multiple copies of the same binaries are needed.

Alternatively, edit the VM shared folder manually.
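
For example, a rough sketch with VBoxManage (VM name taken from the vboxmanage list runningvms output below; the host path is only an example, and the VM should be powered off for a permanent change):

VBoxManage sharedfolder remove "collabn1.1703260604" --name 12cR1
VBoxManage sharedfolder add "collabn1.1703260604" --name 12cR1 --hostpath "E:/oracle_stage/12cR1" --automount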

falcon@falconidae MINGW64 /e
$ git clone --recursive https://github.com/racattack/racattack-ansible-oracle
Cloning into 'racattack-ansible-oracle'...
remote: Counting objects: 320, done.
Receiving objects:  79%remote: Total 320 (delta 0), reused 0 (delta 0), pack-reused 320
Receiving objects: 100% (320/320), 52.22 KiB | 0 bytes/s, done.
Resolving deltas: 100% (210/210), done.
Submodule 'stagefiles/ansible-oracle' (https://github.com/oravirt/ansible-oracle) registered for path 'stagefiles/ansible-oracle'
Cloning into 'E:/racattack-ansible-oracle/stagefiles/ansible-oracle'...
remote: Counting objects: 2061, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 2061 (delta 1), reused 0 (delta 0), pack-reused 2052
Receiving objects: 100% (2061/2061), 517.76 KiB | 0 bytes/s, done.
Resolving deltas: 100% (954/954), done.
Submodule path 'stagefiles/ansible-oracle': checked out '00651e0caf9a876fcefe51d21e44a6e78c313e76'

======================================================================

falcon@falconidae MINGW64 /e
$ cd racattack-ansible-oracle

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ ls -l
total 20
drwxr-xr-x 1 falcon 197121    0 Mar 26 05:45 12cR1/
-rw-r--r-- 1 falcon 197121 3863 Mar 26 05:45 README.md
drwxr-xr-x 1 falcon 197121    0 Mar 26 05:45 stagefiles/
-rw-r--r-- 1 falcon 197121 9706 Mar 26 05:45 Vagrantfile

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vi Vagrantfile

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ ls -l 12cR1/*.zip
-rw-r--r-- 1 falcon 197121 1673544724 Mar 25 13:20 12cR1/linuxamd64_12102_database_1of2.zip
-rw-r--r-- 1 falcon 197121 1014530602 Mar 25 13:32 12cR1/linuxamd64_12102_database_2of2.zip
-rw-r--r-- 1 falcon 197121 1747043545 Mar 25 13:44 12cR1/linuxamd64_12102_grid_1of2.zip
-rw-r--r-- 1 falcon 197121  646972897 Mar 25 13:42 12cR1/linuxamd64_12102_grid_2of2.zip

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant status

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave
on first boot shared disks will be created, this will take some time

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Current machine states:

collabn2                  not created (virtualbox)
collabn1                  not created (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant up

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave
on first boot shared disks will be created, this will take some time

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Bringing machine 'collabn2' up with 'virtualbox' provider...
Bringing machine 'collabn1' up with 'virtualbox' provider...
==> collabn2: Box 'kikitux/oracle6-racattack' could not be found. Attempting to find and install...
    collabn2: Box Provider: virtualbox
    collabn2: Box Version: >= 0
==> collabn2: Loading metadata for box 'kikitux/oracle6-racattack'
    collabn2: URL: https://atlas.hashicorp.com/kikitux/oracle6-racattack
==> collabn2: Adding box 'kikitux/oracle6-racattack' (v16.01.01) for provider: virtualbox
    collabn2: Downloading: https://atlas.hashicorp.com/kikitux/boxes/oracle6-racattack/versions/16.01.01/providers/virtualbox.box
    collabn2:
==> collabn2: Successfully added box 'kikitux/oracle6-racattack' (v16.01.01) for 'virtualbox'!
==> collabn2: Importing base box 'kikitux/oracle6-racattack'...
==> collabn2: Matching MAC address for NAT networking...
==> collabn2: Checking if box 'kikitux/oracle6-racattack' is up to date...
==> collabn2: Setting the name of the VM: collabn2.1703260556
==> collabn2: Fixed port collision for 22 => 2222. Now on port 2200.
==> collabn2: Clearing any previously set network interfaces...
==> collabn2: Preparing network interfaces based on configuration...
    collabn2: Adapter 1: nat
    collabn2: Adapter 2: hostonly
    collabn2: Adapter 3: hostonly
==> collabn2: Forwarding ports...
    collabn2: 22 (guest) => 2200 (host) (adapter 1)
==> collabn2: Running 'pre-boot' VM customizations...
==> collabn2: Booting VM...
==> collabn2: Waiting for machine to boot. This may take a few minutes...
    collabn2: SSH address: 127.0.0.1:2200
    collabn2: SSH username: vagrant
    collabn2: SSH auth method: private key
    collabn2: Warning: Remote connection disconnect. Retrying...
==> collabn2: Machine booted and ready!
[collabn2] GuestAdditions versions on your host (5.1.18) and guest (5.0.0) do not match.
Loaded plugins: security

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Setting up Install Process
http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64 error was
14: PYCURL ERROR 6 - "Couldn't resolve host 'mirrors.fedoraproject.org'"
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Package kernel-uek-devel-2.6.39-400.250.9.el6uek.x86_64 already installed and latest version
Package gcc-4.4.7-16.el6.x86_64 already installed and latest version
Package 1:make-3.81-20.el6.x86_64 already installed and latest version
Package 4:perl-5.10.1-141.el6.x86_64 already installed and latest version
Package bzip2-1.0.5-7.el6_0.x86_64 already installed and latest version
Nothing to do

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Copy iso file D:\Program Files\Oracle\VirtualBox\VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 5.1.18 - guest version is 5.0.0
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.1.18 Guest Additions for Linux...........
VirtualBox Guest Additions installer
Removing installed version 5.0.0 of VirtualBox Guest Additions...
Stopping VirtualBox Additions [FAILED]
(Cannot unload module vboxguest)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Removing existing VirtualBox non-DKMS kernel modules[  OK  ]
[  OK  ] VirtualBox Guest Addition service [  OK  ]
Copying additional installer modules ...
Installing additional modules ...
vboxadd.sh: Building Guest Additions kernel modules.
vboxadd.sh: You should restart your guest to make sure the new modules are actually used.
vboxadd.sh: Starting the VirtualBox Guest Additions.

Could not find the X.Org or XFree86 Window System, skipping.


Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
==> collabn2: Checking for guest additions in VM...
    collabn2: The guest additions on this VM do not match the installed version of
    collabn2: VirtualBox! In most cases this is fine, but in rare cases it can
    collabn2: prevent things such as shared folders from working properly. If you see
    collabn2: shared folder errors, please make sure the guest additions within the
    collabn2: virtual machine match the version of VirtualBox you have installed on
    collabn2: your host and reload your VM.
    collabn2:
    collabn2: Guest Additions Version: 5.0.0
    collabn2: VirtualBox Version: 5.1
==> collabn2: Setting hostname...
==> collabn2: Configuring and enabling network interfaces...
==> collabn2: Mounting shared folders...
    collabn2: /vagrant => E:/racattack-ansible-oracle
    collabn2: /media/sf_12cR1 => E:/racattack-ansible-oracle/12cR1
==> collabn2: Detected mount owner ID within mount options. (uid: 54320 guestpath: /media/sf_12cR1)
==> collabn2: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/sf_12cR1)
    collabn2: /media/stagefiles => E:/racattack-ansible-oracle/stagefiles
==> collabn2: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/stagefiles)
==> collabn2: Running provisioner: shell...
    collabn2: Running: inline script
==> collabn2: overwriting /etc/resolv.conf
==> collabn2: Running provisioner: shell...
    collabn2: Running: inline script
==> collabn2: Stopping named:
==> collabn2: [  OK  ]
==> collabn2: wrote key file "/etc/rndc.key"
==> collabn2: Stopping named:
==> collabn2: [  OK  ]
==> collabn2: Starting named:
==> collabn2: [  OK  ]
==> collabn2: successfully completed named steps
==> collabn1: Box 'kikitux/oracle6-racattack' could not be found. Attempting to find and install...
    collabn1: Box Provider: virtualbox
    collabn1: Box Version: >= 0
==> collabn1: Loading metadata for box 'kikitux/oracle6-racattack'
    collabn1: URL: https://atlas.hashicorp.com/kikitux/oracle6-racattack
==> collabn1: Adding box 'kikitux/oracle6-racattack' (v16.01.01) for provider: virtualbox
==> collabn1: Importing base box 'kikitux/oracle6-racattack'...
==> collabn1: Matching MAC address for NAT networking...
==> collabn1: Checking if box 'kikitux/oracle6-racattack' is up to date...
==> collabn1: Setting the name of the VM: collabn1.1703260604
==> collabn1: Fixed port collision for 22 => 2222. Now on port 2201.
==> collabn1: Clearing any previously set network interfaces...
==> collabn1: Preparing network interfaces based on configuration...
    collabn1: Adapter 1: nat
    collabn1: Adapter 2: hostonly
    collabn1: Adapter 3: hostonly
==> collabn1: Forwarding ports...
    collabn1: 22 (guest) => 2201 (host) (adapter 1)
==> collabn1: Running 'pre-boot' VM customizations...
==> collabn1: Booting VM...
==> collabn1: Waiting for machine to boot. This may take a few minutes...
    collabn1: SSH address: 127.0.0.1:2201
    collabn1: SSH username: vagrant
    collabn1: SSH auth method: private key
    collabn1: Warning: Remote connection disconnect. Retrying...
==> collabn1: Machine booted and ready!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[collabn1] GuestAdditions versions on your host (5.1.18) and guest (5.0.0) do not match.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Loaded plugins: security
Setting up Install Process
http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64 error was
14: PYCURL ERROR 6 - "Couldn't resolve host 'mirrors.fedoraproject.org'"
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Package kernel-uek-devel-2.6.39-400.250.9.el6uek.x86_64 already installed and latest version
Package gcc-4.4.7-16.el6.x86_64 already installed and latest version
Package 1:make-3.81-20.el6.x86_64 already installed and latest version
Package 4:perl-5.10.1-141.el6.x86_64 already installed and latest version
Package bzip2-1.0.5-7.el6_0.x86_64 already installed and latest version
Nothing to do
Copy iso file D:\Program Files\Oracle\VirtualBox\VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 5.1.18 - guest version is 5.0.0
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.1.18 Guest Additions for Linux...........
VirtualBox Guest Additions installer
Removing installed version 5.0.0 of VirtualBox Guest Additions...
Stopping VirtualBox Additions [FAILED]
(Cannot unload module vboxguest)
Removing existing VirtualBox non-DKMS kernel modules[  OK  ]
[  OK  ] VirtualBox Guest Addition service [  OK  ]
Copying additional installer modules ...
Installing additional modules ...
vboxadd.sh: Building Guest Additions kernel modules.
vboxadd.sh: You should restart your guest to make sure the new modules are actually used.
vboxadd.sh: Starting the VirtualBox Guest Additions.

Could not find the X.Org or XFree86 Window System, skipping.


Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
==> collabn1: Checking for guest additions in VM...
    collabn1: The guest additions on this VM do not match the installed version of
    collabn1: VirtualBox! In most cases this is fine, but in rare cases it can
    collabn1: prevent things such as shared folders from working properly. If you see
    collabn1: shared folder errors, please make sure the guest additions within the
    collabn1: virtual machine match the version of VirtualBox you have installed on
    collabn1: your host and reload your VM.
    collabn1:
    collabn1: Guest Additions Version: 5.0.0
    collabn1: VirtualBox Version: 5.1
==> collabn1: Setting hostname...
==> collabn1: Configuring and enabling network interfaces...
==> collabn1: Mounting shared folders...
    collabn1: /vagrant => E:/racattack-ansible-oracle
    collabn1: /media/sf_12cR1 => E:/racattack-ansible-oracle/12cR1
==> collabn1: Detected mount owner ID within mount options. (uid: 54320 guestpath: /media/sf_12cR1)
==> collabn1: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/sf_12cR1)
    collabn1: /media/stagefiles => E:/racattack-ansible-oracle/stagefiles
==> collabn1: Detected mount owner ID within mount options. (uid: 1000 guestpath: /media/stagefiles)
==> collabn1: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/stagefiles)
==> collabn1: Running provisioner: shell...
    collabn1: Running: inline script
==> collabn1: overwriting /etc/resolv.conf
==> collabn1: Running provisioner: shell...
    collabn1: Running: inline script
==> collabn1: wrote key file "/etc/rndc.key"
==> collabn1: Stopping named:
==> collabn1: [  OK  ]
==> collabn1: Starting named:
==> collabn1: [  OK  ]
==> collabn1: successfully completed named steps

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant status

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Current machine states:

collabn2                  running (virtualbox)
collabn1                  running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vboxmanage list runningvms
"hashicorp_default_1490531708969_67077" {ab780940-aeef-4e4c-a868-6b5c6f81af2b}
"collabn2.1703260556" {71023f40-8635-4664-8c6e-730a1bfbe0e1}
"collabn1.1703260604" {d20095ef-e5ed-4554-96e0-0168125b3dd8}

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant ssh collabn1

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Last login: Sun Mar 26 13:20:43 2017 from 10.0.2.2
[vagrant@collabn1 ~]$ ls -al
total 32
drwx------  3 vagrant vagrant 4096 Mar 26 13:10 .
drwxr-xr-x. 5 root    root    4096 Aug  4  2015 ..
-rw-------  1 vagrant vagrant  139 Mar 26 13:22 .bash_history
-rw-r--r--  1 vagrant vagrant   18 May  7  2015 .bash_logout
-rw-r--r--  1 vagrant vagrant  176 May  7  2015 .bash_profile
-rw-r--r--  1 vagrant vagrant  124 May  7  2015 .bashrc
-rw-r--r--  1 vagrant vagrant  121 Dec 20  2012 .kshrc
drwx------  2 vagrant vagrant 4096 Aug  4  2015 .ssh

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
I don't know the oracle password, and it's not the same as the username.
[vagrant@collabn1 ~]$ su - oracle
Password:
su: incorrect password
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

[vagrant@collabn1 ~]$ sudo su - oracle
[oracle@collabn1 ~]$ exit
logout
[vagrant@collabn1 ~]$ sudo su -
[root@collabn1 ~]# cat /etc/passwd | column -t -s :
root       x  0      0      root                          /root                /bin/bash
bin        x  1      1      bin                           /bin                 /sbin/nologin
daemon     x  2      2      daemon                        /sbin                /sbin/nologin
adm        x  3      4      adm                           /var/adm             /sbin/nologin
lp         x  4      7      lp                            /var/spool/lpd       /sbin/nologin
sync       x  5      0      sync                          /sbin                /bin/sync
shutdown   x  6      0      shutdown                      /sbin                /sbin/shutdown
halt       x  7      0      halt                          /sbin                /sbin/halt
mail       x  8      12     mail                          /var/spool/mail      /sbin/nologin
uucp       x  10     14     uucp                          /var/spool/uucp      /sbin/nologin
operator   x  11     0      operator                      /root                /sbin/nologin
games      x  12     100    games                         /usr/games           /sbin/nologin
gopher     x  13     30     gopher                        /var/gopher          /sbin/nologin
ftp        x  14     50     FTP User                      /var/ftp             /sbin/nologin
nobody     x  99     99     Nobody                        /                    /sbin/nologin
vcsa       x  69     69     virtual console memory owner  /dev                 /sbin/nologin
rpc        x  32     32     Rpcbind Daemon                /var/cache/rpcbind   /sbin/nologin
rpcuser    x  29     29     RPC Service User              /var/lib/nfs         /sbin/nologin
nfsnobody  x  65534  65534  Anonymous NFS User            /var/lib/nfs         /sbin/nologin
saslauth   x  499    76     "Saslauthd user"              /var/empty/saslauth  /sbin/nologin
postfix    x  89     89     /var/spool/postfix            /sbin/nologin
sshd       x  74     74     Privilege-separated SSH       /var/empty/sshd      /sbin/nologin
named      x  25     25     Named                         /var/named           /sbin/nologin
dbus       x  81     81     System message bus            /                    /sbin/nologin
oracle     x  54321  54321  /home/oracle                  /bin/bash
applmgr    x  54322  54321  /home/applmgr                 /bin/bash
puppet     x  52     52     Puppet                        /var/lib/puppet      /sbin/nologin
vboxadd    x  498    1      /var/run/vboxadd              /bin/false
vagrant    x  1000   1000   /home/vagrant                 /bin/bash
[root@collabn1 ~]# df -h
Filesystem        Size  Used Avail Use% Mounted on
/dev/sda3          27G  2.6G   23G  11% /
tmpfs             1.5G     0  1.5G   0% /dev/shm
/dev/sda1         485M   93M  367M  21% /boot
/dev/sdb1          50G  180M   48G   1% /u01
vagrant           466G  370G   97G  80% /vagrant
media_sf_12cR1    466G  370G   97G  80% /media/sf_12cR1
media_stagefiles  466G  370G   97G  80% /media/stagefiles
[root@collabn1 ~]# cd /media/sf_12cR1/
[root@collabn1 sf_12cR1]# ls -l
total 4962982
-rwxrwxrwx 1 54320 oinstall          0 Mar 26 12:45 keep
-rwxrwxrwx 1 54320 oinstall 1673544724 Mar 25 20:20 linuxamd64_12102_database_1of2.zip
-rwxrwxrwx 1 54320 oinstall 1014530602 Mar 25 20:32 linuxamd64_12102_database_2of2.zip
-rwxrwxrwx 1 54320 oinstall 1747043545 Mar 25 20:44 linuxamd64_12102_grid_1of2.zip
-rwxrwxrwx 1 54320 oinstall  646972897 Mar 25 20:42 linuxamd64_12102_grid_2of2.zip
-rwxrwxrwx 1 54320 oinstall        181 Mar 26 12:45 readme.txt
[root@collabn1 sf_12cR1]#

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant ssh collabn2

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave
Last login: Sun Mar 26 13:10:38 2017 from 10.0.2.2
[vagrant@collabn2 ~]$ ls -al
total 32
drwx------  3 vagrant vagrant 4096 Mar 26 13:14 .
drwxr-xr-x. 5 root    root    4096 Aug  4  2015 ..
-rw-------  1 vagrant vagrant   56 Mar 26 13:14 .bash_history
-rw-r--r--  1 vagrant vagrant   18 May  7  2015 .bash_logout
-rw-r--r--  1 vagrant vagrant  176 May  7  2015 .bash_profile
-rw-r--r--  1 vagrant vagrant  124 May  7  2015 .bashrc
-rw-r--r--  1 vagrant vagrant  121 Dec 20  2012 .kshrc
drwx------  2 vagrant vagrant 4096 Aug  4  2015 .ssh
[vagrant@collabn2 ~]$ sudo su -
[root@collabn2 ~]# cat /etc/passwd | column -t -s :
root       x  0      0      root                          /root                /bin/bash
bin        x  1      1      bin                           /bin                 /sbin/nologin
daemon     x  2      2      daemon                        /sbin                /sbin/nologin
adm        x  3      4      adm                           /var/adm             /sbin/nologin
lp         x  4      7      lp                            /var/spool/lpd       /sbin/nologin
sync       x  5      0      sync                          /sbin                /bin/sync
shutdown   x  6      0      shutdown                      /sbin                /sbin/shutdown
halt       x  7      0      halt                          /sbin                /sbin/halt
mail       x  8      12     mail                          /var/spool/mail      /sbin/nologin
uucp       x  10     14     uucp                          /var/spool/uucp      /sbin/nologin
operator   x  11     0      operator                      /root                /sbin/nologin
games      x  12     100    games                         /usr/games           /sbin/nologin
gopher     x  13     30     gopher                        /var/gopher          /sbin/nologin
ftp        x  14     50     FTP User                      /var/ftp             /sbin/nologin
nobody     x  99     99     Nobody                        /                    /sbin/nologin
vcsa       x  69     69     virtual console memory owner  /dev                 /sbin/nologin
rpc        x  32     32     Rpcbind Daemon                /var/cache/rpcbind   /sbin/nologin
rpcuser    x  29     29     RPC Service User              /var/lib/nfs         /sbin/nologin
nfsnobody  x  65534  65534  Anonymous NFS User            /var/lib/nfs         /sbin/nologin
saslauth   x  499    76     "Saslauthd user"              /var/empty/saslauth  /sbin/nologin
postfix    x  89     89     /var/spool/postfix            /sbin/nologin
sshd       x  74     74     Privilege-separated SSH       /var/empty/sshd      /sbin/nologin
named      x  25     25     Named                         /var/named           /sbin/nologin
dbus       x  81     81     System message bus            /                    /sbin/nologin
oracle     x  54321  54321  /home/oracle                  /bin/bash
applmgr    x  54322  54321  /home/applmgr                 /bin/bash
puppet     x  52     52     Puppet                        /var/lib/puppet      /sbin/nologin
vboxadd    x  498    1      /var/run/vboxadd              /bin/false
vagrant    x  1000   1000   /home/vagrant                 /bin/bash
[root@collabn2 ~]# df -h
Filesystem        Size  Used Avail Use% Mounted on
/dev/sda3          27G  2.6G   23G  11% /
tmpfs             1.5G     0  1.5G   0% /dev/shm
/dev/sda1         485M   93M  367M  21% /boot
/dev/sdb1          50G  180M   48G   1% /u01
vagrant           466G  370G   97G  80% /vagrant
media_sf_12cR1    466G  370G   97G  80% /media/sf_12cR1
media_stagefiles  466G  370G   97G  80% /media/stagefiles
[root@collabn2 ~]# cd /media/sf_12cR1/
[root@collabn2 sf_12cR1]# ls -l
total 4962982
-rwxrwxrwx 1 54320 oinstall          0 Mar 26 12:45 keep
-rwxrwxrwx 1 54320 oinstall 1673544724 Mar 25 20:20 linuxamd64_12102_database_1of2.zip
-rwxrwxrwx 1 54320 oinstall 1014530602 Mar 25 20:32 linuxamd64_12102_database_2of2.zip
-rwxrwxrwx 1 54320 oinstall 1747043545 Mar 25 20:44 linuxamd64_12102_grid_1of2.zip
-rwxrwxrwx 1 54320 oinstall  646972897 Mar 25 20:42 linuxamd64_12102_grid_2of2.zip
-rwxrwxrwx 1 54320 oinstall        181 Mar 26 12:45 readme.txt
[root@collabn2 sf_12cR1]#

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant ssh-config

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Host collabn2
  HostName 127.0.0.1
  User vagrant
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile C:/Users/falcon/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host collabn1
  HostName 127.0.0.1
  User vagrant
  Port 2201
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile C:/Users/falcon/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL


falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$

Using username "vagrant".
Authenticating with public key "imported-openssh-key"
Last login: Sun Mar 26 14:00:36 2017 from 10.0.2.2
[vagrant@collabn1 ~]$

Using username "vagrant".
Authenticating with public key "imported-openssh-key"
Last login: Sun Mar 26 14:01:14 2017 from 10.0.2.2
[vagrant@collabn2 ~]$

February 9, 2017

Steps to Recreate Central Inventory in Real Applications Clusters (Doc ID 413939.1)

Filed under: 12c,RAC — mdinh @ 3:13 am

$ echo $ORACLE_HOME

/u01/app/oracle/product/12.1.0/db_1

$ $ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME

Oracle Interim Patch Installer version 12.1.0.1.3
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/db_1/oraInst.loc
OPatch version    : 12.1.0.1.3
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/opatch/opatch2017-02-08_15-56-03PM_1.log

List of Homes on this system:

Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
   Oracle Home dir. path does not exist in Central Inventory
   Oracle Home is a symbolic link
   Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73

This happened due to an error during install – an oraInventory mismatch (note that the INVENTORY_LOCATION passed to runInstaller below does not match the inventory_loc in /etc/oraInst.loc).

$ cat /etc/oraInst.loc
inst_group=oinstall
inventory_loc=/u01/app/oraInventory
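
A quick way to confirm the mismatch (a minimal check, assuming the inventory_loc shown above) is to look for the home entries in the central inventory's ContentsXML/inventory.xml; the failing home would be absent or marked removed:

$ grep -i 'HOME NAME' /u01/app/oraInventory/ContentsXML/inventory.xml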

$ cd /u01/software/database
$ export DISTRIB=`pwd`
$ ./runInstaller -silent -showProgress -waitforcompletion -force -ignorePrereq -responseFile $DISTRIB/response/db_install.rsp \
> oracle.install.option=INSTALL_DB_SWONLY \
> UNIX_GROUP_NAME=oinstall \
> INVENTORY_LOCATION=/u01/app/oracle/oraInventory \

Back up oraInventory on both nodes, then attach the Grid and Database homes:
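
A minimal backup sketch (run on each node; the inventory path comes from /etc/oraInst.loc above, and the /tmp target is just an assumption for illustration):

$ tar -czf /tmp/oraInventory_$(hostname -s)_$(date +%Y%m%d).tar.gz -C /u01/app oraInventory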

$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -silent -ignoreSysPrereqs -attachHome \
ORACLE_HOME="/u02/app/12.1.0/grid" ORACLE_HOME_NAME="OraGI12Home1" \
LOCAL_NODE="node01" CLUSTER_NODES="{node01,node02}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'AttachHome' was successful.

$ ./runInstaller -silent -ignoreSysPrereqs -attachHome \
ORACLE_HOME="/u01/app/oracle/product/12.1.0/db_1" ORACLE_HOME_NAME="OraDB12Home1" \
LOCAL_NODE="node01" CLUSTER_NODES="{node01,node02}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'AttachHome' was successful.
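
With both homes attached, a sanity check (same paths as above; -all simply lists every home registered in the central inventory) should no longer fail with error code 73:

$ $ORACLE_HOME/OPatch/opatch lsinventory -all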

February 8, 2017

runcluvfy.sh -pre crsinst NTP failed PRVF-07590 PRVG-01017

Filed under: 12c,RAC — mdinh @ 12:56 pm

12c (12.1.0.2.0) RAC Oracle Linux Server release 7.3
/u01/software/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

Starting Clock synchronization checks using Network Time Protocol(NTP)...

Checking existence of NTP configuration file "/etc/ntp.conf" across nodes
  Node Name                             File exists?            
  ------------------------------------  ------------------------
  node02                                yes                     
  node01                                yes                     
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?                
  ------------------------------------  ------------------------
  node02                                no                      
  node01                                yes                     
PRVF-7590 : "ntpd" is not running on node "node02"
PRVG-1017 : NTP configuration file is present on nodes "node02" on which NTP daemon or service was not running
Result: Clock synchronization check using Network Time Protocol(NTP) failed

NTP was indeed running on both nodes.
The issue was that /var/run/ntpd.pid did not exist on the failed node: ntpd had been started without the options in /etc/sysconfig/ntpd (no -x slewing and no -p pid file, as the BAD output below shows), so cluvfy reported the daemon as not running.
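
A quick way to spot this on each node (a minimal check, assuming the systemd service and sysconfig file shown below) is to compare what the running daemon was actually started with against /etc/sysconfig/ntpd:

# ps -o args= -C ntpd
# grep '^OPTIONS' /etc/sysconfig/ntpd
# ls -l /var/run/ntpd.pid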

GOOD:

# cat /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# systemctl status ntpd.service
ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-02-03 20:37:18 CST; 3 days ago
 Main PID: 22517 (ntpd)
   CGroup: /system.slice/ntpd.service
           /usr/sbin/ntpd -u ntp:ntp -x -u ntp:ntp -p /var/run/ntpd.pid

# ll /var/run/ntpd.*
-rw-r--r-- 1 root root 5 Feb  3 20:37 /var/run/ntpd.pid

BAD:

# cat /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# systemctl status ntpd.service
ntpd.service - Network Time Service           
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-02-03 18:10:23 CST; 3 days ago
 Main PID: 22403 (ntpd)
   CGroup: /system.slice/ntpd.service
           /usr/sbin/ntpd -u ntp:ntp -g           

# ll /var/run/ntpd.*
ls: cannot access /var/run/ntpd.*: No such file or directory

SOLUTION:

Restart ntpd on the failed node so it is relaunched with the options from /etc/sysconfig/ntpd (including -p /var/run/ntpd.pid), then rerun the cluvfy check.
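
A minimal sketch of the fix and verification (assuming the service name and the runcluvfy.sh invocation shown earlier):

# systemctl restart ntpd
# ps -o args= -C ntpd               # should now show -x -u ntp:ntp -p /var/run/ntpd.pid
# ls -l /var/run/ntpd.pid
$ /u01/software/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose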