Thinking Out Loud

November 23, 2017

CRS-2674: Start of dbfs_mount failed

Filed under: 12c,GoldenGate,oracle,RAC — mdinh @ 1:04 am

$ crsctl start resource dbfs_mount
CRS-2672: Attempting to start 'dbfs_mount' on 'node2'
CRS-2672: Attempting to start 'dbfs_mount' on 'node1'
CRS-2674: Start of 'dbfs_mount' on 'node1' failed
CRS-2679: Attempting to clean 'dbfs_mount' on 'node1'
CRS-2674: Start of 'dbfs_mount' on 'node2' failed
CRS-2679: Attempting to clean 'dbfs_mount' on 'node2'
CRS-2681: Clean of 'dbfs_mount' on 'node1' succeeded
CRS-2681: Clean of 'dbfs_mount' on 'node2' succeeded
CRS-4000: Command Start failed, or completed with errors.

Check to make sure the DBFS_USER password is not expired.
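A quick way to check (the DBFS repository owner here being DBFS_USER; adjust to your environment):

$ sqlplus -s / as sysdba <<EOF
select username, account_status, expiry_date from dba_users where username = 'DBFS_USER';
EOF

If ACCOUNT_STATUS shows EXPIRED, reset the password, update it wherever dbfs_client reads it from (e.g. the wallet), and start the resource again.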


November 5, 2017

Relocate Services Back To Instance Before Patching

Filed under: 12c,RAC — mdinh @ 1:16 pm

This will only work for a two-node RAC!

Prerequisite:
Patching starts at instance1; services fail over to instance2.
Patching completes at instance1; restart instance1.
Patching starts at instance2; services fail over to instance1.
Patching completes at instance2; restart instance2.
All services are now running on instance1.
Relocate services back to instance2, where they belong.

Save existing service configuration before patching.
[oracle@racnode-dc1-2 rac_relocate]$ ./save_service.sh

 

+ srvctl status database -d orclcdb -v
+ srvctl status database -d orclcdb -v
+ awk '-F ' '{print $2}'
+ cat /tmp/services.conf
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
+ cat /tmp/instance.conf
orclcdb1
orclcdb2
++ tail -1 /tmp/services.conf
++ awk '-F ' '{print $11}'
++ awk '{$0=substr($0,1,length($0)-1); print $0}'
+ svc=testsvc26,testsvc27,testsvc28,testsvc29
+ exit
[oracle@racnode-dc1-2 rac_relocate]$

 

Patching completed at instance1 and is starting at instance2.
After instance2 is stopped with -failover, all services are running on instance1.

 

[oracle@racnode-dc1-2 rac_relocate]$ srvctl stop instance -db orclcdb -instance orclcdb2 -failover
[oracle@racnode-dc1-2 rac_relocate]$ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is not running on node racnode-dc1-2
[oracle@racnode-dc1-2 rac_relocate]$

 

Patching completed at instance2; start instance2. All services are still running on instance1.

[oracle@racnode-dc1-2 rac_relocate]$ srvctl start instance -db orclcdb -instance orclcdb2
[oracle@racnode-dc1-2 rac_relocate]$ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2. Instance status: Open.
[oracle@racnode-dc1-2 rac_relocate]$

Verify that relocating services will work as intended by testing first – print the commands but do not execute them.

[oracle@racnode-dc1-2 rac_relocate]$ ./test_relocate.sh
================================================================================
++++++ Saved Configuration
-rw-r--r-- 1 oracle oinstall  18 Nov  5 13:01 /tmp/instance.conf
-rw-r--r-- 1 oracle oinstall 291 Nov  5 13:01 /tmp/services.conf
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
orclcdb1
orclcdb2
================================================================================
++++++ Relocate Configuration
newinst=orclcdb2
oldinst=orclcdb1
svc=testsvc26,testsvc27,testsvc28,testsvc29
================================================================================
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2. Instance status: Open.
srvctl relocate service -db orclcdb -service testsvc26 -oldinst orclcdb1 -newinst orclcdb2
srvctl relocate service -db orclcdb -service testsvc27 -oldinst orclcdb1 -newinst orclcdb2
srvctl relocate service -db orclcdb -service testsvc28 -oldinst orclcdb1 -newinst orclcdb2
srvctl relocate service -db orclcdb -service testsvc29 -oldinst orclcdb1 -newinst orclcdb2
[oracle@racnode-dc1-2 rac_relocate]$

Relocate services back to the original saved configuration.

[oracle@racnode-dc1-2 rac_relocate]$ ./relocate_service.sh
================================================================================
++++++ Saved Configuration
-rw-r--r-- 1 oracle oinstall  18 Nov  5 13:01 /tmp/instance.conf
-rw-r--r-- 1 oracle oinstall 291 Nov  5 13:01 /tmp/services.conf
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
orclcdb1
orclcdb2
================================================================================
++++++ Relocate Configuration
newinst=orclcdb2
oldinst=orclcdb1
svc=testsvc26,testsvc27,testsvc28,testsvc29
================================================================================
+ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15,testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2. Instance status: Open.
+ IFS=,
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc26 -oldinst orclcdb1 -newinst orclcdb2
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc27 -oldinst orclcdb1 -newinst orclcdb2
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc28 -oldinst orclcdb1 -newinst orclcdb2
+ for s in '${svc}'
+ srvctl relocate service -db orclcdb -service testsvc29 -oldinst orclcdb1 -newinst orclcdb2
+ srvctl status database -d orclcdb -v
Instance orclcdb1 is running on node racnode-dc1-1 with online services DBA_TEST,testsvc11,testsvc12,testsvc13,testsvc14,testsvc15. Instance status: Open.
Instance orclcdb2 is running on node racnode-dc1-2 with online services testsvc26,testsvc27,testsvc28,testsvc29. Instance status: Open.
+ exit
[oracle@racnode-dc1-2 rac_relocate]$

I have ranted about hardcoding before.
YES! I hardcoded the conf file location to provide a permanent and consistent location for all environments.

I don't like having to dig through code to find such information.
ex:
SCRIPT_DIR=/u01/app/oracle/scripts
LOG_DIR=$SCRIPT_DIR/log

save_service.sh


#!/bin/sh -x
# db must be exported in the calling environment, e.g. export db=orclcdb
# Save current service/instance placement to /tmp for later relocation.
srvctl status database -d ${db} -v > /tmp/services.conf
srvctl status database -d ${db} -v|awk -F" " '{print $2}' > /tmp/instance.conf
cat /tmp/services.conf
cat /tmp/instance.conf
# Services running on the last listed instance (field 11), with the trailing period stripped.
svc=`tail -1 /tmp/services.conf | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
exit

 

test_relocate.sh


#!/bin/sh
echo "================================================================================"
echo "++++++ Saved Configuration"
ls -l /tmp/*.conf
cat /tmp/services.conf
cat /tmp/instance.conf
echo "================================================================================"
echo "++++++ Relocate Configuration"
export svc=`tail -1 /tmp/services.conf | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
export oldinst=`head -1 /tmp/instance.conf`
export newinst=`tail -1 /tmp/instance.conf`
env|egrep 'svc|inst'|sort
echo "================================================================================"
srvctl status database -d ${db} -v
IFS=","
for s in ${svc}
do
echo "srvctl relocate service -db ${db} -service ${s} -oldinst ${oldinst} -newinst ${newinst}"
done
exit

 

relocate_service.sh


#!/bin/sh
echo "================================================================================"
echo "++++++ Saved Configuration"
ls -l /tmp/*.conf
cat /tmp/services.conf
cat /tmp/instance.conf
echo "================================================================================"
echo "++++++ Relocate Configuration"
export svc=`tail -1 /tmp/services.conf | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
export oldinst=`head -1 /tmp/instance.conf`
export newinst=`tail -1 /tmp/instance.conf`
env|egrep 'svc|inst'|sort
echo "================================================================================"
set -x
srvctl status database -d ${db} -v
IFS=","
for s in ${svc}
do
srvctl relocate service -db ${db} -service ${s} -oldinst ${oldinst} -newinst ${newinst}
done
srvctl status database -d ${db} -v
exit
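
Note that none of the scripts set ${db}; it is assumed to be exported in the calling environment before they are run, e.g.:

export db=orclcdb
./save_service.sh       # before patching
./test_relocate.sh      # dry run: prints the relocate commands only
./relocate_service.sh   # relocates services back per the saved configuration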

 

12.1 Improved Service Failover

Filed under: 12c,RAC — mdinh @ 12:51 pm

11gR2 Database Services and Instance Shutdown

The thought of having to manually relocate dozens of services was not very appealing.

As it turns out, there is no need to manually relocate services.

srvctl stop instance -db orclcdb -instance orclcdb1 -failover will do the trick.

Comparing the two commands, the 12c syntax is a lot clearer / cleaner.

12c:
srvctl add service -db orclcdb -service DBA_TEST -preferred orclcdb1 -available orclcdb2 -failovertype SELECT -tafpolicy BASIC

11g:
srvctl add service -d orclcdb -s DBA_TEST -P BASIC -e SELECT -r orclcdb1 -a orclcdb2

DEMO:

$ srvctl config service -d orclcdb -s DBA_TEST|egrep -i 'Service name|Preferred instances|Available instances|failover'

Service name: DBA_TEST
Failover type: SELECT
Failover method:
TAF failover retries:
TAF failover delay:
Preferred instances: orclcdb1
Available instances: orclcdb2

$ srvctl status database -d orclcdb

Instance orclcdb1 is running on node racnode-dc1-1
Instance orclcdb2 is running on node racnode-dc1-2

$ sqlplus mdinh/mdinh@dbatest @t.sql

SQL*Plus: Release 12.1.0.2.0 Production on Sun Nov 5 04:17:56 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sun Nov 05 2017 04:15:29 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options


   INST_ID STARTUP_TIME
---------- -----------------------------
         1 05-NOV-2017 04:12:55
         2 05-NOV-2017 04:14:49


   INST_ID FAILOVER_TYPE FAILOVER_M FAI
---------- ------------- ---------- ---
         1 NONE          NONE       NO
         1 SELECT        BASIC      NO
         2 NONE          NONE       NO

04:17:57 MDINH @ dbatest:>host
[oracle@racnode-dc1-1 ~]$ srvctl stop instance -db orclcdb -instance orclcdb1 -failover;date
Sun Nov 5 04:18:34 CET 2017
[oracle@racnode-dc1-1 ~]$ exit
exit

04:18:37 MDINH @ dbatest:>@t.sql

   INST_ID STARTUP_TIME
---------- -----------------------------
         2 05-NOV-2017 04:14:49


   INST_ID FAILOVER_TYPE FAILOVER_M FAI
---------- ------------- ---------- ---
         2 SELECT        BASIC      YES

04:18:40 MDINH @ dbatest:>
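
t.sql is not listed here; based on the column headings above, it queries gv$instance and gv$session, roughly along these lines:

set lines 200
select inst_id, to_char(startup_time,'DD-MON-YYYY HH24:MI:SS') startup_time
from gv$instance order by inst_id;

select inst_id, failover_type, failover_method, failed_over
from gv$session where username = 'MDINH';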

April 1, 2017

Decipher EM Error Message for RAC

Filed under: RAC — mdinh @ 3:24 am

I am not sure if there is a way to have EM display the actual commands it uses to check for and alert on errors.

It would be nice to be able to verify using the same syntax.

Examples of errors I was paged for:

Message=ora.net2.network has 1 instances in OFFLINE State
Key Value=resource_ora.network.type_ora.net2.network
Message=ora.host01_2.vip has 1 instances in OFFLINE State
Key Value=resource_ora.cluster_vip_net2.type_ora.host01_2.vip

Of course, crsctl stat res -t can be used, but the result is 170 lines of output.

I finally figured out how to simplify the output.

Find resource type:

crsctl stat res|grep -i type|sort -u

TYPE=app.appvipx.type
TYPE=local_resource
TYPE=ora.asm.type
TYPE=ora.cluster_vip_net1.type
TYPE=ora.cluster_vip_net2.type
TYPE=ora.cvu.type
TYPE=ora.database.type
TYPE=ora.diskgroup.type
TYPE=ora.listener.type
TYPE=ora.mgmtdb.type
TYPE=ora.mgmtlsnr.type
TYPE=ora.network.type
TYPE=ora.oc4j.type
TYPE=ora.ons.type
TYPE=ora.scan_listener.type
TYPE=ora.scan_vip.type
TYPE=ora.service.type
TYPE=xag.goldengate.type

Check state for resource type:

crsctl stat res -w "TYPE = ora.network.type"

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE               , ONLINE
STATE=ONLINE on host01, ONLINE on host02

NAME=ora.net2.network
TYPE=ora.network.type
TARGET=ONLINE               , ONLINE
STATE=ONLINE on host01, ONLINE on host02

crsctl stat res -w "TYPE = ora.cluster_vip_net1.type"

NAME=ora.host01.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on host01

NAME=ora.host02.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on host02

crsctl stat res -w "TYPE = ora.cluster_vip_net2.type"

NAME=ora.host01_2.vip
TYPE=ora.cluster_vip_net2.type
TARGET=ONLINE
STATE=ONLINE on host01

NAME=ora.host02_2.vip
TYPE=ora.cluster_vip_net2.type
TARGET=ONLINE
STATE=ONLINE on host02
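
To get closer to what EM is alerting on, the same filter can be wrapped in a quick loop that only prints resources with an OFFLINE instance; just a sketch reusing the crsctl syntax above (add or remove resource types as needed):

for t in ora.network.type ora.cluster_vip_net1.type ora.cluster_vip_net2.type
do
  echo "### ${t}"
  crsctl stat res -w "TYPE = ${t}" | grep -B3 OFFLINE
done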

March 26, 2017

racattack-ansible-oracle Up and Running

Filed under: RAC,Vagrant,VirtualBox — mdinh @ 2:04 pm

From a time long ago – https://mdinh.wordpress.com/2016/12/04/toys-for-when-you-i-are-bored/

With help from oravirt, I was able to install RAC VMs.

At this point, only the VM servers have been created and GI/DB are not installed; that’s coming up at some point.

Some clarification for setup=standard vagrant provision

setup=standard (shell environment variable)

vagrant provision (executable)

This is where the confusion was at first.

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ setup=standard vagrant provision

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51

======================================================================

E:\racattack-ansible-oracle>setup=standard vagrant provision
'setup' is not recognized as an internal or external command,
operable program or batch file.

E:\racattack-ansible-oracle>
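
In Windows CMD, the environment variable has to be set on its own line first; something like this should be the equivalent (not tested here):

E:\racattack-ansible-oracle>set setup=standard

E:\racattack-ansible-oracle>vagrant provision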

Follow https://github.com/racattack/racattack-ansible-oracle

There were some errors, but everything seems to be working fine.

Note: I used Git Bash this time around vs. Windows CMD.

One improvement I would make, if I ever get good enough on the subject, is to have the shared folders for linuxamd64_12102*.zip use existing locations.

The way most Vagrant environments are configured, multiple copies of the same binaries are needed.

Alternatively, edit the VM shared folder manually.
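
For example, with VBoxManage (folder name as reported by df inside the guest; E:/oracle_binaries is a made-up path, adjust to where the zips already live):

VBoxManage sharedfolder remove collabn1.1703260604 --name media_sf_12cR1 --transient
VBoxManage sharedfolder add collabn1.1703260604 --name media_sf_12cR1 --hostpath "E:/oracle_binaries" --transient

Keep in mind vagrant reload re-creates the synced folders from the Vagrantfile, so changing the synced folder definition there is the more permanent fix.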

falcon@falconidae MINGW64 /e
$ git clone --recursive https://github.com/racattack/racattack-ansible-oracle
Cloning into 'racattack-ansible-oracle'...
remote: Counting objects: 320, done.
Receiving objects:  79%remote: Total 320 (delta 0), reused 0 (delta 0), pack-reused 320
Receiving objects: 100% (320/320), 52.22 KiB | 0 bytes/s, done.
Resolving deltas: 100% (210/210), done.
Submodule 'stagefiles/ansible-oracle' (https://github.com/oravirt/ansible-oracle) registered for path 'stagefiles/ansible-oracle'
Cloning into 'E:/racattack-ansible-oracle/stagefiles/ansible-oracle'...
remote: Counting objects: 2061, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 2061 (delta 1), reused 0 (delta 0), pack-reused 2052
Receiving objects: 100% (2061/2061), 517.76 KiB | 0 bytes/s, done.
Resolving deltas: 100% (954/954), done.
Submodule path 'stagefiles/ansible-oracle': checked out '00651e0caf9a876fcefe51d21e44a6e78c313e76'

======================================================================

falcon@falconidae MINGW64 /e
$ cd racattack-ansible-oracle

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ ls -l
total 20
drwxr-xr-x 1 falcon 197121    0 Mar 26 05:45 12cR1/
-rw-r--r-- 1 falcon 197121 3863 Mar 26 05:45 README.md
drwxr-xr-x 1 falcon 197121    0 Mar 26 05:45 stagefiles/
-rw-r--r-- 1 falcon 197121 9706 Mar 26 05:45 Vagrantfile

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vi Vagrantfile

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ ls -l 12cR1/*.zip
-rw-r--r-- 1 falcon 197121 1673544724 Mar 25 13:20 12cR1/linuxamd64_12102_database_1of2.zip
-rw-r--r-- 1 falcon 197121 1014530602 Mar 25 13:32 12cR1/linuxamd64_12102_database_2of2.zip
-rw-r--r-- 1 falcon 197121 1747043545 Mar 25 13:44 12cR1/linuxamd64_12102_grid_1of2.zip
-rw-r--r-- 1 falcon 197121  646972897 Mar 25 13:42 12cR1/linuxamd64_12102_grid_2of2.zip

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant status

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave
on first boot shared disks will be created, this will take some time

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Current machine states:

collabn2                  not created (virtualbox)
collabn1                  not created (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant up

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave
on first boot shared disks will be created, this will take some time

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Bringing machine 'collabn2' up with 'virtualbox' provider...
Bringing machine 'collabn1' up with 'virtualbox' provider...
==> collabn2: Box 'kikitux/oracle6-racattack' could not be found. Attempting to find and install...
    collabn2: Box Provider: virtualbox
    collabn2: Box Version: >= 0
==> collabn2: Loading metadata for box 'kikitux/oracle6-racattack'
    collabn2: URL: https://atlas.hashicorp.com/kikitux/oracle6-racattack
==> collabn2: Adding box 'kikitux/oracle6-racattack' (v16.01.01) for provider: virtualbox
    collabn2: Downloading: https://atlas.hashicorp.com/kikitux/boxes/oracle6-racattack/versions/16.01.01/providers/virtualbox.box
    collabn2:
==> collabn2: Successfully added box 'kikitux/oracle6-racattack' (v16.01.01) for 'virtualbox'!
==> collabn2: Importing base box 'kikitux/oracle6-racattack'...
==> collabn2: Matching MAC address for NAT networking...
==> collabn2: Checking if box 'kikitux/oracle6-racattack' is up to date...
==> collabn2: Setting the name of the VM: collabn2.1703260556
==> collabn2: Fixed port collision for 22 => 2222. Now on port 2200.
==> collabn2: Clearing any previously set network interfaces...
==> collabn2: Preparing network interfaces based on configuration...
    collabn2: Adapter 1: nat
    collabn2: Adapter 2: hostonly
    collabn2: Adapter 3: hostonly
==> collabn2: Forwarding ports...
    collabn2: 22 (guest) => 2200 (host) (adapter 1)
==> collabn2: Running 'pre-boot' VM customizations...
==> collabn2: Booting VM...
==> collabn2: Waiting for machine to boot. This may take a few minutes...
    collabn2: SSH address: 127.0.0.1:2200
    collabn2: SSH username: vagrant
    collabn2: SSH auth method: private key
    collabn2: Warning: Remote connection disconnect. Retrying...
==> collabn2: Machine booted and ready!
[collabn2] GuestAdditions versions on your host (5.1.18) and guest (5.0.0) do not match.
Loaded plugins: security

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Setting up Install Process
http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64 error was
14: PYCURL ERROR 6 - "Couldn't resolve host 'mirrors.fedoraproject.org'"
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Package kernel-uek-devel-2.6.39-400.250.9.el6uek.x86_64 already installed and latest version
Package gcc-4.4.7-16.el6.x86_64 already installed and latest version
Package 1:make-3.81-20.el6.x86_64 already installed and latest version
Package 4:perl-5.10.1-141.el6.x86_64 already installed and latest version
Package bzip2-1.0.5-7.el6_0.x86_64 already installed and latest version
Nothing to do

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Copy iso file D:\Program Files\Oracle\VirtualBox\VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 5.1.18 - guest version is 5.0.0
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.1.18 Guest Additions for Linux...........
VirtualBox Guest Additions installer
Removing installed version 5.0.0 of VirtualBox Guest Additions...
Stopping VirtualBox Additions [FAILED]
(Cannot unload module vboxguest)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Removing existing VirtualBox non-DKMS kernel modules[  OK  ]
[  OK  ] VirtualBox Guest Addition service [  OK  ]
Copying additional installer modules ...
Installing additional modules ...
vboxadd.sh: Building Guest Additions kernel modules.
vboxadd.sh: You should restart your guest to make sure the new modules are actually used.
vboxadd.sh: Starting the VirtualBox Guest Additions.

Could not find the X.Org or XFree86 Window System, skipping.


Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
==> collabn2: Checking for guest additions in VM...
    collabn2: The guest additions on this VM do not match the installed version of
    collabn2: VirtualBox! In most cases this is fine, but in rare cases it can
    collabn2: prevent things such as shared folders from working properly. If you see
    collabn2: shared folder errors, please make sure the guest additions within the
    collabn2: virtual machine match the version of VirtualBox you have installed on
    collabn2: your host and reload your VM.
    collabn2:
    collabn2: Guest Additions Version: 5.0.0
    collabn2: VirtualBox Version: 5.1
==> collabn2: Setting hostname...
==> collabn2: Configuring and enabling network interfaces...
==> collabn2: Mounting shared folders...
    collabn2: /vagrant => E:/racattack-ansible-oracle
    collabn2: /media/sf_12cR1 => E:/racattack-ansible-oracle/12cR1
==> collabn2: Detected mount owner ID within mount options. (uid: 54320 guestpath: /media/sf_12cR1)
==> collabn2: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/sf_12cR1)
    collabn2: /media/stagefiles => E:/racattack-ansible-oracle/stagefiles
==> collabn2: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/stagefiles)
==> collabn2: Running provisioner: shell...
    collabn2: Running: inline script
==> collabn2: overwriting /etc/resolv.conf
==> collabn2: Running provisioner: shell...
    collabn2: Running: inline script
==> collabn2: Stopping named:
==> collabn2: [  OK  ]
==> collabn2: wrote key file "/etc/rndc.key"
==> collabn2: Stopping named:
==> collabn2: [  OK  ]
==> collabn2: Starting named:
==> collabn2: [  OK  ]
==> collabn2: successfully completed named steps
==> collabn1: Box 'kikitux/oracle6-racattack' could not be found. Attempting to find and install...
    collabn1: Box Provider: virtualbox
    collabn1: Box Version: >= 0
==> collabn1: Loading metadata for box 'kikitux/oracle6-racattack'
    collabn1: URL: https://atlas.hashicorp.com/kikitux/oracle6-racattack
==> collabn1: Adding box 'kikitux/oracle6-racattack' (v16.01.01) for provider: virtualbox
==> collabn1: Importing base box 'kikitux/oracle6-racattack'...
==> collabn1: Matching MAC address for NAT networking...
==> collabn1: Checking if box 'kikitux/oracle6-racattack' is up to date...
==> collabn1: Setting the name of the VM: collabn1.1703260604
==> collabn1: Fixed port collision for 22 => 2222. Now on port 2201.
==> collabn1: Clearing any previously set network interfaces...
==> collabn1: Preparing network interfaces based on configuration...
    collabn1: Adapter 1: nat
    collabn1: Adapter 2: hostonly
    collabn1: Adapter 3: hostonly
==> collabn1: Forwarding ports...
    collabn1: 22 (guest) => 2201 (host) (adapter 1)
==> collabn1: Running 'pre-boot' VM customizations...
==> collabn1: Booting VM...
==> collabn1: Waiting for machine to boot. This may take a few minutes...
    collabn1: SSH address: 127.0.0.1:2201
    collabn1: SSH username: vagrant
    collabn1: SSH auth method: private key
    collabn1: Warning: Remote connection disconnect. Retrying...
==> collabn1: Machine booted and ready!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[collabn1] GuestAdditions versions on your host (5.1.18) and guest (5.0.0) do not match.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Loaded plugins: security
Setting up Install Process
http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64 error was
14: PYCURL ERROR 6 - "Couldn't resolve host 'mirrors.fedoraproject.org'"
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Package kernel-uek-devel-2.6.39-400.250.9.el6uek.x86_64 already installed and latest version
Package gcc-4.4.7-16.el6.x86_64 already installed and latest version
Package 1:make-3.81-20.el6.x86_64 already installed and latest version
Package 4:perl-5.10.1-141.el6.x86_64 already installed and latest version
Package bzip2-1.0.5-7.el6_0.x86_64 already installed and latest version
Nothing to do
Copy iso file D:\Program Files\Oracle\VirtualBox\VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 5.1.18 - guest version is 5.0.0
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.1.18 Guest Additions for Linux...........
VirtualBox Guest Additions installer
Removing installed version 5.0.0 of VirtualBox Guest Additions...
Stopping VirtualBox Additions [FAILED]
(Cannot unload module vboxguest)
Removing existing VirtualBox non-DKMS kernel modules[  OK  ]
[  OK  ] VirtualBox Guest Addition service [  OK  ]
Copying additional installer modules ...
Installing additional modules ...
vboxadd.sh: Building Guest Additions kernel modules.
vboxadd.sh: You should restart your guest to make sure the new modules are actually used.
vboxadd.sh: Starting the VirtualBox Guest Additions.

Could not find the X.Org or XFree86 Window System, skipping.


Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
==> collabn1: Checking for guest additions in VM...
    collabn1: The guest additions on this VM do not match the installed version of
    collabn1: VirtualBox! In most cases this is fine, but in rare cases it can
    collabn1: prevent things such as shared folders from working properly. If you see
    collabn1: shared folder errors, please make sure the guest additions within the
    collabn1: virtual machine match the version of VirtualBox you have installed on
    collabn1: your host and reload your VM.
    collabn1:
    collabn1: Guest Additions Version: 5.0.0
    collabn1: VirtualBox Version: 5.1
==> collabn1: Setting hostname...
==> collabn1: Configuring and enabling network interfaces...
==> collabn1: Mounting shared folders...
    collabn1: /vagrant => E:/racattack-ansible-oracle
    collabn1: /media/sf_12cR1 => E:/racattack-ansible-oracle/12cR1
==> collabn1: Detected mount owner ID within mount options. (uid: 54320 guestpath: /media/sf_12cR1)
==> collabn1: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/sf_12cR1)
    collabn1: /media/stagefiles => E:/racattack-ansible-oracle/stagefiles
==> collabn1: Detected mount owner ID within mount options. (uid: 1000 guestpath: /media/stagefiles)
==> collabn1: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/stagefiles)
==> collabn1: Running provisioner: shell...
    collabn1: Running: inline script
==> collabn1: overwriting /etc/resolv.conf
==> collabn1: Running provisioner: shell...
    collabn1: Running: inline script
==> collabn1: wrote key file "/etc/rndc.key"
==> collabn1: Stopping named:
==> collabn1: [  OK  ]
==> collabn1: Starting named:
==> collabn1: [  OK  ]
==> collabn1: successfully completed named steps

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant status

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Current machine states:

collabn2                  running (virtualbox)
collabn1                  running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vboxmanage list runningvms
"hashicorp_default_1490531708969_67077" {ab780940-aeef-4e4c-a868-6b5c6f81af2b}
"collabn2.1703260556" {71023f40-8635-4664-8c6e-730a1bfbe0e1}
"collabn1.1703260604" {d20095ef-e5ed-4554-96e0-0168125b3dd8}

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant ssh collabn1

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Last login: Sun Mar 26 13:20:43 2017 from 10.0.2.2
[vagrant@collabn1 ~]$ ls -al
total 32
drwx------  3 vagrant vagrant 4096 Mar 26 13:10 .
drwxr-xr-x. 5 root    root    4096 Aug  4  2015 ..
-rw-------  1 vagrant vagrant  139 Mar 26 13:22 .bash_history
-rw-r--r--  1 vagrant vagrant   18 May  7  2015 .bash_logout
-rw-r--r--  1 vagrant vagrant  176 May  7  2015 .bash_profile
-rw-r--r--  1 vagrant vagrant  124 May  7  2015 .bashrc
-rw-r--r--  1 vagrant vagrant  121 Dec 20  2012 .kshrc
drwx------  2 vagrant vagrant 4096 Aug  4  2015 .ssh

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Don't know the oracle password, and it is not the same as the username.
[vagrant@collabn1 ~]$ su - oracle
Password:
su: incorrect password
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

[vagrant@collabn1 ~]$ sudo su - oracle
[oracle@collabn1 ~]$ exit
logout
[vagrant@collabn1 ~]$ sudo su -
[root@collabn1 ~]# cat /etc/passwd | column -t -s :
root       x  0      0      root                          /root                /bin/bash
bin        x  1      1      bin                           /bin                 /sbin/nologin
daemon     x  2      2      daemon                        /sbin                /sbin/nologin
adm        x  3      4      adm                           /var/adm             /sbin/nologin
lp         x  4      7      lp                            /var/spool/lpd       /sbin/nologin
sync       x  5      0      sync                          /sbin                /bin/sync
shutdown   x  6      0      shutdown                      /sbin                /sbin/shutdown
halt       x  7      0      halt                          /sbin                /sbin/halt
mail       x  8      12     mail                          /var/spool/mail      /sbin/nologin
uucp       x  10     14     uucp                          /var/spool/uucp      /sbin/nologin
operator   x  11     0      operator                      /root                /sbin/nologin
games      x  12     100    games                         /usr/games           /sbin/nologin
gopher     x  13     30     gopher                        /var/gopher          /sbin/nologin
ftp        x  14     50     FTP User                      /var/ftp             /sbin/nologin
nobody     x  99     99     Nobody                        /                    /sbin/nologin
vcsa       x  69     69     virtual console memory owner  /dev                 /sbin/nologin
rpc        x  32     32     Rpcbind Daemon                /var/cache/rpcbind   /sbin/nologin
rpcuser    x  29     29     RPC Service User              /var/lib/nfs         /sbin/nologin
nfsnobody  x  65534  65534  Anonymous NFS User            /var/lib/nfs         /sbin/nologin
saslauth   x  499    76     "Saslauthd user"              /var/empty/saslauth  /sbin/nologin
postfix    x  89     89     /var/spool/postfix            /sbin/nologin
sshd       x  74     74     Privilege-separated SSH       /var/empty/sshd      /sbin/nologin
named      x  25     25     Named                         /var/named           /sbin/nologin
dbus       x  81     81     System message bus            /                    /sbin/nologin
oracle     x  54321  54321  /home/oracle                  /bin/bash
applmgr    x  54322  54321  /home/applmgr                 /bin/bash
puppet     x  52     52     Puppet                        /var/lib/puppet      /sbin/nologin
vboxadd    x  498    1      /var/run/vboxadd              /bin/false
vagrant    x  1000   1000   /home/vagrant                 /bin/bash
[root@collabn1 ~]# df -h
Filesystem        Size  Used Avail Use% Mounted on
/dev/sda3          27G  2.6G   23G  11% /
tmpfs             1.5G     0  1.5G   0% /dev/shm
/dev/sda1         485M   93M  367M  21% /boot
/dev/sdb1          50G  180M   48G   1% /u01
vagrant           466G  370G   97G  80% /vagrant
media_sf_12cR1    466G  370G   97G  80% /media/sf_12cR1
media_stagefiles  466G  370G   97G  80% /media/stagefiles
[root@collabn1 ~]# cd /media/sf_12cR1/
[root@collabn1 sf_12cR1]# ls -l
total 4962982
-rwxrwxrwx 1 54320 oinstall          0 Mar 26 12:45 keep
-rwxrwxrwx 1 54320 oinstall 1673544724 Mar 25 20:20 linuxamd64_12102_database_1of2.zip
-rwxrwxrwx 1 54320 oinstall 1014530602 Mar 25 20:32 linuxamd64_12102_database_2of2.zip
-rwxrwxrwx 1 54320 oinstall 1747043545 Mar 25 20:44 linuxamd64_12102_grid_1of2.zip
-rwxrwxrwx 1 54320 oinstall  646972897 Mar 25 20:42 linuxamd64_12102_grid_2of2.zip
-rwxrwxrwx 1 54320 oinstall        181 Mar 26 12:45 readme.txt
[root@collabn1 sf_12cR1]#

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant ssh collabn2

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave
Last login: Sun Mar 26 13:10:38 2017 from 10.0.2.2
[vagrant@collabn2 ~]$ ls -al
total 32
drwx------  3 vagrant vagrant 4096 Mar 26 13:14 .
drwxr-xr-x. 5 root    root    4096 Aug  4  2015 ..
-rw-------  1 vagrant vagrant   56 Mar 26 13:14 .bash_history
-rw-r--r--  1 vagrant vagrant   18 May  7  2015 .bash_logout
-rw-r--r--  1 vagrant vagrant  176 May  7  2015 .bash_profile
-rw-r--r--  1 vagrant vagrant  124 May  7  2015 .bashrc
-rw-r--r--  1 vagrant vagrant  121 Dec 20  2012 .kshrc
drwx------  2 vagrant vagrant 4096 Aug  4  2015 .ssh
[vagrant@collabn2 ~]$ sudo su -
[root@collabn2 ~]# cat /etc/passwd | column -t -s :
root       x  0      0      root                          /root                /bin/bash
bin        x  1      1      bin                           /bin                 /sbin/nologin
daemon     x  2      2      daemon                        /sbin                /sbin/nologin
adm        x  3      4      adm                           /var/adm             /sbin/nologin
lp         x  4      7      lp                            /var/spool/lpd       /sbin/nologin
sync       x  5      0      sync                          /sbin                /bin/sync
shutdown   x  6      0      shutdown                      /sbin                /sbin/shutdown
halt       x  7      0      halt                          /sbin                /sbin/halt
mail       x  8      12     mail                          /var/spool/mail      /sbin/nologin
uucp       x  10     14     uucp                          /var/spool/uucp      /sbin/nologin
operator   x  11     0      operator                      /root                /sbin/nologin
games      x  12     100    games                         /usr/games           /sbin/nologin
gopher     x  13     30     gopher                        /var/gopher          /sbin/nologin
ftp        x  14     50     FTP User                      /var/ftp             /sbin/nologin
nobody     x  99     99     Nobody                        /                    /sbin/nologin
vcsa       x  69     69     virtual console memory owner  /dev                 /sbin/nologin
rpc        x  32     32     Rpcbind Daemon                /var/cache/rpcbind   /sbin/nologin
rpcuser    x  29     29     RPC Service User              /var/lib/nfs         /sbin/nologin
nfsnobody  x  65534  65534  Anonymous NFS User            /var/lib/nfs         /sbin/nologin
saslauth   x  499    76     "Saslauthd user"              /var/empty/saslauth  /sbin/nologin
postfix    x  89     89     /var/spool/postfix            /sbin/nologin
sshd       x  74     74     Privilege-separated SSH       /var/empty/sshd      /sbin/nologin
named      x  25     25     Named                         /var/named           /sbin/nologin
dbus       x  81     81     System message bus            /                    /sbin/nologin
oracle     x  54321  54321  /home/oracle                  /bin/bash
applmgr    x  54322  54321  /home/applmgr                 /bin/bash
puppet     x  52     52     Puppet                        /var/lib/puppet      /sbin/nologin
vboxadd    x  498    1      /var/run/vboxadd              /bin/false
vagrant    x  1000   1000   /home/vagrant                 /bin/bash
[root@collabn2 ~]# df -h
Filesystem        Size  Used Avail Use% Mounted on
/dev/sda3          27G  2.6G   23G  11% /
tmpfs             1.5G     0  1.5G   0% /dev/shm
/dev/sda1         485M   93M  367M  21% /boot
/dev/sdb1          50G  180M   48G   1% /u01
vagrant           466G  370G   97G  80% /vagrant
media_sf_12cR1    466G  370G   97G  80% /media/sf_12cR1
media_stagefiles  466G  370G   97G  80% /media/stagefiles
[root@collabn2 ~]# cd /media/sf_12cR1/
[root@collabn2 sf_12cR1]# ls -l
total 4962982
-rwxrwxrwx 1 54320 oinstall          0 Mar 26 12:45 keep
-rwxrwxrwx 1 54320 oinstall 1673544724 Mar 25 20:20 linuxamd64_12102_database_1of2.zip
-rwxrwxrwx 1 54320 oinstall 1014530602 Mar 25 20:32 linuxamd64_12102_database_2of2.zip
-rwxrwxrwx 1 54320 oinstall 1747043545 Mar 25 20:44 linuxamd64_12102_grid_1of2.zip
-rwxrwxrwx 1 54320 oinstall  646972897 Mar 25 20:42 linuxamd64_12102_grid_2of2.zip
-rwxrwxrwx 1 54320 oinstall        181 Mar 26 12:45 readme.txt
[root@collabn2 sf_12cR1]#

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant ssh-config

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Host collabn2
  HostName 127.0.0.1
  User vagrant
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile C:/Users/falcon/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host collabn1
  HostName 127.0.0.1
  User vagrant
  Port 2201
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile C:/Users/falcon/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL


falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$

Using username "vagrant".
Authenticating with public key "imported-openssh-key"
Last login: Sun Mar 26 14:00:36 2017 from 10.0.2.2
[vagrant@collabn1 ~]$

Using username "vagrant".
Authenticating with public key "imported-openssh-key"
Last login: Sun Mar 26 14:01:14 2017 from 10.0.2.2
[vagrant@collabn2 ~]$

February 9, 2017

Steps to Recreate Central Inventory in Real Applications Clusters (Doc ID 413939.1)

Filed under: 12c,RAC — mdinh @ 3:13 am

$ echo $ORACLE_HOME

/u01/app/oracle/product/12.1.0/db_1

$ $ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME

Oracle Interim Patch Installer version 12.1.0.1.3
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/db_1/oraInst.loc
OPatch version    : 12.1.0.1.3
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/opatch/opatch2017-02-08_15-56-03PM_1.log

List of Homes on this system:

Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
   Oracle Home dir. path does not exist in Central Inventory
   Oracle Home is a symbolic link
   Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73

This happened due to an error during install – oraInventory mismatch.

$ cat /etc/oraInst.loc
inst_group=oinstall
inventory_loc=/u01/app/oraInventory

$ cd /u01/software/database
$ export DISTRIB=`pwd`
$ ./runInstaller -silent -showProgress -waitforcompletion -force -ignorePrereq -responseFile $DISTRIB/response/db_install.rsp \
> oracle.install.option=INSTALL_DB_SWONLY \
> UNIX_GROUP_NAME=oinstall \
> INVENTORY_LOCATION=/u01/app/oracle/oraInventory \

Back up oraInventory on both nodes, then attachHome.
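
The backup can be as simple as the following, run on each node (inventory location from /etc/oraInst.loc above):

$ tar -czf /tmp/oraInventory_`hostname -s`_`date +%Y%m%d`.tar.gz -C /u01/app oraInventory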

$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -silent -ignoreSysPrereqs -attachHome \
ORACLE_HOME="/u02/app/12.1.0/grid" ORACLE_HOME_NAME="OraGI12Home1" \
LOCAL_NODE="node01" CLUSTER_NODES="{node01,node02}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'AttachHome' was successful.

$ ./runInstaller -silent -ignoreSysPrereqs -attachHome \
ORACLE_HOME="/u01/app/oracle/product/12.1.0/db_1" ORACLE_HOME_NAME="OraDB12Home1" \
LOCAL_NODE="node01" CLUSTER_NODES="{node01,node02}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'AttachHome' was successful.

February 8, 2017

runcluvfy.sh -pre crsinst NTP failed PRVF-07590 PRVG-01017

Filed under: 12c,RAC — mdinh @ 12:56 pm

12c (12.1.0.2.0) RAC Oracle Linux Server release 7.3
/u01/software/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

Starting Clock synchronization checks using Network Time Protocol(NTP)...

Checking existence of NTP configuration file "/etc/ntp.conf" across nodes
  Node Name                             File exists?            
  ------------------------------------  ------------------------
  node02                                yes                     
  node01                                yes                     
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?                
  ------------------------------------  ------------------------
  node02                                no                      
  node01                                yes                     
PRVF-7590 : "ntpd" is not running on node "node02"
PRVG-1017 : NTP configuration file is present on nodes "node02" on which NTP daemon or service was not running
Result: Clock synchronization check using Network Time Protocol(NTP) failed

NTP was indeed running on both nodes.
The issue is that /var/run/ntpd.pid does not exist on the failed node:
ntpd was started with incorrect options (without -p /var/run/ntpd.pid, as shown below).

GOOD:

# cat /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# systemctl status ntpd.service
ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-02-03 20:37:18 CST; 3 days ago
 Main PID: 22517 (ntpd)
   CGroup: /system.slice/ntpd.service
           /usr/sbin/ntpd -u ntp:ntp -x -u ntp:ntp -p /var/run/ntpd.pid

# ll /var/run/ntpd.*
-rw-r--r-- 1 root root 5 Feb  3 20:37 /var/run/ntpd.pid

BAD:

# cat /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# systemctl status ntpd.service
ntpd.service - Network Time Service           
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-02-03 18:10:23 CST; 3 days ago
 Main PID: 22403 (ntpd)
   CGroup: /system.slice/ntpd.service
           /usr/sbin/ntpd -u ntp:ntp -g           

# ll /var/run/ntpd.*
ls: cannot access /var/run/ntpd.*: No such file or directory

SOLUTION:

Restart ntpd on the failed node so it picks up the options from /etc/sysconfig/ntpd.
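
For example:

# systemctl restart ntpd.service
# ll /var/run/ntpd.*
# systemctl status ntpd.service | grep ntpd.pid

Then re-run runcluvfy.sh to confirm the NTP check passes.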

February 5, 2017

12c Database spfile Parameter alias is not created in ASM Diskgroup (Doc ID 1950769.1)

Filed under: 12c,RAC — mdinh @ 8:41 pm

This is new as of 12.1.0.2.

$ srvctl config database -d hawk
Database unique name: hawk
Database name: hawk
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/HAWK/PARAMETERFILE/spfile.264.934897017
Password file: +DATA/hawk/orapwhawk
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: 
Database instances: hawk1,hawk2
Configured nodes: hawk01,hawk02
Database is administrator managed

The alias will need to be created manually:

SQL> ALTER DISKGROUP dg1 ADD ALIAS '+DG1/rac12c/spfilerac12c.ora'  FOR  '+dg1/rac12c/parameterfile/spfile.271.860077229'; 

WARNING:
I did not create the alias and was curious why it was not created automatically. Now I know.

What I did: echo "SPFILE='+DATA/HAWK/PARAMETERFILE/spfile.264.934897017'" > $ORACLE_HOME/dbs/init$ORACLE_SID.ora
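
Had the alias been created, it would look something like this for this database (names from the srvctl output above), with the init.ora then pointing at the alias instead of the file name:

SQL> ALTER DISKGROUP data ADD ALIAS '+DATA/hawk/spfilehawk.ora' FOR '+DATA/HAWK/PARAMETERFILE/spfile.264.934897017';

$ echo "SPFILE='+DATA/hawk/spfilehawk.ora'" > $ORACLE_HOME/dbs/init$ORACLE_SID.ora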

Setting SPFILE Parameter Values for Oracle RAC
http://docs.oracle.com/database/121/RACAD/admin.htm#RACAD815

October 22, 2016

Oracle Health Check

Filed under: 11g,oracle,RAC — mdinh @ 12:44 pm

Currently, I am working on a health check for ODA and find there are too many tools with disparate information.

I am sure there are more than the ones listed below, but I stopped searching.

ODA Oracle Database Appliance orachk Healthcheck (Doc ID 2126926.1)
Multiplexing Redolog and Control File on ODA (Doc ID 2086289.1)

ORAchk – Health Checks for the Oracle Stack (Doc ID 1268927.2)
How to Perform a Health Check on the Database (Doc ID 122669.1)
Health Monitor (Doc ID 466920.1)

Oracle Configuration Manager Quick Start Guide (Doc ID 728988.5)
Pre-12+ OCM Collectors to Be Decommissioned Summer of 2015 (Doc ID 1986521.1)

cluvfy comp healthcheck

One example found: ORAchk will report if fewer than 3 SCANs are configured, while cluvfy comp healthcheck (11.2) does not.

Interesting side track: < 3 not escaped is ❤

Complete cluvfy comp healthcheck  results plus how to create database user CVUSYS (WARNING: ~1600 lines).
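
For reference, the healthcheck is run along these lines (11.2 syntax; check cluvfy comp -help for the exact options on your version):

$ cluvfy comp healthcheck -collect cluster -bestpractice -deviations -html -save -savedir /tmp/healthcheck
$ cluvfy comp healthcheck -collect database -db emu -bestpractice -deviations -save -savedir /tmp/healthcheck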

Some failures from cluvfy comp healthcheck.

******************************************************************************************
Database recommendation checks for "emu"
******************************************************************************************

Verification Check        :  DB Log Mode
Verification Description  :  Checks the database log archiving mode
Verification Result       :  NOT MET
Verification Summary      :  Check for DB Log Mode failed
Additional Details        :  If the database is in log archiving mode, then it is
                             always desirable and advisable to upgrade the database in
                             noarchivelog mode as that will reduce the time taken to
                             upgrade the database. After the upgrade, the database can
                             be reverted to the archivelog mode.
References (URLs/Notes)   :  https://support.oracle.com/CSP/main/article?cmd=show&type=N
                             OT&id=429825.1

Database(Instance)  Status    Expected Value                Actual Value
------------------------------------------------------------------------------------------

emu                 FAILED    db_log_mode = NOARCHIVELOG    db_log_mode = ARCHIVELOG

__________________________________________________________________________________________

Database(Instance)  Error details
------------------------------------------------------------------------------------------

emu                 Error - NOARCHIVELOG mode is recommended when upgrading
                    Cause - Cause Of Problem Not Available
                    Action - User Action Not Available
__________________________________________________________________________________________

Verification Check        :  Users Granted CONNECT Role
Verification Description  :  Checks for the presence of any users with CONNECT role
Verification Result       :  NOT MET
Verification Summary      :  Check for Users Granted CONNECT Role failed

Database(Instance)  Status    Expected Value                Actual Value
------------------------------------------------------------------------------------------

emu                 FAILED    connect_role_grantees = 0     connect_role_grantees = 5

__________________________________________________________________________________________

Database(Instance)  Error details
------------------------------------------------------------------------------------------

emu                 Error - CONNECT role granted users found
                    Cause - Cause Of Problem Not Available
                    Action - User Action Not Available
__________________________________________________________________________________________

Does Oracle itself need a health check?

October 9, 2016

cluvfy is your friend

Filed under: RAC — mdinh @ 11:54 pm

Just a reminder to self to use cluvfy

olsnodes -i -n -s -t
grep 'master node' $CRS_HOME/log/`hostname -s`/cssd/ocssd.*|tail -1

cluvfy stage -pre help
cluvfy stage -post  help

++++++++++


[grid@rac01:+ASM1:/home/grid]
$ olsnodes -i -n -s -t
rac01   1       rac01-vip       Active  Unpinned
rac02   2       rac02-vip       Active  Unpinned

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ env|grep HOME
CRS_HOME=/u01/app/11.2.0.4/grid
HOME=/home/grid
XAG_HOME=/u01/app/grid/xag
ORACLE_HOME=/u01/app/11.2.0.4/grid

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ grep 'master node' $CRS_HOME/log/`hostname -s`/cssd/ocssd.*|tail -1
/u01/app/11.2.0.4/grid/log/rac01/cssd/ocssd.log:2016-10-09 10:48:55.837: 
[    CSSD][28161792]clssgmCMReconfig: reconfiguration successful, 
incarnation 371471500 with 2 nodes, local node number 1, master node number 1

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -pre help

ERROR:
Unexpected symbol "help". See usage for detail.

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

SYNTAX (for Stages):
cluvfy stage -pre cfs -n <node_list> -s <storageID_list> [-verbose]
cluvfy stage -pre
                   crsinst -file <config_file> [-fixup [-fixupdir <fixup_dir>]] [-verbose]
                   crsinst -upgrade [-n <node_list>] [-rolling] -src_crshome <src_crshome> -dest_crshome <dest_crshome>
                           -dest_version <dest_version> [-fixup [-fixupdir <fixup_dir>]] [-verbose]
                   crsinst -n <node_list> [-r {10gR1|10gR2|11gR1|11gR2}]
                           [-c <ocr_location_list>] [-q <voting_disk_list>]
                           [-osdba <osdba_group>] [-orainv <orainventory_group>]
                           [-asm [-asmgrp <asmadmin_group>] [-asmdev <asm_device_list>]] [-crshome <crs_home>]
                           [-fixup [-fixupdir <fixup_dir>]] [-networks <network_list>]
                           [-verbose]
cluvfy stage -pre acfscfg -n <node_list> [-asmdev <asm_device_list>] [-verbose]
cluvfy stage -pre
                   dbinst -n <node_list> [-r {10gR1|10gR2|11gR1|11gR2}] [-osdba <osdba_group>] [-d <oracle_home>]
                          [-fixup [-fixupdir <fixup_dir>]] [-verbose]
                   dbinst -upgrade -src_dbhome <src_dbhome> [-dbname <dbname>] -dest_dbhome <dest_dbhome> -dest_version <dest_version>
                          [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -pre dbcfg -n <node_list> -d <oracle_home> [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -pre hacfg [-osdba <osdba_group>] [-orainv <orainventory_group>] [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -pre nodeadd -n <node_list> [-vip <vip_list>] [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -post hwos -n <node_list> [-s <storageID_list>] [-verbose]
cluvfy stage -post cfs -n <node_list> -f <file_system> [-verbose]
cluvfy stage -post crsinst -n <node_list> [-verbose]
cluvfy stage -post acfscfg -n <node_list> [-verbose]
cluvfy stage -post hacfg [-verbose]
cluvfy stage -post nodeadd -n <node_list> [-verbose]
cluvfy stage -post nodedel -n <node_list> [-verbose]

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -post  help

ERROR:
Unexpected symbol "help". See usage for detail.

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

SYNTAX (for Stages):
cluvfy stage -pre cfs -n <node_list> -s <storageID_list> [-verbose]
cluvfy stage -pre
                   crsinst -file <config_file> [-fixup [-fixupdir <fixup_dir>]] [-verbose]
                   crsinst -upgrade [-n <node_list>] [-rolling] -src_crshome <src_crshome> -dest_crshome <dest_crshome>
                           -dest_version <dest_version> [-fixup [-fixupdir <fixup_dir>]] [-verbose]
                   crsinst -n <node_list> [-r {10gR1|10gR2|11gR1|11gR2}]
                           [-c <ocr_location_list>] [-q <voting_disk_list>]
                           [-osdba <osdba_group>] [-orainv <orainventory_group>]
                           [-asm [-asmgrp <asmadmin_group>] [-asmdev <asm_device_list>]] [-crshome <crs_home>]
                           [-fixup [-fixupdir <fixup_dir>]] [-networks <network_list>]
                           [-verbose]
cluvfy stage -pre acfscfg -n <node_list> [-asmdev <asm_device_list>] [-verbose]
cluvfy stage -pre
                   dbinst -n <node_list> [-r {10gR1|10gR2|11gR1|11gR2}] [-osdba <osdba_group>] [-d <oracle_home>]
                          [-fixup [-fixupdir <fixup_dir>]] [-verbose]
                   dbinst -upgrade -src_dbhome <src_dbhome> [-dbname <dbname>] -dest_dbhome <dest_dbhome> -dest_version <dest_version>
                          [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -pre dbcfg -n <node_list> -d <oracle_home> [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -pre hacfg [-osdba <osdba_group>] [-orainv <orainventory_group>] [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -pre nodeadd -n <node_list> [-vip <vip_list>] [-fixup [-fixupdir <fixup_dir>]] [-verbose]
cluvfy stage -post hwos -n <node_list> [-s <storageID_list>] [-verbose]
cluvfy stage -post cfs -n <node_list> -f <file_system> [-verbose]
cluvfy stage -post crsinst -n <node_list> [-verbose]
cluvfy stage -post acfscfg -n <node_list> [-verbose]
cluvfy stage -post hacfg [-verbose]
cluvfy stage -post nodeadd -n <node_list> [-verbose]
cluvfy stage -post nodedel -n <node_list> [-verbose]

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -pre crsinst -n rac01,rac02 -fixup

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "rac01"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity for interface "eth2"
Node connectivity passed for interface "eth2"
TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac02:/u01/app/11.2.0.4/grid,rac02:/tmp"
Free disk space check passed for "rac01:/u01/app/11.2.0.4/grid,rac01:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed
Default user file creation mask check failed
Check failed on nodes:
        rac02,rac01
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Time zone consistency check passed

Starting check for Reverse path filter setting ...

Check for Reverse path filter setting passed

Pre-check for cluster services setup was unsuccessful on all the nodes.

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ umask
0002
[grid@rac01:+ASM1:/home/grid]
$ ssh rac02 "umask"
0022
[grid@rac01:+ASM1:/home/grid]
$

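The only failed pre-check above is the default file creation mask, and the umask comparison shows why: the nodes differ, 0002 on rac01 versus 0022 on rac02, where 0022 is what Oracle expects for the software owner. A minimal fix sketch, assuming the grid user sources ~/.bash_profile on both nodes (adjust the profile file to your environment):

# Append "umask 0022" to the grid user's profile on each node if it is not already there
for host in rac01 rac02; do
  ssh ${host} 'grep -q "^umask 0022" ~/.bash_profile || echo "umask 0022" >> ~/.bash_profile'
done

# Verify in a login shell so the profile is actually sourced
for host in rac01 rac02; do
  echo -n "${host}: "; ssh ${host} 'bash -l -c umask'
done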
+++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -post hwos -n rac01,rac02

Performing post-checks for hardware and operating system setup

Checking node reachability...
Node reachability check passed from node "rac01"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity for interface "eth2"
Node connectivity passed for interface "eth2"
TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed

Checking shared storage accessibility...

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sde                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdd                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdg                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdh                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdi                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdf                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdb                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdc                              rac02 rac01

  ACFS                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /acfsmount                            rac02 rac01


Shared storage check was successful on nodes "rac02,rac01"

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Post-check for hardware and operating system setup was successful.
[grid@rac01:+ASM1:/home/grid]
$
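The post-hwos stage bundles node connectivity, multicast, and shared storage verification; when only one area needs re-checking, the matching component check can be run on its own. A hedged sketch, reusing the node and disk names from the output above:

# Re-run only the shared storage accessibility check for one device
cluvfy comp ssa -n rac01,rac02 -s /dev/sdb -verbose

# Re-run only the node connectivity check
cluvfy comp nodecon -n rac01,rac02 -verbose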