Thinking Out Loud

March 26, 2017

racattack-ansible-oracle Up and Running

Filed under: RAC,Vagrant,VirtualBox — mdinh @ 2:04 pm

From a time long ago – https://mdinh.wordpress.com/2016/12/04/toys-for-when-you-i-are-bored/

With help from oravirt, I was able to install RAC VMs.

At this point, only the VM servers have been created and GI/DB are not installed; that’s coming up at some point.

Some clarification for setup=standard vagrant provision:

setup=standard (a shell environment variable assignment)

vagrant provision (the executable being run)

This is where the confusion was at first: the environment-variable prefix works in a POSIX shell such as Git Bash, but not in Windows CMD, as the two attempts below show.

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ setup=standard vagrant provision

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51

======================================================================

E:\racattack-ansible-oracle>setup=standard vagrant provision
'setup' is not recognized as an internal or external command,
operable program or batch file.

E:\racattack-ansible-oracle>
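
In Windows CMD the variable has to be set with a separate command (set setup=standard, then vagrant provision on the next line); in PowerShell it would be $env:setup = "standard". Either way this assumes the Vagrantfile reads the value from the environment. In a POSIX shell such as Git Bash, both the one-shot prefix and an exported variable work; a minimal sketch:

# set the variable just for this one command
setup=standard vagrant provision

# or export it for the rest of the session
export setup=standard
vagrant provision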

Follow https://github.com/racattack/racattack-ansible-oracle

There were some errors, but it seems to be working fine.

Note: I used Git Bash this time around instead of Windows CMD.

One improvement I would make, if I ever get good enough on the subject, is to have the shared folders for linuxamd64_12102*.zip use existing locations.

The way most Vagrant projects are configured, you end up needing multiple copies of the same binaries.

Alternatively, edit the VM shared folder manually.
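
A minimal sketch of that manual alternative, assuming the VM is powered off and the zips already live in an existing host directory (the VM name, shared-folder name and staging path below are examples; check vboxmanage list vms and vboxmanage showvminfo for the real values):

VM="collabn1.1703260556"            # from: vboxmanage list vms
STAGE="E:/oracle/stage/12cR1"       # hypothetical existing location of the zips

vboxmanage sharedfolder remove "$VM" --name "media_sf_12cR1"
vboxmanage sharedfolder add    "$VM" --name "media_sf_12cR1" --hostpath "$STAGE" --automount

Keep in mind Vagrant may re-apply its own synced_folder definitions on the next vagrant reload, so the cleaner long-term fix is still to change the path in the Vagrantfile.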

falcon@falconidae MINGW64 /e
$ git clone --recursive https://github.com/racattack/racattack-ansible-oracle
Cloning into 'racattack-ansible-oracle'...
remote: Counting objects: 320, done.
Receiving objects:  79%remote: Total 320 (delta 0), reused 0 (delta 0), pack-reused 320
Receiving objects: 100% (320/320), 52.22 KiB | 0 bytes/s, done.
Resolving deltas: 100% (210/210), done.
Submodule 'stagefiles/ansible-oracle' (https://github.com/oravirt/ansible-oracle) registered for path 'stagefiles/ansible-oracle'
Cloning into 'E:/racattack-ansible-oracle/stagefiles/ansible-oracle'...
remote: Counting objects: 2061, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 2061 (delta 1), reused 0 (delta 0), pack-reused 2052
Receiving objects: 100% (2061/2061), 517.76 KiB | 0 bytes/s, done.
Resolving deltas: 100% (954/954), done.
Submodule path 'stagefiles/ansible-oracle': checked out '00651e0caf9a876fcefe51d21e44a6e78c313e76'

======================================================================

falcon@falconidae MINGW64 /e
$ cd racattack-ansible-oracle

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ ls -l
total 20
drwxr-xr-x 1 falcon 197121    0 Mar 26 05:45 12cR1/
-rw-r--r-- 1 falcon 197121 3863 Mar 26 05:45 README.md
drwxr-xr-x 1 falcon 197121    0 Mar 26 05:45 stagefiles/
-rw-r--r-- 1 falcon 197121 9706 Mar 26 05:45 Vagrantfile

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vi Vagrantfile

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ ls -l 12cR1/*.zip
-rw-r--r-- 1 falcon 197121 1673544724 Mar 25 13:20 12cR1/linuxamd64_12102_database_1of2.zip
-rw-r--r-- 1 falcon 197121 1014530602 Mar 25 13:32 12cR1/linuxamd64_12102_database_2of2.zip
-rw-r--r-- 1 falcon 197121 1747043545 Mar 25 13:44 12cR1/linuxamd64_12102_grid_1of2.zip
-rw-r--r-- 1 falcon 197121  646972897 Mar 25 13:42 12cR1/linuxamd64_12102_grid_2of2.zip

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant status

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave
on first boot shared disks will be created, this will take some time

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Current machine states:

collabn2                  not created (virtualbox)
collabn1                  not created (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant up

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave
on first boot shared disks will be created, this will take some time

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Bringing machine 'collabn2' up with 'virtualbox' provider...
Bringing machine 'collabn1' up with 'virtualbox' provider...
==> collabn2: Box 'kikitux/oracle6-racattack' could not be found. Attempting to find and install...
    collabn2: Box Provider: virtualbox
    collabn2: Box Version: >= 0
==> collabn2: Loading metadata for box 'kikitux/oracle6-racattack'
    collabn2: URL: https://atlas.hashicorp.com/kikitux/oracle6-racattack
==> collabn2: Adding box 'kikitux/oracle6-racattack' (v16.01.01) for provider: virtualbox
    collabn2: Downloading: https://atlas.hashicorp.com/kikitux/boxes/oracle6-racattack/versions/16.01.01/providers/virtualbox.box
    collabn2:
==> collabn2: Successfully added box 'kikitux/oracle6-racattack' (v16.01.01) for 'virtualbox'!
==> collabn2: Importing base box 'kikitux/oracle6-racattack'...
==> collabn2: Matching MAC address for NAT networking...
==> collabn2: Checking if box 'kikitux/oracle6-racattack' is up to date...
==> collabn2: Setting the name of the VM: collabn2.1703260556
==> collabn2: Fixed port collision for 22 => 2222. Now on port 2200.
==> collabn2: Clearing any previously set network interfaces...
==> collabn2: Preparing network interfaces based on configuration...
    collabn2: Adapter 1: nat
    collabn2: Adapter 2: hostonly
    collabn2: Adapter 3: hostonly
==> collabn2: Forwarding ports...
    collabn2: 22 (guest) => 2200 (host) (adapter 1)
==> collabn2: Running 'pre-boot' VM customizations...
==> collabn2: Booting VM...
==> collabn2: Waiting for machine to boot. This may take a few minutes...
    collabn2: SSH address: 127.0.0.1:2200
    collabn2: SSH username: vagrant
    collabn2: SSH auth method: private key
    collabn2: Warning: Remote connection disconnect. Retrying...
==> collabn2: Machine booted and ready!
[collabn2] GuestAdditions versions on your host (5.1.18) and guest (5.0.0) do not match.
Loaded plugins: security

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Setting up Install Process
http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64 error was
14: PYCURL ERROR 6 - "Couldn't resolve host 'mirrors.fedoraproject.org'"
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Package kernel-uek-devel-2.6.39-400.250.9.el6uek.x86_64 already installed and latest version
Package gcc-4.4.7-16.el6.x86_64 already installed and latest version
Package 1:make-3.81-20.el6.x86_64 already installed and latest version
Package 4:perl-5.10.1-141.el6.x86_64 already installed and latest version
Package bzip2-1.0.5-7.el6_0.x86_64 already installed and latest version
Nothing to do

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Copy iso file D:\Program Files\Oracle\VirtualBox\VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 5.1.18 - guest version is 5.0.0
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.1.18 Guest Additions for Linux...........
VirtualBox Guest Additions installer
Removing installed version 5.0.0 of VirtualBox Guest Additions...
Stopping VirtualBox Additions [FAILED]
(Cannot unload module vboxguest)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Removing existing VirtualBox non-DKMS kernel modules[  OK  ]
[  OK  ] VirtualBox Guest Addition service [  OK  ]
Copying additional installer modules ...
Installing additional modules ...
vboxadd.sh: Building Guest Additions kernel modules.
vboxadd.sh: You should restart your guest to make sure the new modules are actually used.
vboxadd.sh: Starting the VirtualBox Guest Additions.

Could not find the X.Org or XFree86 Window System, skipping.


Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
==> collabn2: Checking for guest additions in VM...
    collabn2: The guest additions on this VM do not match the installed version of
    collabn2: VirtualBox! In most cases this is fine, but in rare cases it can
    collabn2: prevent things such as shared folders from working properly. If you see
    collabn2: shared folder errors, please make sure the guest additions within the
    collabn2: virtual machine match the version of VirtualBox you have installed on
    collabn2: your host and reload your VM.
    collabn2:
    collabn2: Guest Additions Version: 5.0.0
    collabn2: VirtualBox Version: 5.1
==> collabn2: Setting hostname...
==> collabn2: Configuring and enabling network interfaces...
==> collabn2: Mounting shared folders...
    collabn2: /vagrant => E:/racattack-ansible-oracle
    collabn2: /media/sf_12cR1 => E:/racattack-ansible-oracle/12cR1
==> collabn2: Detected mount owner ID within mount options. (uid: 54320 guestpath: /media/sf_12cR1)
==> collabn2: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/sf_12cR1)
    collabn2: /media/stagefiles => E:/racattack-ansible-oracle/stagefiles
==> collabn2: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/stagefiles)
==> collabn2: Running provisioner: shell...
    collabn2: Running: inline script
==> collabn2: overwriting /etc/resolv.conf
==> collabn2: Running provisioner: shell...
    collabn2: Running: inline script
==> collabn2: Stopping named:
==> collabn2: [  OK  ]
==> collabn2: wrote key file "/etc/rndc.key"
==> collabn2: Stopping named:
==> collabn2: [  OK  ]
==> collabn2: Starting named:
==> collabn2: [  OK  ]
==> collabn2: successfully completed named steps
==> collabn1: Box 'kikitux/oracle6-racattack' could not be found. Attempting to find and install...
    collabn1: Box Provider: virtualbox
    collabn1: Box Version: >= 0
==> collabn1: Loading metadata for box 'kikitux/oracle6-racattack'
    collabn1: URL: https://atlas.hashicorp.com/kikitux/oracle6-racattack
==> collabn1: Adding box 'kikitux/oracle6-racattack' (v16.01.01) for provider: virtualbox
==> collabn1: Importing base box 'kikitux/oracle6-racattack'...
==> collabn1: Matching MAC address for NAT networking...
==> collabn1: Checking if box 'kikitux/oracle6-racattack' is up to date...
==> collabn1: Setting the name of the VM: collabn1.1703260604
==> collabn1: Fixed port collision for 22 => 2222. Now on port 2201.
==> collabn1: Clearing any previously set network interfaces...
==> collabn1: Preparing network interfaces based on configuration...
    collabn1: Adapter 1: nat
    collabn1: Adapter 2: hostonly
    collabn1: Adapter 3: hostonly
==> collabn1: Forwarding ports...
    collabn1: 22 (guest) => 2201 (host) (adapter 1)
==> collabn1: Running 'pre-boot' VM customizations...
==> collabn1: Booting VM...
==> collabn1: Waiting for machine to boot. This may take a few minutes...
    collabn1: SSH address: 127.0.0.1:2201
    collabn1: SSH username: vagrant
    collabn1: SSH auth method: private key
    collabn1: Warning: Remote connection disconnect. Retrying...
==> collabn1: Machine booted and ready!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[collabn1] GuestAdditions versions on your host (5.1.18) and guest (5.0.0) do not match.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Loaded plugins: security
Setting up Install Process
http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'public-yum.oracle.com'"
Trying other mirror.
Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64 error was
14: PYCURL ERROR 6 - "Couldn't resolve host 'mirrors.fedoraproject.org'"
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Package kernel-uek-devel-2.6.39-400.250.9.el6uek.x86_64 already installed and latest version
Package gcc-4.4.7-16.el6.x86_64 already installed and latest version
Package 1:make-3.81-20.el6.x86_64 already installed and latest version
Package 4:perl-5.10.1-141.el6.x86_64 already installed and latest version
Package bzip2-1.0.5-7.el6_0.x86_64 already installed and latest version
Nothing to do
Copy iso file D:\Program Files\Oracle\VirtualBox\VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 5.1.18 - guest version is 5.0.0
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.1.18 Guest Additions for Linux...........
VirtualBox Guest Additions installer
Removing installed version 5.0.0 of VirtualBox Guest Additions...
Stopping VirtualBox Additions [FAILED]
(Cannot unload module vboxguest)
Removing existing VirtualBox non-DKMS kernel modules[  OK  ]
[  OK  ] VirtualBox Guest Addition service [  OK  ]
Copying additional installer modules ...
Installing additional modules ...
vboxadd.sh: Building Guest Additions kernel modules.
vboxadd.sh: You should restart your guest to make sure the new modules are actually used.
vboxadd.sh: Starting the VirtualBox Guest Additions.

Could not find the X.Org or XFree86 Window System, skipping.


Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.0.0
VBoxService inside the vm claims: 5.1.18
Going on, assuming VBoxService is correct...
==> collabn1: Checking for guest additions in VM...
    collabn1: The guest additions on this VM do not match the installed version of
    collabn1: VirtualBox! In most cases this is fine, but in rare cases it can
    collabn1: prevent things such as shared folders from working properly. If you see
    collabn1: shared folder errors, please make sure the guest additions within the
    collabn1: virtual machine match the version of VirtualBox you have installed on
    collabn1: your host and reload your VM.
    collabn1:
    collabn1: Guest Additions Version: 5.0.0
    collabn1: VirtualBox Version: 5.1
==> collabn1: Setting hostname...
==> collabn1: Configuring and enabling network interfaces...
==> collabn1: Mounting shared folders...
    collabn1: /vagrant => E:/racattack-ansible-oracle
    collabn1: /media/sf_12cR1 => E:/racattack-ansible-oracle/12cR1
==> collabn1: Detected mount owner ID within mount options. (uid: 54320 guestpath: /media/sf_12cR1)
==> collabn1: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/sf_12cR1)
    collabn1: /media/stagefiles => E:/racattack-ansible-oracle/stagefiles
==> collabn1: Detected mount owner ID within mount options. (uid: 1000 guestpath: /media/stagefiles)
==> collabn1: Detected mount group ID within mount options. (gid: 54321 guestpath: /media/stagefiles)
==> collabn1: Running provisioner: shell...
    collabn1: Running: inline script
==> collabn1: overwriting /etc/resolv.conf
==> collabn1: Running provisioner: shell...
    collabn1: Running: inline script
==> collabn1: wrote key file "/etc/rndc.key"
==> collabn1: Stopping named:
==> collabn1: [  OK  ]
==> collabn1: Starting named:
==> collabn1: [  OK  ]
==> collabn1: successfully completed named steps
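
The GuestAdditions check-and-rebuild shown above looks like output from the vagrant-vbguest plugin rather than core Vagrant; if it is missing on another machine, it can be installed the usual way (an aside, not something this repository documents):

vagrant plugin install vagrant-vbguest
vagrant plugin list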

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant status

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Current machine states:

collabn2                  running (virtualbox)
collabn1                  running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vboxmanage list runningvms
"hashicorp_default_1490531708969_67077" {ab780940-aeef-4e4c-a868-6b5c6f81af2b}
"collabn2.1703260556" {71023f40-8635-4664-8c6e-730a1bfbe0e1}
"collabn1.1703260604" {d20095ef-e5ed-4554-96e0-0168125b3dd8}

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant ssh collabn1

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Last login: Sun Mar 26 13:20:43 2017 from 10.0.2.2
[vagrant@collabn1 ~]$ ls -al
total 32
drwx------  3 vagrant vagrant 4096 Mar 26 13:10 .
drwxr-xr-x. 5 root    root    4096 Aug  4  2015 ..
-rw-------  1 vagrant vagrant  139 Mar 26 13:22 .bash_history
-rw-r--r--  1 vagrant vagrant   18 May  7  2015 .bash_logout
-rw-r--r--  1 vagrant vagrant  176 May  7  2015 .bash_profile
-rw-r--r--  1 vagrant vagrant  124 May  7  2015 .bashrc
-rw-r--r--  1 vagrant vagrant  121 Dec 20  2012 .kshrc
drwx------  2 vagrant vagrant 4096 Aug  4  2015 .ssh

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
I don't know the oracle password, and it is not the same as the username.
[vagrant@collabn1 ~]$ su - oracle
Password:
su: incorrect password
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

[vagrant@collabn1 ~]$ sudo su - oracle
[oracle@collabn1 ~]$ exit
logout
[vagrant@collabn1 ~]$ sudo su -
[root@collabn1 ~]# cat /etc/passwd | column -t -s :
root       x  0      0      root                          /root                /bin/bash
bin        x  1      1      bin                           /bin                 /sbin/nologin
daemon     x  2      2      daemon                        /sbin                /sbin/nologin
adm        x  3      4      adm                           /var/adm             /sbin/nologin
lp         x  4      7      lp                            /var/spool/lpd       /sbin/nologin
sync       x  5      0      sync                          /sbin                /bin/sync
shutdown   x  6      0      shutdown                      /sbin                /sbin/shutdown
halt       x  7      0      halt                          /sbin                /sbin/halt
mail       x  8      12     mail                          /var/spool/mail      /sbin/nologin
uucp       x  10     14     uucp                          /var/spool/uucp      /sbin/nologin
operator   x  11     0      operator                      /root                /sbin/nologin
games      x  12     100    games                         /usr/games           /sbin/nologin
gopher     x  13     30     gopher                        /var/gopher          /sbin/nologin
ftp        x  14     50     FTP User                      /var/ftp             /sbin/nologin
nobody     x  99     99     Nobody                        /                    /sbin/nologin
vcsa       x  69     69     virtual console memory owner  /dev                 /sbin/nologin
rpc        x  32     32     Rpcbind Daemon                /var/cache/rpcbind   /sbin/nologin
rpcuser    x  29     29     RPC Service User              /var/lib/nfs         /sbin/nologin
nfsnobody  x  65534  65534  Anonymous NFS User            /var/lib/nfs         /sbin/nologin
saslauth   x  499    76     "Saslauthd user"              /var/empty/saslauth  /sbin/nologin
postfix    x  89     89     /var/spool/postfix            /sbin/nologin
sshd       x  74     74     Privilege-separated SSH       /var/empty/sshd      /sbin/nologin
named      x  25     25     Named                         /var/named           /sbin/nologin
dbus       x  81     81     System message bus            /                    /sbin/nologin
oracle     x  54321  54321  /home/oracle                  /bin/bash
applmgr    x  54322  54321  /home/applmgr                 /bin/bash
puppet     x  52     52     Puppet                        /var/lib/puppet      /sbin/nologin
vboxadd    x  498    1      /var/run/vboxadd              /bin/false
vagrant    x  1000   1000   /home/vagrant                 /bin/bash
[root@collabn1 ~]# df -h
Filesystem        Size  Used Avail Use% Mounted on
/dev/sda3          27G  2.6G   23G  11% /
tmpfs             1.5G     0  1.5G   0% /dev/shm
/dev/sda1         485M   93M  367M  21% /boot
/dev/sdb1          50G  180M   48G   1% /u01
vagrant           466G  370G   97G  80% /vagrant
media_sf_12cR1    466G  370G   97G  80% /media/sf_12cR1
media_stagefiles  466G  370G   97G  80% /media/stagefiles
[root@collabn1 ~]# cd /media/sf_12cR1/
[root@collabn1 sf_12cR1]# ls -l
total 4962982
-rwxrwxrwx 1 54320 oinstall          0 Mar 26 12:45 keep
-rwxrwxrwx 1 54320 oinstall 1673544724 Mar 25 20:20 linuxamd64_12102_database_1of2.zip
-rwxrwxrwx 1 54320 oinstall 1014530602 Mar 25 20:32 linuxamd64_12102_database_2of2.zip
-rwxrwxrwx 1 54320 oinstall 1747043545 Mar 25 20:44 linuxamd64_12102_grid_1of2.zip
-rwxrwxrwx 1 54320 oinstall  646972897 Mar 25 20:42 linuxamd64_12102_grid_2of2.zip
-rwxrwxrwx 1 54320 oinstall        181 Mar 26 12:45 readme.txt
[root@collabn1 sf_12cR1]#

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant ssh collabn2

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave
Last login: Sun Mar 26 13:10:38 2017 from 10.0.2.2
[vagrant@collabn2 ~]$ ls -al
total 32
drwx------  3 vagrant vagrant 4096 Mar 26 13:14 .
drwxr-xr-x. 5 root    root    4096 Aug  4  2015 ..
-rw-------  1 vagrant vagrant   56 Mar 26 13:14 .bash_history
-rw-r--r--  1 vagrant vagrant   18 May  7  2015 .bash_logout
-rw-r--r--  1 vagrant vagrant  176 May  7  2015 .bash_profile
-rw-r--r--  1 vagrant vagrant  124 May  7  2015 .bashrc
-rw-r--r--  1 vagrant vagrant  121 Dec 20  2012 .kshrc
drwx------  2 vagrant vagrant 4096 Aug  4  2015 .ssh
[vagrant@collabn2 ~]$ sudo su -
[root@collabn2 ~]# cat /etc/passwd | column -t -s :
root       x  0      0      root                          /root                /bin/bash
bin        x  1      1      bin                           /bin                 /sbin/nologin
daemon     x  2      2      daemon                        /sbin                /sbin/nologin
adm        x  3      4      adm                           /var/adm             /sbin/nologin
lp         x  4      7      lp                            /var/spool/lpd       /sbin/nologin
sync       x  5      0      sync                          /sbin                /bin/sync
shutdown   x  6      0      shutdown                      /sbin                /sbin/shutdown
halt       x  7      0      halt                          /sbin                /sbin/halt
mail       x  8      12     mail                          /var/spool/mail      /sbin/nologin
uucp       x  10     14     uucp                          /var/spool/uucp      /sbin/nologin
operator   x  11     0      operator                      /root                /sbin/nologin
games      x  12     100    games                         /usr/games           /sbin/nologin
gopher     x  13     30     gopher                        /var/gopher          /sbin/nologin
ftp        x  14     50     FTP User                      /var/ftp             /sbin/nologin
nobody     x  99     99     Nobody                        /                    /sbin/nologin
vcsa       x  69     69     virtual console memory owner  /dev                 /sbin/nologin
rpc        x  32     32     Rpcbind Daemon                /var/cache/rpcbind   /sbin/nologin
rpcuser    x  29     29     RPC Service User              /var/lib/nfs         /sbin/nologin
nfsnobody  x  65534  65534  Anonymous NFS User            /var/lib/nfs         /sbin/nologin
saslauth   x  499    76     "Saslauthd user"              /var/empty/saslauth  /sbin/nologin
postfix    x  89     89     /var/spool/postfix            /sbin/nologin
sshd       x  74     74     Privilege-separated SSH       /var/empty/sshd      /sbin/nologin
named      x  25     25     Named                         /var/named           /sbin/nologin
dbus       x  81     81     System message bus            /                    /sbin/nologin
oracle     x  54321  54321  /home/oracle                  /bin/bash
applmgr    x  54322  54321  /home/applmgr                 /bin/bash
puppet     x  52     52     Puppet                        /var/lib/puppet      /sbin/nologin
vboxadd    x  498    1      /var/run/vboxadd              /bin/false
vagrant    x  1000   1000   /home/vagrant                 /bin/bash
[root@collabn2 ~]# df -h
Filesystem        Size  Used Avail Use% Mounted on
/dev/sda3          27G  2.6G   23G  11% /
tmpfs             1.5G     0  1.5G   0% /dev/shm
/dev/sda1         485M   93M  367M  21% /boot
/dev/sdb1          50G  180M   48G   1% /u01
vagrant           466G  370G   97G  80% /vagrant
media_sf_12cR1    466G  370G   97G  80% /media/sf_12cR1
media_stagefiles  466G  370G   97G  80% /media/stagefiles
[root@collabn2 ~]# cd /media/sf_12cR1/
[root@collabn2 sf_12cR1]# ls -l
total 4962982
-rwxrwxrwx 1 54320 oinstall          0 Mar 26 12:45 keep
-rwxrwxrwx 1 54320 oinstall 1673544724 Mar 25 20:20 linuxamd64_12102_database_1of2.zip
-rwxrwxrwx 1 54320 oinstall 1014530602 Mar 25 20:32 linuxamd64_12102_database_2of2.zip
-rwxrwxrwx 1 54320 oinstall 1747043545 Mar 25 20:44 linuxamd64_12102_grid_1of2.zip
-rwxrwxrwx 1 54320 oinstall  646972897 Mar 25 20:42 linuxamd64_12102_grid_2of2.zip
-rwxrwxrwx 1 54320 oinstall        181 Mar 26 12:45 readme.txt
[root@collabn2 sf_12cR1]#

======================================================================

falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$ vagrant ssh-config

collabn2 eth1 lanip  :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave

collabn1 eth1 lanip  :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master
Host collabn2
  HostName 127.0.0.1
  User vagrant
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile C:/Users/falcon/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL

Host collabn1
  HostName 127.0.0.1
  User vagrant
  Port 2201
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile C:/Users/falcon/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL


falcon@falconidae MINGW64 /e/racattack-ansible-oracle (master)
$

Using username "vagrant".
Authenticating with public key "imported-openssh-key"
Last login: Sun Mar 26 14:00:36 2017 from 10.0.2.2
[vagrant@collabn1 ~]$

Using username "vagrant".
Authenticating with public key "imported-openssh-key"
Last login: Sun Mar 26 14:01:14 2017 from 10.0.2.2
[vagrant@collabn2 ~]$
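
As an aside, the vagrant ssh-config output can also be saved and handed to a plain ssh client, which is handy when a separate terminal (or a PuTTY-style session as above) is preferred over vagrant ssh; a minimal sketch:

cd /e/racattack-ansible-oracle
vagrant ssh-config > ssh.config

ssh -F ssh.config collabn1
scp -F ssh.config somefile collabn2:/tmp/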

February 9, 2017

Steps to Recreate Central Inventory in Real Applications Clusters (Doc ID 413939.1)

Filed under: 12c,RAC — mdinh @ 3:13 am

$ echo $ORACLE_HOME

/u01/app/oracle/product/12.1.0/db_1

$ $ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME

Oracle Interim Patch Installer version 12.1.0.1.3
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/db_1/oraInst.loc
OPatch version    : 12.1.0.1.3
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/opatch/opatch2017-02-08_15-56-03PM_1.log

List of Homes on this system:

Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
   Oracle Home dir. path does not exist in Central Inventory
   Oracle Home is a symbolic link
   Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73

This happened due to an error during install – an oraInventory mismatch.

$ cat /etc/oraInst.loc
inst_group=oinstall
inventory_loc=/u01/app/oraInventory

$ cd /u01/software/database
$ export DISTRIB=`pwd`
$ ./runInstaller -silent -showProgress -waitforcompletion -force -ignorePrereq -responseFile $DISTRIB/response/db_install.rsp \
> oracle.install.option=INSTALL_DB_SWONLY \
> UNIX_GROUP_NAME=oinstall \
> INVENTORY_LOCATION=/u01/app/oracle/oraInventory \

Back up oraInventory on both nodes, then attachHome.
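
A minimal sketch of that backup step, run as the software owner on each node (inventory path taken from /etc/oraInst.loc above):

INV_LOC=$(grep '^inventory_loc=' /etc/oraInst.loc | cut -d= -f2)   # /u01/app/oraInventory
tar -czf /tmp/oraInventory_$(hostname -s)_$(date +%Y%m%d).tar.gz -C "$(dirname "$INV_LOC")" "$(basename "$INV_LOC")"
ls -l /tmp/oraInventory_*.tar.gz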

$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -silent -ignoreSysPrereqs -attachHome \
ORACLE_HOME="/u02/app/12.1.0/grid" ORACLE_HOME_NAME="OraGI12Home1" \
LOCAL_NODE="node01" CLUSTER_NODES="{node01,node02}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'AttachHome' was successful.

$ ./runInstaller -silent -ignoreSysPrereqs -attachHome \
ORACLE_HOME="/u01/app/oracle/product/12.1.0/db_1" ORACLE_HOME_NAME="OraDB12Home1" \
LOCAL_NODE="node01" CLUSTER_NODES="{node01,node02}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16383 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'AttachHome' was successful.

February 8, 2017

runcluvfy.sh -pre crsinst NTP failed PRVF-07590 PRVG-01017

Filed under: 12c,RAC — mdinh @ 12:56 pm

12c (12.1.0.2.0) RAC Oracle Linux Server release 7.3
/u01/software/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

Starting Clock synchronization checks using Network Time Protocol(NTP)...

Checking existence of NTP configuration file "/etc/ntp.conf" across nodes
  Node Name                             File exists?            
  ------------------------------------  ------------------------
  node02                                yes                     
  node01                                yes                     
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?                
  ------------------------------------  ------------------------
  node02                                no                      
  node01                                yes                     
PRVF-7590 : "ntpd" is not running on node "node02"
PRVG-1017 : NTP configuration file is present on nodes "node02" on which NTP daemon or service was not running
Result: Clock synchronization check using Network Time Protocol(NTP) failed

NTP was indeed running on both nodes.
The issue is that /var/run/ntpd.pid does not exist on the failed node, because ntpd there was started with the wrong options (with -g and without -p /var/run/ntpd.pid, as the BAD output below shows).

GOOD:

# cat /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# systemctl status ntpd.service
ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-02-03 20:37:18 CST; 3 days ago
 Main PID: 22517 (ntpd)
   CGroup: /system.slice/ntpd.service
           /usr/sbin/ntpd -u ntp:ntp -x -u ntp:ntp -p /var/run/ntpd.pid

# ll /var/run/ntpd.*
-rw-r--r-- 1 root root 5 Feb  3 20:37 /var/run/ntpd.pid

BAD:

# cat /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# systemctl status ntpd.service
ntpd.service - Network Time Service           
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-02-03 18:10:23 CST; 3 days ago
 Main PID: 22403 (ntpd)
   CGroup: /system.slice/ntpd.service
           /usr/sbin/ntpd -u ntp:ntp -g           

# ll /var/run/ntpd.*
ls: cannot access /var/run/ntpd.*: No such file or directory

SOLUTION:

Restart ntpd on the failed node so that it picks up the options from /etc/sysconfig/ntpd and writes /var/run/ntpd.pid.
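
A minimal sketch of the fix on the failed node, assuming systemd as shown above:

# restart so ntpd picks up OPTIONS from /etc/sysconfig/ntpd, including -p /var/run/ntpd.pid
systemctl restart ntpd.service
systemctl status ntpd.service
ls -l /var/run/ntpd.pid

# then re-run the failed check
/u01/software/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose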

February 5, 2017

12c Database spfile Parameter alias is not created in ASM Diskgroup (Doc ID 1950769.1)

Filed under: 12c,RAC — mdinh @ 8:41 pm

This is new as of 12.1.0.2.

$ srvctl config database -d hawk
Database unique name: hawk
Database name: hawk
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/HAWK/PARAMETERFILE/spfile.264.934897017
Password file: +DATA/hawk/orapwhawk
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: 
Database instances: hawk1,hawk2
Configured nodes: hawk01,hawk02
Database is administrator managed

The alias will need to be created manually.

SQL> ALTER DISKGROUP dg1 ADD ALIAS '+DG1/rac12c/spfilerac12c.ora'  FOR  '+dg1/rac12c/parameterfile/spfile.271.860077229'; 

WARNING:
I did not create the alias and was curious why it was not created. Now I know.

What I did: echo "SPFILE='+DATA/HAWK/PARAMETERFILE/spfile.264.934897017'" > $ORACLE_HOME/dbs/init$ORACLE_SID.ora
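
A sketch of the cleaner alternative: create the alias with asmcmd (as the grid owner, against the ASM instance) and point the pfile stub at the alias instead of the raw OMF name; file and alias names here follow the srvctl output above:

asmcmd mkalias +DATA/HAWK/PARAMETERFILE/spfile.264.934897017 +DATA/HAWK/spfilehawk.ora

echo "SPFILE='+DATA/HAWK/spfilehawk.ora'" > $ORACLE_HOME/dbs/init${ORACLE_SID}.ora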

Setting SPFILE Parameter Values for Oracle RAC
http://docs.oracle.com/database/121/RACAD/admin.htm#RACAD815

October 22, 2016

Oracle Health Check

Filed under: 11g,oracle,RAC — mdinh @ 12:44 pm

Currently, I am working on a health check for ODA and find there are too many tools with disparate information.

I am sure there are more than the ones listed below, but I stopped searching.

ODA Oracle Database Appliance orachk Healthcheck (Doc ID 2126926.1)
Multiplexing Redolog and Control File on ODA (Doc ID 2086289.1)

ORAchk – Health Checks for the Oracle Stack (Doc ID 1268927.2)
How to Perform a Health Check on the Database (Doc ID 122669.1)
Health Monitor (Doc ID 466920.1)

Oracle Configuration Manager Quick Start Guide (Doc ID 728988.5)
Pre-12+ OCM Collectors to Be Decommissioned Summer of 2015 (Doc ID 1986521.1)

cluvfy comp healthcheck

One example found: ORAchk will report if fewer than 3 SCANs are configured, while cluvfy comp healthcheck (11.2) does not.

Interesting side track: < 3 not escaped is ❤

Complete cluvfy comp healthcheck results, plus how to create database user CVUSYS (WARNING: ~1600 lines).
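
For reference, a hedged sketch of how the check can be invoked; the -collect, -bestpractice, -html and -save options come from the cluvfy documentation, so verify against cluvfy comp healthcheck -help on your release:

# cluster-wide best-practice checks, saved as an HTML report
cluvfy comp healthcheck -collect cluster -bestpractice -html -save -savedir /tmp/healthcheck

# database checks (this is where the CVUSYS user comes into play)
cluvfy comp healthcheck -collect database -db emu -bestpractice -html -save -savedir /tmp/healthcheck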

Some failures from cluvfy comp healthcheck.

******************************************************************************************
Database recommendation checks for "emu"
******************************************************************************************

Verification Check        :  DB Log Mode
Verification Description  :  Checks the database log archiving mode
Verification Result       :  NOT MET
Verification Summary      :  Check for DB Log Mode failed
Additional Details        :  If the database is in log archiving mode, then it is
                             always desirable and advisable to upgrade the database in
                             noarchivelog mode as that will reduce the time taken to
                             upgrade the database. After the upgrade, the database can
                             be reverted to the archivelog mode.
References (URLs/Notes)   :  https://support.oracle.com/CSP/main/article?cmd=show&type=N
                             OT&id=429825.1

Database(Instance)  Status    Expected Value                Actual Value
------------------------------------------------------------------------------------------

emu                 FAILED    db_log_mode = NOARCHIVELOG    db_log_mode = ARCHIVELOG

__________________________________________________________________________________________

Database(Instance)  Error details
------------------------------------------------------------------------------------------

emu                 Error - NOARCHIVELOG mode is recommended when upgrading
                    Cause - Cause Of Problem Not Available
                    Action - User Action Not Available
__________________________________________________________________________________________

Verification Check        :  Users Granted CONNECT Role
Verification Description  :  Checks for the presence of any users with CONNECT role
Verification Result       :  NOT MET
Verification Summary      :  Check for Users Granted CONNECT Role failed

Database(Instance)  Status    Expected Value                Actual Value
------------------------------------------------------------------------------------------

emu                 FAILED    connect_role_grantees = 0     connect_role_grantees = 5

__________________________________________________________________________________________

Database(Instance)  Error details
------------------------------------------------------------------------------------------

emu                 Error - CONNECT role granted users found
                    Cause - Cause Of Problem Not Available
                    Action - User Action Not Available
__________________________________________________________________________________________

Does Oracle itself need a health check?

October 9, 2016

cluvfy is your friend

Filed under: RAC — mdinh @ 11:54 pm

Just a reminder to self to use cluvfy

olsnodes -i -n -s -t
grep 'master node' $CRS_HOME/log/`hostname -s`/cssd/ocssd.*|tail -1

cluvfy stage -pre help
cluvfy stage -post  help

++++++++++


[grid@rac01:+ASM1:/home/grid]
$ olsnodes -i -n -s -t
rac01   1       rac01-vip       Active  Unpinned
rac02   2       rac02-vip       Active  Unpinned

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ env|grep HOME
CRS_HOME=/u01/app/11.2.0.4/grid
HOME=/home/grid
XAG_HOME=/u01/app/grid/xag
ORACLE_HOME=/u01/app/11.2.0.4/grid

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ grep 'master node' $CRS_HOME/log/`hostname -s`/cssd/ocssd.*|tail -1
/u01/app/11.2.0.4/grid/log/rac01/cssd/ocssd.log:2016-10-09 10:48:55.837: 
[    CSSD][28161792]clssgmCMReconfig: reconfiguration successful, 
incarnation 371471500 with 2 nodes, local node number 1, master node number 1

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -pre help

ERROR:
Unexpected symbol "help". See usage for detail.

USAGE:
cluvfy stage {-pre|-post}    [-verbose]

SYNTAX (for Stages):
cluvfy stage -pre cfs -n  -s  [-verbose]
cluvfy stage -pre
                   crsinst -file  [-fixup [-fixupdir ]] [-verbose]
                   crsinst -upgrade [-n ] [-rolling] -src_crshome  -dest_crshome 
                           -dest_version  [-fixup [-fixupdir ]] [-verbose]
                   crsinst -n  [-r {10gR1|10gR2|11gR1|11gR2}]
                           [-c ] [-q ]
                           [-osdba ] [-orainv ]
                           [-asm [-asmgrp ] [-asmdev ]] [-crshome ]
                           [-fixup [-fixupdir ]] [-networks ]
                           [-verbose]
cluvfy stage -pre acfscfg -n  [-asmdev ] [-verbose]
cluvfy stage -pre
                   dbinst -n  [-r {10gR1|10gR2|11gR1|11gR2}] [-osdba ] [-d ]
                          [-fixup [-fixupdir ]] [-verbose]
                   dbinst -upgrade -src_dbhome  [-dbname ] -dest_dbhome  -dest_version 
                          [-fixup [-fixupdir ]] [-verbose]
cluvfy stage -pre dbcfg -n  -d  [-fixup [-fixupdir ]] [-verbose]
cluvfy stage -pre hacfg [-osdba ] [-orainv ] [-fixup [-fixupdir ]] [-verbose]
cluvfy stage -pre nodeadd -n  [-vip ] [-fixup [-fixupdir ]] [-verbose]
cluvfy stage -post hwos -n  [-s ] [-verbose]
cluvfy stage -post cfs -n  -f  [-verbose]
cluvfy stage -post crsinst -n  [-verbose]
cluvfy stage -post acfscfg -n  [-verbose]
cluvfy stage -post hacfg [-verbose]
cluvfy stage -post nodeadd -n  [-verbose]
cluvfy stage -post nodedel -n  [-verbose]

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -post  help

ERROR:
Unexpected symbol "help". See usage for detail.

USAGE:
cluvfy stage {-pre|-post}    [-verbose]

SYNTAX (for Stages):
cluvfy stage -pre cfs -n  -s  [-verbose]
cluvfy stage -pre
                   crsinst -file  [-fixup [-fixupdir ]] [-verbose]
                   crsinst -upgrade [-n ] [-rolling] -src_crshome  -dest_crshome 
                           -dest_version  [-fixup [-fixupdir ]] [-verbose]
                   crsinst -n  [-r {10gR1|10gR2|11gR1|11gR2}]
                           [-c ] [-q ]
                           [-osdba ] [-orainv ]
                           [-asm [-asmgrp ] [-asmdev ]] [-crshome ]
                           [-fixup [-fixupdir ]] [-networks ]
                           [-verbose]
cluvfy stage -pre acfscfg -n  [-asmdev ] [-verbose]
cluvfy stage -pre
                   dbinst -n  [-r {10gR1|10gR2|11gR1|11gR2}] [-osdba ] [-d ]
                          [-fixup [-fixupdir ]] [-verbose]
                   dbinst -upgrade -src_dbhome  [-dbname ] -dest_dbhome  -dest_version 
                          [-fixup [-fixupdir ]] [-verbose]
cluvfy stage -pre dbcfg -n  -d  [-fixup [-fixupdir ]] [-verbose]
cluvfy stage -pre hacfg [-osdba ] [-orainv ] [-fixup [-fixupdir ]] [-verbose]
cluvfy stage -pre nodeadd -n  [-vip ] [-fixup [-fixupdir ]] [-verbose]
cluvfy stage -post hwos -n  [-s ] [-verbose]
cluvfy stage -post cfs -n  -f  [-verbose]
cluvfy stage -post crsinst -n  [-verbose]
cluvfy stage -post acfscfg -n  [-verbose]
cluvfy stage -post hacfg [-verbose]
cluvfy stage -post nodeadd -n  [-verbose]
cluvfy stage -post nodedel -n  [-verbose]

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -pre crsinst -n rac01,rac02 -fixup

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "rac01"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity for interface "eth2"
Node connectivity passed for interface "eth2"
TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac02:/u01/app/11.2.0.4/grid,rac02:/tmp"
Free disk space check passed for "rac01:/u01/app/11.2.0.4/grid,rac01:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed
Default user file creation mask check failed
Check failed on nodes:
        rac02,rac01
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Time zone consistency check passed

Starting check for Reverse path filter setting ...

Check for Reverse path filter setting passed

Pre-check for cluster services setup was unsuccessful on all the nodes.

++++++++++

[grid@rac01:+ASM1:/home/grid]
$ umask
0002
[grid@rac01:+ASM1:/home/grid]
$ ssh rac02 "umask"
0022
[grid@rac0
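
The failed default user file creation mask check above ties back to umask: rac01 reports 0002 while rac02 reports 0022. A minimal sketch of setting 022 explicitly for the grid user on both nodes (assuming bash login profiles and that nothing else overrides it):

# as grid, on each node:
echo 'umask 022' >> ~/.bash_profile
. ~/.bash_profile
umask     # expect 0022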

+++++++++

[grid@rac01:+ASM1:/home/grid]
$ cluvfy stage -post hwos -n rac01,rac02

Performing post-checks for hardware and operating system setup

Checking node reachability...
Node reachability check passed from node "rac01"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity for interface "eth2"
Node connectivity passed for interface "eth2"
TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed

Checking shared storage accessibility...

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sde                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdd                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdg                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdh                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdi                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdf                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdb                              rac02 rac01

  Disk                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /dev/sdc                              rac02 rac01

  ACFS                                  Sharing Nodes (2 in count)
  ------------------------------------  ------------------------
  /acfsmount                            rac02 rac01


Shared storage check was successful on nodes "rac02,rac01"

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Post-check for hardware and operating system setup was successful.
[grid@rac01:+ASM1:/home/grid]
$

January 26, 2016

Unsolved Case for Missing archived_log Backup

Filed under: 11g,oracle,RAC,RMAN — mdinh @ 11:00 pm

The project was to migrate a database from one DC to another.

The decision we made was to perform an RMAN KEEP backup so it would not interfere with the existing retention policy.

The backup also resides in its own separate directory for easier checksum and transfer.

This is for a 4-node RAC environment, and the backup was taken from node1 at 2016-JAN-21 14:12:10.

RMAN backup scripts.

run {
ALLOCATE CHANNEL C1 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 4G MAXOPENFILES 1;
ALLOCATE CHANNEL C2 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 4G MAXOPENFILES 1;
ALLOCATE CHANNEL C3 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 4G MAXOPENFILES 1;
ALLOCATE CHANNEL C4 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 4G MAXOPENFILES 1;
ALLOCATE CHANNEL C5 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s' MAXPIECESIZE 4G MAXOPENFILES 1;
SQL 'ALTER SYSTEM CHECKPOINT';

BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 DATABASE FILESPERSET 1 
KEEP UNTIL TIME 'ADD_MONTHS(SYSDATE,1)' TAG='MIGRATION_KEEP';

BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG from time 'trunc(sysdate)' FILESPERSET 2 
KEEP UNTIL TIME 'ADD_MONTHS(SYSDATE,1)' TAG='MIGRATION_KEEP';
}
run {
ALLOCATE CHANNEL C6 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/CTL_%d_%I_%T_%U_MIGRATION_%s';
BACKUP AS COMPRESSED BACKUPSET CURRENT CONTROLFILE KEEP UNTIL TIME 'ADD_MONTHS(SYSDATE,1)' TAG='MIGRATION_KEEP';
}
LIST BACKUP OF DATABASE SUMMARY TAG='MIGRATION_KEEP';
LIST BACKUP OF ARCHIVELOG ALL SUMMARY TAG='MIGRATION_KEEP';
LIST BACKUP OF CONTROLFILE TAG='MIGRATION_KEEP';
REPORT SCHEMA;

When recovering the database, we encountered the error below.

ERROR from database recovery

RMAN-06025: no backup of archived log for thread 1 with sequence 287407 and starting SCN of 198452997924 found to restore

According to gv$archived_log, the sequence has not been deleted.

SQL> select inst_id, thread#, sequence#, completion_time, status, deleted
from gv$archived_log
where thread#=1 and sequence# between 287406 and 287408
order by 1,2,3
;

  2    3    4    5  
   INST_ID    THREAD#  SEQUENCE# COMPLETION_TIME      S DEL
---------- ---------- ---------- -------------------- - ---
	 1	    1	  287406 2016-JAN-21 18:51:29 A NO
	 1	    1	  287407 2016-JAN-21 18:59:45 A NO
	 1	    1	  287408 2016-JAN-21 19:00:08 A NO
	 2	    1	  287406 2016-JAN-21 18:51:29 A NO
	 2	    1	  287407 2016-JAN-21 18:59:45 A NO
	 2	    1	  287408 2016-JAN-21 19:00:08 A NO
	 3	    1	  287406 2016-JAN-21 18:51:29 A NO
	 3	    1	  287407 2016-JAN-21 18:59:45 A NO
	 3	    1	  287408 2016-JAN-21 19:00:08 A NO
	 4	    1	  287406 2016-JAN-21 18:51:29 A NO
	 4	    1	  287407 2016-JAN-21 18:59:45 A NO
	 4	    1	  287408 2016-JAN-21 19:00:08 A NO

12 rows selected.

SQL> SQL> 

The backup was started at 2016-JAN-21 14:12:10.

Notice that sequence 287407 thread 1 is missing from the MIGRATION_KEEP backup; as the listings below show, it only appears in the later ARC021THU1923 backup.

RMAN> list backup of archivelog sequence 287406 thread 1 summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233366  B  A  A DISK        2016-JAN-21 18:59:30 1       1       YES        MIGRATION_KEEP
233374  B  A  A DISK        2016-JAN-21 19:23:41 1       1       YES        ARC021THU1923

RMAN> list backup of archivelog sequence 287407 thread 1 summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233375  B  A  A DISK        2016-JAN-21 19:23:46 1       1       YES        ARC021THU1923

RMAN> list backup of archivelog sequence 287408 thread 1 summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233372  B  A  A DISK        2016-JAN-21 19:00:16 1       1       YES        MIGRATION_KEEP
233377  B  A  A DISK        2016-JAN-21 19:23:47 1       1       YES        ARC021THU1923
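
Had the missing sequence still been needed during the migration window, it could have been backed up into the KEEP set explicitly; a hedged sketch (not what was actually run), assuming the archived log was still on disk as gv$archived_log suggests:

rman target / <<'EOF'
run {
  ALLOCATE CHANNEL C1 DEVICE TYPE DISK FORMAT '/oracle/FRA/migration_backup/%d_%I_%T_%U_MIGRATION_%s';
  BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG SEQUENCE 287407 THREAD 1
    KEEP UNTIL TIME 'ADD_MONTHS(SYSDATE,1)' TAG='MIGRATION_KEEP';
}
LIST BACKUP OF ARCHIVELOG SEQUENCE 287407 THREAD 1 SUMMARY;
EOF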


RMAN> list backup summary tag MIGRATION_KEEP;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233092  B  0  A DISK        2016-JAN-21 14:12:10 2       1       YES        MIGRATION_KEEP
233093  B  0  A DISK        2016-JAN-21 14:12:19 2       1       YES        MIGRATION_KEEP

233306  B  0  A DISK        2016-JAN-21 18:48:31 1       1       YES        MIGRATION_KEEP
233307  B  0  A DISK        2016-JAN-21 18:48:32 1       1       YES        MIGRATION_KEEP

233308  B  F  A DISK        2016-JAN-21 18:48:37 1       1       YES        MIGRATION_KEEP
233309  B  A  A DISK        2016-JAN-21 18:50:20 1       1       YES        MIGRATION_KEEP
233310  B  A  A DISK        2016-JAN-21 18:50:47 1       1       YES        MIGRATION_KEEP
233311  B  A  A DISK        2016-JAN-21 18:50:48 1       1       YES        MIGRATION_KEEP
233312  B  A  A DISK        2016-JAN-21 18:50:54 1       1       YES        MIGRATION_KEEP
233313  B  A  A DISK        2016-JAN-21 18:50:58 1       1       YES        MIGRATION_KEEP
233314  B  F  A DISK        2016-JAN-21 18:51:12 1       1       YES        MIGRATION_KEEP
233315  B  A  A DISK        2016-JAN-21 18:52:00 1       1       YES        MIGRATION_KEEP

233366  B  A  A DISK        2016-JAN-21 18:59:30 1       1       YES        MIGRATION_KEEP
233367  B  A  A DISK        2016-JAN-21 18:59:32 1       1       YES        MIGRATION_KEEP
233368  B  A  A DISK        2016-JAN-21 18:59:32 1       1       YES        MIGRATION_KEEP
233369  B  A  A DISK        2016-JAN-21 18:59:35 1       1       YES        MIGRATION_KEEP
233370  B  F  A DISK        2016-JAN-21 18:59:54 1       1       YES        MIGRATION_KEEP
233371  B  F  A DISK        2016-JAN-21 19:00:04 1       1       YES        MIGRATION_KEEP
233372  B  A  A DISK        2016-JAN-21 19:00:16 1       1       YES        MIGRATION_KEEP
233373  B  F  A DISK        2016-JAN-21 19:00:22 1       1       YES        MIGRATION_KEEP

RMAN> list backup of controlfile summary tag MIGRATION_KEEP;

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233314  B  F  A DISK        2016-JAN-21 18:51:12 1       1       YES        MIGRATION_KEEP
233370  B  F  A DISK        2016-JAN-21 18:59:54 1       1       YES        MIGRATION_KEEP
233373  B  F  A DISK        2016-JAN-21 19:00:22 1       1       YES        MIGRATION_KEEP --- This CF was restored.

RMAN> 
RMAN> restore controlfile from '/rman_bkp/FRA/migration_backup/CTL_3036635614_20160121_m6qrusa4_1_1_MIGRATION_235206';
RMAN> list backup of archivelog all summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
233309  B  A  A DISK        21-JAN-2016 18:50:20 1       1       YES        MIGRATION_KEEP

233365  B  A  A DISK        21-JAN-2016 18:59:29 1       1       YES        MIGRATION_KEEP
233366  B  A  A DISK        21-JAN-2016 18:59:30 1       1       YES        MIGRATION_KEEP
233367  B  A  A DISK        21-JAN-2016 18:59:32 1       1       YES        MIGRATION_KEEP
233368  B  A  A DISK        21-JAN-2016 18:59:32 1       1       YES        MIGRATION_KEEP
233369  B  A  A DISK        21-JAN-2016 18:59:35 1       1       YES        MIGRATION_KEEP
233372  B  A  A DISK        21-JAN-2016 19:00:16 1       1       YES        MIGRATION_KEEP

RMAN> list backupset 233372;


List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time     
------- ---------- ----------- ------------ --------------------
233372  35.84M     DISK        00:00:04     21-JAN-2016 19:00:16
        BP Key: 359665   Status: AVAILABLE  Compressed: YES  Tag: MIGRATION_KEEP
        Piece Name: /rman_bkp/FRA/migration_backup/CTL_3036635614_20160121_m5qrus9s_1_1_MIGRATION_235205
        Keep: BACKUP_LOGS        Until: 21-FEB-2016 19:00:12

  List of Archived Logs in backup set 233372
  Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
  ---- ------- ---------- -------------------- ---------- ---------
  1    287408  198453187859 21-JAN-2016 18:59:44 198453194240 21-JAN-2016 19:00:08
  2    207046  198452998035 21-JAN-2016 18:51:29 198453187879 21-JAN-2016 18:59:44
  2    207047  198453187879 21-JAN-2016 18:59:44 198453193569 21-JAN-2016 19:00:05
  3    182524  198452999167 21-JAN-2016 18:51:31 198453188295 21-JAN-2016 18:59:47
  3    182525  198453188295 21-JAN-2016 18:59:47 198453194175 21-JAN-2016 19:00:08
  4    75721   198452999243 21-JAN-2016 18:51:32 198453188286 21-JAN-2016 18:59:47
  4    75722   198453188286 21-JAN-2016 18:59:47 198453194112 21-JAN-2016 19:00:08

RMAN> 

Even in the backup log file, sequence 287407 is missing.

channel C4: backup set complete, elapsed time: 00:00:30
channel C4: starting compressed archived log backup set
channel C4: specifying archived log(s) in backup set
input archived log thread=4 sequence=75720 RECID=709008 STAMP=901738292
input archived log thread=1 sequence=287406 RECID=709005 STAMP=901738289
channel C4: starting piece 1 at 2016-JAN-21 18:59:28
channel C5: finished piece 1 at 2016-JAN-21 18:59:28
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_lvqrus7u_1_1_MIGRATION_235199 tag=MIGRATION_KEEP comment=NONE
channel C5: backup set complete, elapsed time: 00:00:13
channel C5: starting compressed archived log backup set
channel C5: specifying archived log(s) in backup set
input archived log thread=2 sequence=207045 RECID=709006 STAMP=901738289
channel C5: starting piece 1 at 2016-JAN-21 18:59:29
channel C3: finished piece 1 at 2016-JAN-21 18:59:30
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_luqrus7p_1_1_MIGRATION_235198 tag=MIGRATION_KEEP comment=NONE
channel C3: backup set complete, elapsed time: 00:00:20
channel C4: finished piece 1 at 2016-JAN-21 18:59:32
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_m1qrus8g_1_1_MIGRATION_235201 tag=MIGRATION_KEEP comment=NONE
channel C4: backup set complete, elapsed time: 00:00:04
channel C5: finished piece 1 at 2016-JAN-21 18:59:32
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_m2qrus8g_1_1_MIGRATION_235202 tag=MIGRATION_KEEP comment=NONE
channel C5: backup set complete, elapsed time: 00:00:03
channel C1: finished piece 1 at 2016-JAN-21 18:59:36
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_ltqrus7p_1_1_MIGRATION_235197 tag=MIGRATION_KEEP comment=NONE
channel C1: backup set complete, elapsed time: 00:00:31
channel C2: finished piece 1 at 2016-JAN-21 18:59:36
piece handle=/oracle/FRA/migration_backup/3036635614_20160121_m0qrus83_1_1_MIGRATION_235200 tag=MIGRATION_KEEP comment=NONE
channel C2: backup set complete, elapsed time: 00:00:15
Finished backup at 2016-JAN-21 18:59:36
released channel: C1
released channel: C2
released channel: C3
released channel: C4
released channel: C5
                               
allocated channel: C6
channel C6: SID=373 instance=1 device type=DISK

Starting backup at 2016-JAN-21 18:59:44
current log archived

backup will be obsolete on date 2016-FEB-21 18:59:52
archived logs required to recover from this backup will be backed up
channel C6: starting compressed full datafile backup set
channel C6: specifying datafile(s) in backup set
including current control file in backup set
channel C6: starting piece 1 at 2016-JAN-21 18:59:53
channel C6: finished piece 1 at 2016-JAN-21 19:00:04
piece handle=/oracle/FRA/migration_backup/CTL_3036635614_20160121_m3qrus98_1_1_MIGRATION_235203 tag=MIGRATION_KEEP comment=NONE
channel C6: backup set complete, elapsed time: 00:00:11

backup will be obsolete on date 2016-FEB-21 19:00:04
archived logs required to recover from this backup will be backed up
channel C6: starting compressed full datafile backup set
channel C6: specifying datafile(s) in backup set
including current SPFILE in backup set
channel C6: starting piece 1 at 2016-JAN-21 19:00:04
channel C6: finished piece 1 at 2016-JAN-21 19:00:05
piece handle=/oracle/FRA/migration_backup/CTL_3036635614_20160121_m4qrus9k_1_1_MIGRATION_235204 tag=MIGRATION_KEEP comment=NONE
channel C6: backup set complete, elapsed time: 00:00:01


current log archived
backup will be obsolete on date 2016-FEB-21 19:00:12
archived logs required to recover from this backup will be backed up
channel C6: starting compressed archived log backup set
channel C6: specifying archived log(s) in backup set
input archived log thread=2 sequence=207046 RECID=709010 STAMP=901738785
input archived log thread=3 sequence=182524 RECID=709011 STAMP=901738788
input archived log thread=4 sequence=75721 RECID=709012 STAMP=901738788
input archived log thread=1 sequence=287408 RECID=709016 STAMP=901738808
input archived log thread=2 sequence=207047 RECID=709013 STAMP=901738806
input archived log thread=4 sequence=75722 RECID=709014 STAMP=901738808
input archived log thread=3 sequence=182525 RECID=709015 STAMP=901738808
channel C6: starting piece 1 at 2016-JAN-21 19:00:13
channel C6: finished piece 1 at 2016-JAN-21 19:00:20
piece handle=/oracle/FRA/migration_backup/CTL_3036635614_20160121_m5qrus9s_1_1_MIGRATION_235205 tag=MIGRATION_KEEP comment=NONE
channel C6: backup set complete, elapsed time: 00:00:07

backup will be obsolete on date 2016-FEB-21 19:00:20
archived logs required to recover from this backup will be backed up
channel C6: starting compressed full datafile backup set
channel C6: specifying datafile(s) in backup set
including current control file in backup set
channel C6: starting piece 1 at 2016-JAN-21 19:00:21
channel C6: finished piece 1 at 2016-JAN-21 19:00:31
piece handle=/oracle/FRA/migration_backup/CTL_3036635614_20160121_m6qrusa4_1_1_MIGRATION_235206 tag=MIGRATION_KEEP comment=NONE
channel C6: backup set complete, elapsed time: 00:00:10
Finished backup at 2016-JAN-21 19:00:31
released channel: C6

Any ideas as to why the archived log was missing from the backup?

BTW, I have already deleted the backups to save space.
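
For completeness, here is a minimal sketch of the pre-check I would run next time before trusting a keep backup (the sequence numbers are the ones from this incident): preview which backups RMAN would use for the range, crosscheck, and back up anything that has no backup yet.

RMAN> restore archivelog from sequence 287406 until sequence 287408 thread 1 preview summary;
RMAN> crosscheck archivelog all;
RMAN> backup archivelog all not backed up 1 times tag 'MIGRATION_KEEP';

If the preview had reported RMAN-06025 for sequence 287407 while the archived log was still on disk, the backup could simply have been re-run.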

December 19, 2015

Patching with OPLAN

Filed under: 11g,oracle,PSU,RAC — mdinh @ 6:47 pm

From a time far, far away, I tweeted about Oracle Software Patching with OPLAN (Doc ID 1306814.1) and decided to give it a try.

First, you will need to configure X11; otherwise you get the error:
Can’t connect to X11 window server using ‘localhost:10.0’ as the value of the DISPLAY variable.

Second, you will need to use OPatch version 12.1.0.1.10; otherwise you get the error:
Caught exception: java.lang.ExceptionInInitializerError
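
A quick pre-flight check for both prerequisites might look like this (a sketch: the OPatch staging path is the one used below, and xdpyinfo only verifies that the forwarded DISPLAY is reachable):

$ xdpyinfo > /dev/null && echo "X11 display $DISPLAY is reachable"
$ /media/sf_Linux/patches/OPatch/opatch version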

If you would like to see the results, then open and download Patch_Apply_Instructions_$PatchNumber.html from Google Drive.

For some reason, opening it directly does not work.

[grid@rac01:+ASM1:/home/grid]
$ /media/sf_Linux/patches/OPatch/oplan/oplan generateApplySteps /media/sf_Linux/patches/21744348/21523375/
from oplan /media/sf_Linux/patches/OPatch/oplan/../opatchauto-dir/opatchautocore/jlib/oracle.oplan.classpath.jar:/media/sf_Linux/patches/OPatch/oplan/../opatchauto-dir/opatchautocore/../opatchautodb/jlib/oplan_db.jar

Processing request...
Review the log messages captured in the following file: /u01/app/11.2.0.4/grid/cfgtoollogs/oplan/2015-12-19-10-32-11/log.txt
Success!

Follow the instructions outlined in the following Installation Instructions document and patch your system:

Apply Instructions (HTML) : /u01/app/11.2.0.4/grid/cfgtoollogs/oplan/2015-12-19-10-32-11/ApplyInstructions.html
Apply Instructions (TEXT) : /u01/app/11.2.0.4/grid/cfgtoollogs/oplan/2015-12-19-10-32-11/ApplyInstructions.txt

[grid@rac01:+ASM1:/u01/app/11.2.0.4/grid/cfgtoollogs/oplan]
$ cd /u01/app/11.2.0.4/grid/cfgtoollogs/oplan/2015-12-19-10-32-11/
[grid@rac01:+ASM1:/u01/app/11.2.0.4/grid/cfgtoollogs/oplan/2015-12-19-10-32-11]
$ ll
total 1284
-r--r-----. 1 grid oinstall 379154 Dec 19 10:32 ApplyInstructions.html
-r--r-----. 1 grid oinstall 8507 Dec 19 10:32 ApplyInstructions.txt
-r--r-----. 1 grid oinstall 9457 Dec 19 10:32 configuration.png
-r--r-----. 1 grid oinstall 42733 Dec 19 10:32 InplaceApplyNonRollingManual.txt
-r--r-----. 1 grid oinstall 36741 Dec 19 10:32 InplaceApplyRollingAuto.txt
-r--r-----. 1 grid oinstall 44548 Dec 19 10:32 InplaceApplyRollingManual.txt
-r--r-----. 1 grid oinstall 613286 Dec 19 10:32 log.txt
-r--r-----. 1 grid oinstall 0 Dec 19 10:32 log.txt.lck
dr-xr-x---. 3 grid oinstall 4096 Dec 19 10:32 machine-readable
-r--r-----. 1 grid oinstall 69478 Dec 19 10:32 OplaceApplyRolling.txt
-r--r-----. 1 grid oinstall 26608 Dec 19 10:32 OplaceSwitchbackRolling.txt
-r--r-----. 1 grid oinstall 353 Dec 19 10:32 README
-r--r-----. 1 grid oinstall 60991 Dec 19 10:32 README.html
[grid@rac01:+ASM1:/u01/app/11.2.0.4/grid/cfgtoollogs/oplan/2015-12-19-10-32-11]
$ 

Observation: the OPLAN instructions may need to be regenerated after each patch is applied.

The Combo of OJVM Component 11.2.0.4.5 DB PSU + GI PSU 11.2.0.4.8 (Oct2015) patch contains the following patches:
Patch 21523375 – Database Grid Infrastructure Patch Set Update 11.2.0.4.8 (Oct2015) –> RAC-Rolling Installable
Patch 21555791 – Oracle JavaVM Component 11.2.0.4.5 Database PSU (OCT2015) –> Non RAC-Rolling Installable
Patch 19852360 – Oracle JavaVM Component 11.2.0.4.1 Database PSU – Generic JDBC Patch (OCT2014) –> RAC-Rolling Installable

Patch 19852360 is included as part of Patch 21555791 for the DATABASE.
The instructions for applying Patch 19852360 to GRID are not available from OPLAN but are available in the README.
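
One way to confirm what actually landed in a given home afterwards is opatch lsinventory; a minimal sketch, assuming ORACLE_HOME points at the home that was just patched (the PSU may show up under its sub-patch numbers rather than 21523375 itself, and the grep assumes the usual "PATCH SET UPDATE" wording in the bug descriptions):

$ $ORACLE_HOME/OPatch/opatch lsinventory
$ $ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'patch set update'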

Pet Peeve crs start/stop

Filed under: 11g,oracle,RAC — mdinh @ 2:22 pm

When stopping crs, there are 50+ lines of output; is all of the "Attempting" output really necessary?

Conversely, when starting crs, there is only one line of output, and we all know the process has not completed, since crsctl stat fails.

Wouldn't it be nice if crsctl start provided some useful information as well and indicated when all of the processes have started? A minimal polling workaround is sketched after the output below.

What am I missing?

[root@rac01:/root]
# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac01'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac01'
CRS-2673: Attempting to stop 'ora.emu.db' on 'rac01'
CRS-2673: Attempting to stop 'ora.dg_acfs.vg_acfs.acfs' on 'rac01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac01'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac01'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.rac01.vip' on 'rac01'
CRS-2677: Stop of 'ora.dg_acfs.vg_acfs.acfs' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.DG_ACFS.dg' on 'rac01'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac01' succeeded
CRS-2677: Stop of 'ora.emu.db' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.DATA2.dg' on 'rac01'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac02'
CRS-2677: Stop of 'ora.rac01.vip' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.rac01.vip' on 'rac02'
CRS-2677: Stop of 'ora.DG_ACFS.dg' on 'rac01' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'rac02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac02'
CRS-2676: Start of 'ora.rac01.vip' on 'rac02' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac02' succeeded
CRS-2677: Stop of 'ora.DATA2.dg' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac01'
CRS-2677: Stop of 'ora.asm' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac01'
CRS-2677: Stop of 'ora.ons' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac01'
CRS-2677: Stop of 'ora.net1.network' on 'rac01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac01' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac01'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac01'
CRS-2673: Attempting to stop 'ora.asm' on 'rac01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac01'
CRS-2677: Stop of 'ora.ctssd' on 'rac01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac01' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac01' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac01'
CRS-2677: Stop of 'ora.cssd' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac01'
CRS-2677: Stop of 'ora.crf' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac01'
CRS-2677: Stop of 'ora.gipcd' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac01'
CRS-2677: Stop of 'ora.gpnpd' on 'rac01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed
CRS-4133: Oracle High Availability Services has been stopped.

[root@rac01:/root]
# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@rac01:/root]
# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager

[root@rac01:/root]
# crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
[root@rac01:/root]
#
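
As a workaround for the silent start, a minimal polling loop can wait until crsctl check crs stops complaining. This is only a sketch (bash, run as root); it assumes the four standard "is online" messages that crsctl check crs prints once the stack is fully up:

crsctl start crs
for i in $(seq 1 60); do
  # wait until HAS, CRS, CSS and EVM all report "is online" (4 lines)
  [ "$(crsctl check crs | grep -c 'is online')" -eq 4 ] && break
  echo "waiting for clusterware stack ($i)..."
  sleep 10
done
crsctl check crs
crsctl stat res -t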

December 18, 2015

TFA installed as part of 21523375 (Oct2015 CPU)

Filed under: 11g,oracle,RAC — mdinh @ 7:01 pm

+++ Patch 21523375 – Oracle Grid Infrastructure Patch Set Update 11.2.0.4.8 (Oct2015) (Includes Database PSU 11.2.0.4.8)
+++ Case 1: GI Home and the Database Homes that are not shared and ACFS file system is not configured.

After the completion of Patch 21523375, TFA was installed.

If you need to shut down all of the processes running from the grid home, TFA will need to be stopped as well (# /etc/init.d/init.tfa stop), since crsctl stop crs does not stop TFA.
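
So a complete shutdown of the grid-owned processes is roughly the following (run as root; both commands appear in the session below):

# crsctl stop crs
# /etc/init.d/init.tfa stop
# ps -ef | egrep 'oracle|grid|agent'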

I ***incorrectly*** tweeted that start crs starts TFA.

TFA was started by the patch apply.

Interested to see what happens in the next patching cycle.

[root@rac02:/root]
# $ORACLE_HOME/OPatch/opatch auto /u01/app/grid/patches/21744348/21523375 -ocmrf /tmp/ocm.rsp
Executing /u01/app/11.2.0.4/grid/perl/bin/perl /u01/app/11.2.0.4/grid/OPatch/crs/patch11203.pl -patchdir /u01/app/grid/patches/21744348 -patchn 21523375 -ocmrf /tmp/ocm.rsp -paramfile /u01/app/11.2.0.4/grid/crs/install/crsconfig_params

This is the main log file: /u01/app/11.2.0.4/grid/cfgtoollogs/opatchauto2015-12-18_08-18-46.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/app/11.2.0.4/grid/cfgtoollogs/opatchauto2015-12-18_08-18-46.report.log

2015-12-18 08:18:46: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params

Stopping RAC /u01/app/oracle/product/11.2.0.4/db_1 ...
Stopped RAC /u01/app/oracle/product/11.2.0.4/db_1 successfully

patch /u01/app/grid/patches/21744348/21523375/21352635  apply successful for home  /u01/app/oracle/product/11.2.0.4/db_1
patch /u01/app/grid/patches/21744348/21523375/21352649/custom/server/21352649  apply successful for home  /u01/app/oracle/product/11.2.0.4/db_1

Stopping CRS...
Stopped CRS successfully

patch /u01/app/grid/patches/21744348/21523375/21352635  apply successful for home  /u01/app/11.2.0.4/grid
patch /u01/app/grid/patches/21744348/21523375/21352649  apply successful for home  /u01/app/11.2.0.4/grid
patch /u01/app/grid/patches/21744348/21523375/21352642  apply successful for home  /u01/app/11.2.0.4/grid

Starting CRS...
Installing Trace File Analyzer
CRS-4123: Oracle High Availability Services has been started.

Starting RAC /u01/app/oracle/product/11.2.0.4/db_1 ...
Started RAC /u01/app/oracle/product/11.2.0.4/db_1 successfully

opatch auto succeeded.
[root@rac02:/root]
#

[root@rac02:/root]
# ps -ef|grep tfa
root     11017     1  0 08:28 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run
root     12980 12756  0 10:35 pts/0    00:00:00 grep tfa
root     26988     1  1 08:31 ?        00:01:33 /u01/app/11.2.0.4/grid/jdk/jre/bin/java -Xms128m -Xmx512m -classpath /u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/RATFA.jar:/u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/je-5.0.84.jar:/u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/ojdbc5.jar:/u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/commons-io-2.1.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0.4/grid/tfa/rac02/tfa_home

[root@rac02:/root]
# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac02'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac02'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac02' has completed
CRS-4133: Oracle High Availability Services has been stopped.

[root@rac02:/root]
# ps -ef|egrep 'oracle|grid|agent'
gdm       2968  2926  0 06:11 ?        00:00:00 /usr/libexec/polkit-gnome-authentication-agent-1
root     14811 12756  0 10:39 pts/0    00:00:00 egrep oracle|grid|agent
root     26988     1  1 08:31 ?        00:01:36 /u01/app/11.2.0.4/grid/jdk/jre/bin/java -Xms128m -Xmx512m -classpath /u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/RATFA.jar:/u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/je-5.0.84.jar:/u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/ojdbc5.jar:/u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/commons-io-2.1.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0.4/grid/tfa/rac02/tfa_home

TFA needs to be stopped manually

[root@rac02:/root]
# /etc/init.d/init.tfa stop
Stopping TFA
TFA-00002 : Oracle Trace File Analyzer (TFA) is not running
TFAmain Force Stopped Successfully
Killing TFA running with pid 26988
. . .
Successfully stopped TFA..
[root@rac02:/root]
# ps -ef|egrep 'oracle|grid|agent'
gdm       2968  2926  0 06:11 ?        00:00:00 /usr/libexec/polkit-gnome-authentication-agent-1
root     15141 12756  0 10:40 pts/0    00:00:00 egrep oracle|grid|agent
[root@rac02:/root]
# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@rac02:/root]
# ps -ef|grep -i tfa
root     11017     1  0 08:28 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run
root     16558 12756  0 10:42 pts/0    00:00:00 grep -i tfa

TFA needs to be started manually

[root@rac02:/root]
# /etc/init.d/init.tfa start
Starting TFA..
start: Job is already running: oracle-tfa
Waiting up to 100 seconds for TFA to be started..
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands
[root@rac02:/root]
# ps -ef|grep -i tfa
root     11017     1  0 08:28 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run
root     16739     1 99 10:43 ?        00:00:13 /u01/app/11.2.0.4/grid/jdk/jre/bin/java -Xms128m -Xmx512m -classpath /u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/RATFA.jar:/u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/je-5.0.84.jar:/u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/ojdbc5.jar:/u01/app/11.2.0.4/grid/tfa/rac02/tfa_home/jlib/commons-io-2.1.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0.4/grid/tfa/rac02/tfa_home
root     16938 12756  0 10:43 pts/0    00:00:00 grep -i tfa
[root@rac02:/root]

TFA does not have a status command, so it needs to be checked using ps -ef (a small helper is sketched after the usage output below).

[root@rac02:/root]
# /etc/init.d/init.tfa
Usage: /etc/init.d/init.tfa {stop|start|shutdown|restart}
[root@rac02:/root]
#
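
A tiny helper to fill that gap, as a sketch (the process name oracle.rat.tfa.TFAMain matches the ps output shown earlier; the function name tfa_status is mine):

tfa_status() {
  # the [T] keeps grep from matching its own process entry
  if ps -ef | grep '[T]FAMain' > /dev/null; then
    echo "TFA is running"
  else
    echo "TFA is not running"
  fi
}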

UPDATE: Dec 19 2015

[grid@rac01:+ASM1:/home/grid]
$ echo $ORACLE_BASE
/u01/app/grid
[grid@rac01:+ASM1:/home/grid]
$ cd $ORACLE_BASE

The TFA directory sits beneath the grid $ORACLE_BASE, but it is owned by root:

[grid@rac01:+ASM1:/u01/app/grid]
$ ll
total 36
drwxrwxr-x. 4 grid   oinstall 4096 Nov 30  2014 cfgtoollogs
drwxrwxr-x. 2 grid   oinstall 4096 Nov 30  2014 checkpoints
drwxrwxr-x. 2 grid   oinstall 4096 Nov 30  2014 Clusterware
drwxrwxr-x. 4 grid   oinstall 4096 Nov 30  2014 diag
drwxrwxr-x. 3 oracle oinstall 4096 Dec  4  2014 oradiag_oracle
drwxrwxr-x. 3 root   root     4096 Dec  1  2014 oradiag_root
drwxrwxr-x. 3 grid   oinstall 4096 Nov 30  2014 rac01
drwxr-x--x. 4 root   root     4096 Nov 30  2014 tfa
drwxrwxr-x. 9 grid   oinstall 4096 Dec 18  2014 xag
[grid@rac01:+ASM1:/u01/app/grid]
$ cd tfa/
[grid@rac01:+ASM1:/u01/app/grid/tfa]
$ ll
ls: cannot open directory .: Permission denied
[grid@rac01:+ASM1:/u01/app/grid/tfa]