Thinking Out Loud

August 6, 2019

19c Grid Dry-Run Upgrade

Filed under: 19c,awk_sed_grep,Grid Infrastructure,upgrade — mdinh @ 12:42 pm

First, a test using the GUI:

[oracle@racnode-dc2-1 grid]$ /u01/app/19.3.0.0/grid/gridSetup.sh -dryRunForUpgrade
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_00-20-31AM/gridSetupActions2019-08-06_00-20-31AM.log
[oracle@racnode-dc2-1 grid]$

Create dryRunForUpgradegrid.rsp from grid_2019-08-06_00-20-31AM.rsp (generated by the GUI test above):

[oracle@racnode-dc2-1 grid]$ grep -v "^#" /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp | grep -v "=$" | awk 'NF' > /home/oracle/dryRunForUpgradegrid.rsp
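
For reference, here is a commented version of the same pipeline (a sketch using the paths from above):

grep -v "^#" /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp | # drop full-line comments
  grep -v "=$" |                                                                      # drop parameters left empty (lines ending in "=")
  awk 'NF' > /home/oracle/dryRunForUpgradegrid.rsp                                    # drop blank lines and write the trimmed response file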

[oracle@racnode-dc2-1 ~]$ cat /home/oracle/dryRunForUpgradegrid.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=vbox-rac-dc2
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=racnode-dc2-1:,racnode-dc2-2:
oracle.install.crs.configureGIMR=true
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=CRS
oracle.install.asm.diskGroup.AUSize=0
oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=false
[oracle@racnode-dc2-1 ~]$

Create the grid home directory on all nodes:

[root@racnode-dc2-1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54318(asmdba),54322(dba),54323(backupdba),54324(oper),54325(dgdba),54326(kmdba)

[root@racnode-dc2-1 ~]# mkdir -p /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chown oracle:oinstall /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chmod 775 /u01/app/19.3.0.0/grid

[root@racnode-dc2-1 ~]# ll /u01/app/19.3.0.0/
total 4
drwxrwxr-x 2 oracle oinstall 4096 Aug  6 02:07 grid
[root@racnode-dc2-1 ~]#

Extract the grid software on node1 ONLY:

[oracle@racnode-dc2-1 ~]$ unzip -qo /media/swrepo/LINUX.X64_193000_grid_home.zip -d /u01/app/19.3.0.0/grid/

[oracle@racnode-dc2-1 ~]$ ls /u01/app/19.3.0.0/grid/
addnode     clone  dbjava     diagnostics  gpnp          install        jdbc  lib      OPatch   ords  perl     qos       rhp            rootupgrade.sh  sqlpatch  tomcat  welcome.html  xdk
assistants  crs    dbs        dmu          gridSetup.sh  instantclient  jdk   md       opmn     oss   plsql    racg      root.sh        runcluvfy.sh    sqlplus   ucp     wlm
bin         css    deinstall  env.ora      has           inventory      jlib  network  oracore  oui   precomp  rdbms     root.sh.old    sdk             srvm      usm     wwg
cha         cv     demo       evm          hs            javavm         ldap  nls      ord      owm   QOpatch  relnotes  root.sh.old.1  slax            suptools  utl     xag

[oracle@racnode-dc2-1 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.0G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-1 ~]$

Run gridSetup.sh -silent -dryRunForUpgrade:

[oracle@racnode-dc2-1 ~]$ env|grep -i ora
USER=oracle
MAIL=/var/spool/mail/oracle
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin
PWD=/home/oracle
HOME=/home/oracle
LOGNAME=oracle

[oracle@racnode-dc2-1 ~]$ date
Tue Aug  6 02:35:47 CEST 2019

[oracle@racnode-dc2-1 ~]$ /u01/app/19.3.0.0/grid/gridSetup.sh -silent -dryRunForUpgrade -responseFile /home/oracle/dryRunForUpgradegrid.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_02-35-52AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log


As a root user, execute the following script(s):
        1. /u01/app/19.3.0.0/grid/rootupgrade.sh

Execute /u01/app/19.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc2-1]

Run the script on the local node.

Successfully Setup Software with warning(s).
[oracle@racnode-dc2-1 ~]$
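
Before deciding whether [INS-13014] matters, the failed prerequisite checks can be pulled straight out of the setup log referenced above. A minimal sketch (the grep patterns are an assumption about how check results appear in the log, not a documented interface):

export LOG=/u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log
grep -iE "PRVF-|PRVG-|FAILED" $LOG | sort -u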

Run rootupgrade.sh on node1 ONLY and review the log:

[root@racnode-dc2-1 ~]# /u01/app/19.3.0.0/grid/rootupgrade.sh
Check /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log for the output of root script

[root@racnode-dc2-1 ~]# cat /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.3.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Performing Dry run of the Grid Infrastructure upgrade.
Using configuration parameter file: /u01/app/19.3.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/racnode-dc2-1/crsconfig/rootcrs_racnode-dc2-1_2019-08-06_02-45-31AM.log
2019/08/06 02:45:44 CLSRSC-464: Starting retrieval of the cluster configuration data
2019/08/06 02:45:52 CLSRSC-729: Checking whether CRS entities are ready for upgrade, cluster upgrade will not be attempted now. This operation may take a few minutes.
2019/08/06 02:47:56 CLSRSC-693: CRS entities validation completed successfully.
[root@racnode-dc2-1 ~]#

Check the grid home on node2:

[oracle@racnode-dc2-2 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.6G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-2 ~]$

Check oraInventory for ALL nodes:

[oracle@racnode-dc2-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.7.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.2.0.1/grid" TYPE="O" IDX="1" CRS="true"/>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.2.0.1/db1" TYPE="O" IDX="2"/>
==========================================================================================
<HOME NAME="OraGI19Home1" LOC="/u01/app/19.3.0.0/grid" TYPE="O" IDX="3"/>
==========================================================================================
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc2-2 ~]$

Check crs activeversion: 12.2.0.1.0

[oracle@racnode-dc2-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc2-1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [927320293].
[oracle@racnode-dc2-1 ~]$

Check log location:

[oracle@racnode-dc2-1 ~]$ cd /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/

[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$ ls -alrt
total 17420
-rw-r-----  1 oracle oinstall     129 Aug  6 02:35 installerPatchActions_2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall       0 Aug  6 02:35 gridSetupActions2019-08-06_02-35-52AM.err
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:35 temp_ob
-rw-r-----  1 oracle oinstall       0 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.err
drwxrwx--- 17 oracle oinstall    4096 Aug  6 02:39 ..
-rw-r-----  1 oracle oinstall     157 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall       0 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.err.racnode-dc2-2
-rw-r-----  1 oracle oinstall     142 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.out.racnode-dc2-2
-rw-r-----  1 oracle oinstall 9341920 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall   13419 Aug  6 02:43 time2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall 8443087 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.log
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:56 .
[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$

After the dryRunForUpgrade, detach the 19.3.0.0 grid home and remove its directory (19.3.0.0/grid) from all nodes.

export ORACLE_HOME=/u01/app/19.3.0.0/grid
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$ORACLE_HOME
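
After the detach, the directory itself can be removed on each node (a sketch; it assumes root access on both nodes, since the parent directory is root-owned):

# run as root on racnode-dc2-1 and racnode-dc2-2 after the detach
rm -rf /u01/app/19.3.0.0/grid
ls -ld /u01/app/19.3.0.0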

May 7, 2019

Remove GRID Home After Upgrade

Filed under: 12c,Grid Infrastructure,RAC — mdinh @ 9:53 pm

The environment started with a GRID 12.1.0.1 installation, was upgraded to 18.3.0.0, and was then patched out-of-place (OOP) to 18.6.0.0.

As a result, there are three GRID homes, and the 12.1.0.1 home is the one to remove.

This demonstration covers the last node of the cluster; however, the actions performed are the same on all nodes.

Review the existing patches for the Grid and Database homes:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/lspatches.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$

Notice that the GRID home is /u01/18.3.0.0/grid_2 because that was the path suggested by the OOP process.
Based on experience, it might be better to name the GRID home after the actual version, i.e. /u01/18.6.0.0/grid.

Verify cluster state is [NORMAL]:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/crs_Query.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2056778364].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2056778364] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 29301631 29301643 29302264 ] have been applied on the local node. The release patch string is [18.6.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2056778364].
+ exit
[oracle@racnode-dc1-1 ~]$

Check Oracle Inventory:

[oracle@racnode-dc1-2 ~]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>

### GRID home (/u01/app/12.1.0.1/grid) to be removed.
========================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
========================================================================================

<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove the GRID home (/u01/app/12.1.0.1/grid) from the inventory. Use the -local flag to avoid any bug issues.

[oracle@racnode-dc1-2 ~]$ export ORACLE_HOME=/u01/app/12.1.0.1/grid
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16040 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
[oracle@racnode-dc1-2 ~]$

Verify the GRID home was removed from the inventory:

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>

### GRID home (/u01/app/12.1.0.1/grid) removed.
================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1" REMOVED="T"/>
================================================================================

</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove the 12.1.0.1 directory:

[oracle@racnode-dc1-2 ~]$ sudo su -
Last login: Thu May  2 23:38:22 CEST 2019
[root@racnode-dc1-2 ~]# cd /u01/app/
[root@racnode-dc1-2 app]# ll
total 12
drwxr-xr-x  3 root   oinstall 4096 Apr 17 23:36 12.1.0.1
drwxrwxr-x 12 oracle oinstall 4096 Apr 30 18:05 oracle
drwxrwx---  5 oracle oinstall 4096 May  2 23:54 oraInventory
[root@racnode-dc1-2 app]# rm -rf 12.1.0.1/
[root@racnode-dc1-2 app]#

Check the cluster:

[root@racnode-dc1-2 app]# logout
[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racnode-dc1-2 ~]$

Later, /u01/18.3.0.0/grid will be removed, too, if there are no issues with the most recent patch.

May 5, 2019

What’s My Cluster Configuration

Filed under: 18c,Grid Infrastructure,RAC — mdinh @ 2:15 pm
[grid@ol7-183-node1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[grid@ol7-183-node1 ~]$ crsctl get cluster configuration
Name                : ol7-183-cluster
Configuration       : Cluster
Class               : Standalone Cluster
Type                : flex
The cluster is not extended.
--------------------------------------------------------------------------------
        MEMBER CLUSTER INFORMATION

      Name       Version        GUID                       Deployed Deconfigured
================================================================================
================================================================================

[grid@ol7-183-node1 ~]$ olsnodes -s -a -t
ol7-183-node1   Active  Hub     Unpinned
ol7-183-node2   Active  Hub     Unpinned

[grid@ol7-183-node1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [70732493] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28090564 28256701 ] have been applied on the local node. The release patch string is [18.3.0.0.0].

[grid@ol7-183-node1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [70732493].
[grid@ol7-183-node1 ~]$

May 3, 2019

GRID Out Of Place (OOP) Rollback Disaster

Filed under: 18c,Grid Infrastructure,RAC — mdinh @ 4:45 pm

Now I understand the hesitation to use new Oracle features, especially anything automatic.

It may just be simpler and less stressful to perform the task manually, keeping control and knowing exactly what is being executed and validated.

GRID Out Of Place (OOP) patching completed successfully for 18.6.0.0.0.

GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1

Here is an example of the inventory after patching.

+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

Running cluvfy was successful too.

[oracle@racnode-dc1-1 ~]$ cluvfy stage -post crsinst -n racnode-dc1-1,racnode-dc1-2 -verbose

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         Apr 30, 2019 8:17:49 PM
CVU home:                     /u01/18.3.0.0/grid_2/
User:                         oracle
[oracle@racnode-dc1-1 ~]$

GRID OOP rollback patching completed successfully on node1.

[root@racnode-dc1-1 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode-dc1-1 ~]#
[root@racnode-dc1-1 ~]# echo $GRID_HOME
/u01/18.3.0.0/grid_2
[root@racnode-dc1-1 ~]# $GRID_HOME/OPatch/opatchauto rollback -switch-clone -logLevel FINEST

OPatchauto session is initiated at Fri May  3 01:06:47 2019

System initialization log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchautodb/systemconfig2019-05-03_01-06-50AM.log.

Session log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/opatchauto2019-05-03_01-08-00AM.log
The id for this session is R47N

Update nodelist in the inventory for oracle home /u01/18.3.0.0/grid.
Update nodelist in the inventory is completed for oracle home /u01/18.3.0.0/grid.


Bringing down CRS service on home /u01/18.3.0.0/grid
CRS service brought down successfully on home /u01/18.3.0.0/grid


Starting CRS service on home /u01/18.3.0.0/grid
CRS service started successfully on home /u01/18.3.0.0/grid


Confirm that all resources have been started from home /u01/18.3.0.0/grid.
All resources have been started successfully from home /u01/18.3.0.0/grid.


OPatchAuto successful.

--------------------------------Summary--------------------------------
Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-1
Actual Home : /u01/18.3.0.0/grid_2
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1

OPatchauto session completed at Fri May  3 01:14:25 2019
Time taken to complete the session 7 minutes, 38 seconds

[root@racnode-dc1-1 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[root@racnode-dc1-1 ~]# /media/patch/findhomes.sh
   PID NAME                 ORACLE_HOME
 10486 asm_pmon_+asm1       /u01/18.3.0.0/grid/
 10833 apx_pmon_+apx1       /u01/18.3.0.0/grid/

[root@racnode-dc1-1 ~]# cat /etc/oratab
#Backup file is  /u01/app/oracle/12.1.0.1/db1/srvm/admin/oratab.bak.racnode-dc1-1 line added by Agent
#+ASM1:/u01/18.3.0.0/grid:N
hawk1:/u01/app/oracle/12.1.0.1/db1:N
hawk:/u01/app/oracle/12.1.0.1/db1:N             # line added by Agent
[root@racnode-dc1-1 ~]#

GRID OOP rollback patching completed successfully on node2.

[root@racnode-dc1-2 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode-dc1-2 ~]#
[root@racnode-dc1-2 ~]# echo $GRID_HOME
/u01/18.3.0.0/grid_2
[root@racnode-dc1-2 ~]# $GRID_HOME/OPatch/opatchauto rollback -switch-clone -logLevel FINEST

OPatchauto session is initiated at Fri May  3 01:21:39 2019

System initialization log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchautodb/systemconfig2019-05-03_01-21-41AM.log.

Session log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/opatchauto2019-05-03_01-22-46AM.log
The id for this session is 9RAT

Update nodelist in the inventory for oracle home /u01/18.3.0.0/grid.
Update nodelist in the inventory is completed for oracle home /u01/18.3.0.0/grid.


Bringing down CRS service on home /u01/18.3.0.0/grid
CRS service brought down successfully on home /u01/18.3.0.0/grid


Starting CRS service on home /u01/18.3.0.0/grid
CRS service started successfully on home /u01/18.3.0.0/grid


Confirm that all resources have been started from home /u01/18.3.0.0/grid.
All resources have been started successfully from home /u01/18.3.0.0/grid.


OPatchAuto successful.

--------------------------------Summary--------------------------------
Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-2
Actual Home : /u01/18.3.0.0/grid_2
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1


OPatchauto session completed at Fri May  3 01:40:51 2019
Time taken to complete the session 19 minutes, 12 seconds
[root@racnode-dc1-2 ~]#

GRID OOP rollback completed successfully, back to 18.5.0.0.0.

GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1

Here is an example of the inventory after rollback.

+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

Validation shows the database is OFFLINE:

+ crsctl stat res -w '((TARGET != ONLINE) or (STATE != ONLINE)' -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.GHCHKPT.advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.helper
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            IDLE,STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE                               Instance Shutdown,STABLE
      2        ONLINE  OFFLINE                               Instance Shutdown,STABLE

Starting the database FAILED:

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance not alive for sid "hawk2"

[oracle@racnode-dc1-2 ~]$ srvctl status database -d $ORACLE_UNQNAME -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is not running on node racnode-dc1-2

[oracle@racnode-dc1-2 ~]$ srvctl start database -d $ORACLE_UNQNAME
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
[oracle@racnode-dc1-2 ~]$


[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance not alive for sid "hawk1"

[oracle@racnode-dc1-1 ~]$ srvctl start database -d hawk
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy
CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
[oracle@racnode-dc1-1 ~]$

Incorrect permissions on the oracle binary in the GRID home were the cause.
Changing the permissions on $GRID_HOME/bin/oracle (chmod 6751 $GRID_HOME/bin/oracle), then stopping and starting CRS, resolved the failure.

[oracle@racnode-dc1-1 dbs]$ ls -lhrt $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle dba 314M Apr 20 16:06 /u01/app/oracle/12.1.0.1/db1/bin/oracle

[oracle@racnode-dc1-1 dbs]$ ls -lhrt /u01/18.3.0.0/grid/bin/oracle
-rwxr-x--x 1 oracle oinstall 396M Apr 20 19:21 /u01/18.3.0.0/grid/bin/oracle

[oracle@racnode-dc1-1 dbs]$ cd /u01/18.3.0.0/grid/bin/
[oracle@racnode-dc1-1 bin]$ chmod 6751 oracle
[oracle@racnode-dc1-1 bin]$ ls -lhrt /u01/18.3.0.0/grid/bin/oracle
-rwsr-s--x 1 oracle oinstall 396M Apr 20 19:21 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-1 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
[root@racnode-dc1-1 ~]# crsctl stop crs

====================================================================================================

[root@racnode-dc1-2 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM2"

[root@racnode-dc1-2 ~]# ls -lhrt $GRID_HOME/bin/oracle
-rwxr-x--x 1 oracle oinstall 396M Apr 21 01:44 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-2 ~]# chmod 6751 $GRID_HOME/bin/oracle
[root@racnode-dc1-2 ~]# ls -lhrt $GRID_HOME/bin/oracle
-rwsr-s--x 1 oracle oinstall 396M Apr 21 01:44 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-2 ~]# crsctl stop crs

====================================================================================================

[root@racnode-dc1-2 ~]# crsctl start crs
[root@racnode-dc1-1 ~]# crsctl start crs

Reference: RAC Database Can’t Start: ORA-01565, ORA-17503: ksfdopn:10 Failed to open file +DATA/BPBL/spfileBPBL.ora (Doc ID 2316088.1)

February 23, 2019

Sed’ing Through ora.cvu Hell

Filed under: 12c,awk_sed_grep,Grid Infrastructure — mdinh @ 12:02 pm

Don’t know why I always look for trouble.

The trouble found was that CHECK_RESULTS from ora.cvu.type contained many issues which look to be related to BUGS.

Here is the RAC environment from the VM.

[oracle@racnode-dc1-1 ~]$ cat /etc/system-release
Oracle Linux Server release 7.3
[oracle@racnode-dc1-1 ~]$

[oracle@racnode-dc1-1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [0] and no patches have been applied on the local node.

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [0].

[oracle@racnode-dc1-1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
racnode-dc1-2
racnode-dc1-1
PRVF-5415 : Check to see if NTP daemon or service is running failed
PRVF-7573 : Sufficient swap size is not available on node "racnode-dc1-2" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB (2097148.0KB)]
PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm
PRCW-1015 : Wallet hawk does not exist.
CLSW-9: The cluster wallet to be operated on does not exist. :[1015]
PRVF-7573 : Sufficient swap size is not available on node "racnode-dc1-1" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB (2097148.0KB)]
PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm
PRCW-1015 : Wallet hawk does not exist.
CLSW-9: The cluster wallet to be operated on does not exist. :[1015]
[oracle@racnode-dc1-1 ~]$
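
When the goal is just a count of the distinct error codes rather than the full messages, the same output can be reduced a bit further (a sketch; the error-code prefixes are taken from the messages above):

crsctl stat res -w "TYPE = ora.cvu.type" -p | grep RESULTS | sed 's/,/\n/g' |
  grep -oE '(PRVF|PRVE|PRCW|CLSW)-[0-9]+' | sort | uniq -c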

BUGS?

Linux OL7/RHEL7: PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm (Doc ID 2065603.1)

Bug 24696235 – cvu check results shows errors PRCW-1015 and CLSW-9 (Doc ID 24696235.8)

[root@racnode-dc1-1 ~]# ocrdump
[root@racnode-dc1-1 ~]# cat OCRDUMPFILE |grep -i SYSTEM.WALLET
[SYSTEM.WALLET]
[SYSTEM.WALLET.APPQOSADMIN]
[SYSTEM.WALLET.MGMTDB]
[root@racnode-dc1-1 ~]#

There is indeed no wallet for database hawk. But if the wallet is created, will it only result in another bug?

cluvfy:PRCQ-1000 : An error occurred while establishing connection to database with user name “DBSNMP” (Doc ID 2288958.1)

PRCQ-1000 : An error occurred while establishing connection to database with user name "DBSNMP" and connect descriptor:
ORA-01017: invalid username/password; logon denied

Cluster Verification Utility (CVU) Check Fails With NTP Configuration (Doc ID 2162408.1)

Some Good References:

Slimming Down Oracle RAC 12c’s Resource Footprint

Oracle Grid Infrastructure: change the interval for the Cluster Verification Utility (ora.cvu)

Small Notes on Clusterware resource ora.cvu

July 22, 2018

Cluster Resource To Check When Patching RAC DBFS OGG

Filed under: GoldenGate,Grid Infrastructure,RAC — mdinh @ 2:41 pm

crsctl stat res|grep -i type|sort -u

TYPE=app.appvipx.type
TYPE=local_resource
TYPE=ora.asm.type
TYPE=ora.cluster_vip_net1.type
TYPE=ora.cvu.type
TYPE=ora.database.type
TYPE=ora.diskgroup.type
TYPE=ora.listener.type
TYPE=ora.mgmtdb.type
TYPE=ora.mgmtlsnr.type
TYPE=ora.network.type
TYPE=ora.oc4j.type
TYPE=ora.ons.type
TYPE=ora.scan_listener.type
TYPE=ora.scan_vip.type
TYPE=xag.goldengate.type


crsctl stat res -p -w 'TYPE = ora.database.type' | egrep '^NAME|AUTO_START'

crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'

crsctl stat res -t -w 'TYPE = xag.goldengate.type' -- OGG Resource
crsctl stat res -t -w 'TYPE = app.appvipx.type'    -- OGG VIP
crsctl stat res -t -w 'TYPE = local_resource'      -- DBFS Mount
crsctl stat res -t -w 'TYPE = ora.database.type'   -- DB resource (including DBFS)

You might ask, why not use crsctl stat res -t?

For this specific environment, there are 190 lines of output, and I needed to focus on what's important.
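
One way to put those filters to work during a patching window is to snapshot the interesting resource types before and after, then diff the two captures (a sketch; the type names are the ones listed above, and the /tmp paths are arbitrary):

for t in xag.goldengate.type app.appvipx.type local_resource ora.database.type; do
  crsctl stat res -t -w "TYPE = $t"
done > /tmp/crs_state_before.txt

# ... apply patches ...

for t in xag.goldengate.type app.appvipx.type local_resource ora.database.type; do
  crsctl stat res -t -w "TYPE = $t"
done > /tmp/crs_state_after.txt

diff /tmp/crs_state_before.txt /tmp/crs_state_after.txt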

July 20, 2018

Patching GoldenGate with DBFS

Filed under: GoldenGate,Grid Infrastructure,RAC — mdinh @ 11:41 pm

There seems to be no consistency as to which directories should be on DBFS when GoldenGate is implemented with RAC.

Here I will share my thoughts based on issues encountered.

oracle@test1:/opt/oracle/12.2.0/ggs01$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.170221 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_170123.1033_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Jan 23 2017 21:54:15
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.



GGSCI (test1) 1> create subdirs

Creating subdirectories under current directory /oracle/12.2.0/ggs01

Parameter files                /oracle/12.2.0/ggs01/dirprm: created
Report files                   /oracle/12.2.0/ggs01/dirrpt: created
Checkpoint files               /oracle/12.2.0/ggs01/dirchk: created
Process status files           /oracle/12.2.0/ggs01/dirpcs: created
SQL script files               /oracle/12.2.0/ggs01/dirsql: created
Database definitions files     /oracle/12.2.0/ggs01/dirdef: created
Extract data files             /oracle/12.2.0/ggs01/dirdat: created
Temporary files                /oracle/12.2.0/ggs01/dirtmp: created
Credential store files         /oracle/12.2.0/ggs01/dircrd: created
Masterkey wallet files         /oracle/12.2.0/ggs01/dirwlt: created
Dump files                     /oracle/12.2.0/ggs01/dirdmp: created


GGSCI (test1) 2> 


$ ls -ld dir*
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirchk -> /dbfs_client/ggs01/dirchk
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dircrd -> /dbfs_client/ggs01/dircrd
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirdat -> /dbfs_client/ggs01/dirdat
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirdef -> /dbfs_client/ggs01/dirdef
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirdmp -> /dbfs_client/ggs01/dirdmp
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirout -> /dbfs_client/ggs01/dirout
drwxr-x--- 2 ggsuser oinstall 4096 Mar 20  2017 dirpcs
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirprm -> /dbfs_client/ggs01/dirprm
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirrpt -> /dbfs_client/ggs01/dirrpt
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirsql -> /dbfs_client/ggs01/dirsql

GoldenGate maintains data that it swaps to disk in dirtmp.
With all the issues encountered with DBFS, dirtmp might be better kept on local storage (see the sketch after this listing).
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirtmp -> /dbfs_client/ggs01/dirtmp

lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirwlt -> /dbfs_client/ggs01/dirwlt
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 dirwww -> /dbfs_client/ggs01/dirwww
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 20  2017 BR -> /dbfs_client/ggs01/BR
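
As mentioned above for dirtmp, a directory can be moved off DBFS and back to local disk fairly simply (a sketch; it assumes all GoldenGate processes are stopped and uses the paths from the listing above):

# run as the GoldenGate OS user with all GoldenGate processes stopped
cd /oracle/12.2.0/ggs01
rm dirtmp                                               # removes only the symlink, not the DBFS contents
mkdir dirtmp                                            # recreate as a real local directory
cp -p /dbfs_client/ggs01/dirtmp/* dirtmp/ 2>/dev/null   # copy over anything already written there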

Here are the errors encountered when applying the GoldenGate patchset.

The errors were due to the stack being down after running opatchauto apply -norestart, which leaves DBFS offline for the instance.

These errors can be avoided if the directories are local, as they should be.

The following actions have failed:
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirout
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/image
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/schema
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style
CopyAction::apply(): cannot mkdirs on parent directory /oracle/12.2.0/ggs01/dirwww/style

Use an Oracle RAC database as a baseline: are alert logs, trace files, etc. kept on a shared volume when the Oracle software is installed locally?

September 23, 2017

Grid Infrastructure 12.2.0.1 CRS_SWONLY Silent Install

Filed under: 12.2,Grid Infrastructure,oracle — mdinh @ 11:00 pm

Configuring Software Binaries for Oracle Grid Infrastructure for a Standalone Server

CREATE DIRECTORIES AND oraInst.loc:

[root@arrow1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),994(vboxsf)

[root@arrow1 ~]# mkdir /u01
[root@arrow1 ~]# chmod 775 /u01/
[root@arrow1 ~]# chown oracle:oinstall /u01/

[root@arrow1 ~]# vi /etc/oraInst.loc
[root@arrow1 ~]# chown oracle:oinstall /etc/oraInst.loc

[root@arrow1 ~]# cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

[root@arrow1 ~]# ll /etc/oraInst.loc
-rw-r--r-- 1 oracle oinstall 56 Sep 23 17:11 /etc/oraInst.loc
[root@arrow1 ~]# exit

[oracle@arrow1 ~]$ mkdir -p /u01/app/12.2.0.1/grid
[oracle@arrow1 ~]$ mkdir -p /u01/app/grid
[oracle@arrow1 ~]$ mkdir -p /u01/app/oraInventory

EXTRACT SOFTWARE DIRECTLY TO GRID HOME:

[oracle@arrow1 ~]$ ls -l /sf_OracleSoftware/12.2.0.1_Linux_64/
total 6297251
-rwxrwxrwx 1 vagrant vagrant 3453696911 Apr 12 09:44 linuxx64_12201_database.zip
-rwxrwxrwx 1 vagrant vagrant 2994687209 Apr 12 09:43 linuxx64_12201_grid_home.zip

[oracle@arrow1 ~]$ unzip -d /u01/app/12.2.0.1/grid -qo /sf_OracleSoftware/12.2.0.1_Linux_64/linuxx64_12201_grid_home.zip; echo $?
0

[oracle@arrow1 ~]$ ls /u01/app/12.2.0.1/grid
addnode     crs     deinstall    gpnp           inventory  lib      opmn     oui      qos       rootupgrade.sh  srvm      welcome.html
assistants  css     demo         gridSetup.sh   javavm     log      oracore  owm      racg      runcluvfy.sh    suptools  wlm
bin         cv      diagnostics  has            jdbc       md       ord      perl     rdbms     scheduler       tomcat    wwg
cdata       dbjava  dmu          hs             jdk        network  ordim    plsql    relnotes  slax            ucp       xag
cha         dbs     env.ora      install        jlib       nls      ords     precomp  rhp       sqlpatch        usm       xdk
clone       dc_ocm  evm          instantclient  ldap       OPatch   oss      QOpatch  root.sh   sqlplus         utl

[oracle@arrow1 ~]$ ls /u01/app/12.2.0.1/grid/install/response/
grid_2017-01-26_04-10-28PM.rsp  gridsetup.rsp  sample.ccf
[oracle@arrow1 ~]$

CREATE RESPONSE FILE FOR CRS_SWONLY – WITHOUT ASM:

CRS_SWONLY : registers a software-only Grid Infrastructure home (it can be configured for a cluster or a standalone server later).

oracle@arrow1::/u01/app/12.2.0.1/grid/install/response
$ cp -v gridsetup.rsp grid_crs_swonly.rsp
‘gridsetup.rsp’ -> ‘grid_crs_swonly.rsp’

oracle@arrow1::/u01/app/12.2.0.1/grid/install/response
$ diff -iwyB --suppress-common-lines -W 150 gridsetup.rsp grid_crs_swonly.rsp; echo
oracle.install.option=                                                    |     oracle.install.option=CRS_SWONLY
ORACLE_BASE=                                                              |     ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=                                                 |     oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=                                                |     oracle.install.asm.OSOPER=dba
oracle.install.asm.OSASM=                                                 |     oracle.install.asm.OSASM=dba
oracle@arrow1::/u01/app/12.2.0.1/grid/install/response
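
The same five changes shown in the diff can also be applied non-interactively, which helps when scripting the software-only install (a sketch; it simply sets the values the diff shows as changed):

cd /u01/app/12.2.0.1/grid/install/response
cp -v gridsetup.rsp grid_crs_swonly.rsp
sed -i \
  -e 's|^oracle.install.option=.*|oracle.install.option=CRS_SWONLY|' \
  -e 's|^ORACLE_BASE=.*|ORACLE_BASE=/u01/app/grid|' \
  -e 's|^oracle.install.asm.OSDBA=.*|oracle.install.asm.OSDBA=dba|' \
  -e 's|^oracle.install.asm.OSOPER=.*|oracle.install.asm.OSOPER=dba|' \
  -e 's|^oracle.install.asm.OSASM=.*|oracle.install.asm.OSASM=dba|' \
  grid_crs_swonly.rsp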

RUNCLUVFY FOR HACFG:

oracle@arrow1::/u01/app/12.2.0.1/grid
$ ./runcluvfy.sh stage -pre hacfg

Verifying Physical Memory ...FAILED (PRVF-7530)
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: arrow1:/usr,arrow1:/var,arrow1:/etc,arrow1:/sbin,arrow1:/tmp ...PASSED
Verifying User Existence: oracle ...
  Verifying Users With Same UID: 54321 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying Group Existence: dba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Group Membership: dba ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED

Pre-check for Oracle Restart configuration was unsuccessful.


Failures were encountered during execution of CVU verification request "stage -pre hacfg".

Verifying Physical Memory ...FAILED
arrow1: PRVF-7530 : Sufficient physical memory is not available on node
        "arrow1" [Required physical memory = 8GB (8388608.0KB)]


CVU operation performed:      stage -pre hacfg
Date:                         Sep 23, 2017 5:22:21 PM
CVU home:                     /u01/app/12.2.0.1/grid/
User:                         oracle
oracle@arrow1::/u01/app/12.2.0.1/grid
$

INSTALL GRID USING gridSetup.sh

oracle@arrow1::/u01/app/12.2.0.1/grid
$ ./gridSetup.sh -silent -waitforcompletion -skipPrereqs -responseFile /u01/app/12.2.0.1/grid/install/response/grid_crs_swonly.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-41808] Possible invalid choice for OSASM Group.
   CAUSE: The name of the group you selected for the OSASM group is commonly used to grant other system privileges (For example: asmdba, asmoper, dba, oper).
   ACTION: Oracle recommends that you designate asmadmin as the OSASM group.
[WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
   CAUSE: The group name you selected as the OSDBA for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmdba as the OSDBA for ASM group, and that the group should not be the same group as an Oracle Database OSDBA group.
[WARNING] [INS-41810] Possible invalid choice for OSOPER Group.
   CAUSE: The group name you selected as the OSOPER for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmoper as the OSOPER for ASM group, and that the group should not be the same group as an Oracle Database OSOPER group.
[WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
   CAUSE: The group you selected for granting the OSDBA for ASM group for database access, and the OSOPER for ASM group for startup and shutdown of Oracle ASM, is the same group as the OSASM group, whose members have SYSASM privileges on Oracle ASM.
   ACTION: Choose different groups as the OSASM, OSDBA for ASM, and OSOPER for ASM groups.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2017-09-23_05-52-14PM/gridSetupActions2017-09-23_05-52-14PM.log

As a root user, execute the following script(s):
        1. /u01/app/12.2.0.1/grid/root.sh

Execute /u01/app/12.2.0.1/grid/root.sh on the following nodes:
[arrow1]


Successfully Setup Software.
oracle@arrow1::/u01/app/12.2.0.1/grid
$

RUN root.sh

[root@arrow1 ~]# /u01/app/12.2.0.1/grid/root.sh
Check /u01/app/12.2.0.1/grid/install/root_arrow1_2017-09-23_17-54-24-226719765.log for the output of root script

[root@arrow1 ~]# cat /u01/app/12.2.0.1/grid/install/root_arrow1_2017-09-23_17-54-24-226719765.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.2.0.1/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster or Grid Infrastructure for a Stand-Alone Server execute the following command as oracle user:
/u01/app/12.2.0.1/grid/gridSetup.sh
This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

[root@arrow1 ~]#

This is where Oracle has taken a turn for the worse.
The proper next steps are not provided after running root.sh, and the instructions that are shown are incorrect!

AS ROOT RUN roothas.pl

[root@arrow1 ~]# /u01/app/12.2.0.1/grid/perl/bin/perl -I /u01/app/12.2.0.1/grid/perl/lib -I /u01/app/12.2.0.1/grid/crs/install /u01/app/12.2.0.1/grid/crs/install/roothas.pl
Using configuration parameter file: /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/arrow1/crsconfig/roothas_2017-09-23_06-11-34PM.log
2017/09/23 18:11:35 CLSRSC-363: User ignored prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node arrow1 successfully pinned.
2017/09/23 18:11:44 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'arrow1'
CRS-2673: Attempting to stop 'ora.evmd' on 'arrow1'
CRS-2677: Stop of 'ora.evmd' on 'arrow1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'arrow1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.

arrow1     2017/09/23 18:12:54     /u01/app/12.2.0.1/grid/cdata/arrow1/backup_20170923_181254.olr     0
2017/09/23 18:12:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
[root@arrow1 ~]#

AS ORACLE runInstaller -updateNodeList

oracle@arrow1::/u01/app/12.2.0.1/grid
$ cd oui/bin/

oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0.1/grid -defaultHomeName CLUSTER_NODES= CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4080 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin

CHECK PROCESSES

oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$ ps -ef|grep oracle
root      3796  1125  0 17:03 ?        00:00:00 sshd: oracle [priv]
oracle    3798  3796  0 17:03 ?        00:00:01 sshd: oracle@pts/2
oracle    3799  3798  0 17:03 pts/2    00:00:00 -bash
root     13572  1125  0 17:39 ?        00:00:00 sshd: oracle [priv]
oracle   13574 13572  0 17:39 ?        00:00:00 sshd: oracle@pts/0
oracle   13575 13574  0 17:39 pts/0    00:00:00 -bash
oracle   25084     1  0 18:12 ?        00:00:03 /u01/app/12.2.0.1/grid/bin/ohasd.bin reboot
oracle   25197     1  0 18:12 ?        00:00:02 /u01/app/12.2.0.1/grid/bin/oraagent.bin
oracle   25215     1  0 18:12 ?        00:00:01 /u01/app/12.2.0.1/grid/bin/evmd.bin
oracle   25269 25215  0 18:12 ?        00:00:01 /u01/app/12.2.0.1/grid/bin/evmlogger.bin -o /u01/app/12.2.0.1/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u01/app/12.2.0.1/grid/log/[HOSTNAME]/evmd/evmlogger.log
oracle   25404  3799  0 18:27 pts/2    00:00:00 ps -ef
oracle   25405  3799  0 18:27 pts/2    00:00:00 grep --color=auto oracle
oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$

CHECK INSTALL LOG

oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$ export LOG=/u01/app/oraInventory/logs/GridSetupActions2017-09-23_05-52-14PM/gridSetupActions2017-09-23_05-52-14PM.log

oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$ grep -e '[[:upper:]]: ' $LOG |cut -d ":" -f1|sort -u
   ACTION
   CAUSE
INFO
WARNING

oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$ grep -e '[[:upper:]]: ' $LOG|grep "^WARNING: "
WARNING:  [Sep 23, 2017 5:52:18 PM] Unable to find the namespace URI. Reason: Start of root element expected.
WARNING:  [Sep 23, 2017 5:52:18 PM] Unable to find the namespace URI. Reason: Start of root element expected.
WARNING:  [Sep 23, 2017 5:52:21 PM] Validation disabled for the state init
WARNING:  [Sep 23, 2017 5:52:26 PM] Validation disabled for the state flowDecider
WARNING:  [Sep 23, 2017 5:52:26 PM] Validation disabled for the state CRSNodeInformationUI
WARNING:  [Sep 23, 2017 5:52:26 PM] No Local Domain found, setting to whatever first domain is available
WARNING:  [Sep 23, 2017 5:52:26 PM] No Domain found, skipping Same Domain validation
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41808] Possible invalid choice for OSASM Group.
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41810] Possible invalid choice for OSOPER Group.
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
WARNING:  [Sep 23, 2017 5:52:29 PM] Validation disabled for the state prereqExecutionDecider
WARNING:  [Sep 23, 2017 5:52:29 PM] Validation disabled for the state showSummary
WARNING:  [Sep 23, 2017 5:52:29 PM] Unable to find the namespace URI. Reason: /u01/app/12.2.0.1/grid/install/response/grid_2017-09-23_05-52-14PM.rsp (No such file or directory)
WARNING:  [Sep 23, 2017 5:53:50 PM] Validation disabled for the state finish
WARNING:  [Sep 23, 2017 5:53:50 PM] Could not create directory: /u01/app/oraInventory/logs/GridSetupActions2017-09-23_05-52-14PM
oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$
