Thinking Out Loud

October 7, 2019

Be Careful When Subscribing To Oracle Learning Subscription

Filed under: Uncategorized — mdinh @ 11:53 pm

Subscribing to Oracle Learning Subscription seems good in theory but bad in reality.

Oracle Support informed me: “Oracle University’s policy regarding Learning Subscription courseware materials is that they cannot be downloaded by customers.”

How convenient of Oracle, as this information should have been stated at https://education.oracle.com/oracle-learning-subscriptions

I took it for granted that the materials could be downloaded, since they are made available for download in all other training formats.

This seems like a deceptive process, because the information is not disclosed; by the time one has subscribed and discovered the lack of full disclosure, it may be too late.

Hopefully, this will help others avoid the same mistake.

 


September 14, 2019

Duplicate OMF DB Using Backupset To New Host And Directories

Filed under: 12c,cloning,RMAN — mdinh @ 3:23 pm

Database 12.1.0.2.0 is created using OMF.

At SOURCE:
db_create_file_dest=/u01/app/oracle/oradata
db_recovery_file_dest=/u01/app/oracle/fast_recovery_area

At DESTINATION:
db_create_file_dest=/u01/oradata
db_recovery_file_dest=/u01/fast_recovery_area

BACKUP LOCATION on shared storage:
/media/shared_storage/rman_backup/EMU

I had to explicitly set control_files since I was not able to determine how it could be done automatically.

set control_files='/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl','/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl'

If control_files is not set, then the duplicate will restore the control files to their original locations.

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
output file name=/u01/app/oracle/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl

STEPS:

================================================================================
### SOURCE: Backup Database
================================================================================

--------------------------------------------------
### Retrieve controlfile locations.
--------------------------------------------------

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
/u01/app/oracle/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl

SQL>

--------------------------------------------------
### Retrieve redo log locations.
--------------------------------------------------

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_1_gqsq2mvy_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_1_gqsq2myy_.log
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_2_gqsq2n1o_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_2_gqsq2n3g_.log
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_3_gqsq2n50_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_3_gqsq2nql_.log

6 rows selected.

SQL>

--------------------------------------------------
### Backup database.
--------------------------------------------------

[oracle@ol741 EMU]$ export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
[oracle@ol741 EMU]$ rman @ backup.rman

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:09:06 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> spool log to /media/shared_storage/rman_backup/EMU/rman_EMU_level0.log
2> set echo on
3> connect target;
4> show all;
5> set command id to "BACKUP_EMU";
6> run {
7> allocate channel d1 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
8> allocate channel d2 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
9> allocate channel d3 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
10> allocate channel d4 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
11> allocate channel d5 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
12> backup as compressed backupset incremental level 0 check logical database filesperset 1 tag="EMU"
13> plus archivelog filesperset 8 tag="EMU"
14> ;
15> }
16> alter database backup controlfile to trace as '/media/shared_storage/rman_backup/EMU/cf_@.sql' REUSE RESETLOGS;
17> create pfile='/media/shared_storage/rman_backup/EMU/init@.ora' from spfile;
18> list backup summary tag="EMU";
19> list backup of spfile tag="EMU";
20> list backup of controlfile tag="EMU";
21> report schema;
22> exit

--------------------------------------------------
### Retrieve datafile and tempfile locations.
--------------------------------------------------

[oracle@ol741 EMU]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:09:44 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EMU (DBID=3838062773)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name EMU

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    700      SYSTEM               YES     /u01/app/oracle/oradata/EMU/datafile/o1_mf_system_gqsq2qw4_.dbf
2    550      SYSAUX               NO      /u01/app/oracle/oradata/EMU/datafile/o1_mf_sysaux_gqsq30xo_.dbf
3    265      UNDOTBS1             YES     /u01/app/oracle/oradata/EMU/datafile/o1_mf_undotbs1_gqsq3875_.dbf
4    5        USERS                NO      /u01/app/oracle/oradata/EMU/datafile/o1_mf_users_gqsq405f_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /u01/app/oracle/oradata/EMU/datafile/o1_mf_temp_gqsq3c3d_.tmp

RMAN> exit

Recovery Manager complete.
[oracle@ol741 EMU]$


================================================================================
### TARGET: Restore Database
================================================================================

--------------------------------------------------
### Create pfile.
--------------------------------------------------

[oracle@ol742 dbs]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/dbs

[oracle@ol742 dbs]$ cat initemu.ora
*.db_name='emu'
[oracle@ol742 dbs]$

--------------------------------------------------
### Startup nomount.
--------------------------------------------------

[oracle@ol742 EMU]$ . oraenv <<< emu
[oracle@ol742 EMU]$ sqlplus / as sysdba
SQL> startup nomount;
ORACLE instance started.

Total System Global Area  234881024 bytes
Fixed Size                  2922904 bytes
Variable Size             176162408 bytes
Database Buffers           50331648 bytes
Redo Buffers                5464064 bytes
SQL>

--------------------------------------------------
### Create new directories.
--------------------------------------------------

[oracle@ol742 EMU]$ mkdir -p /u01/oradata/EMU/controlfile/
[oracle@ol742 EMU]$ mkdir -p /u01/fast_recovery_area/EMU/controlfile

--------------------------------------------------
### Duplicate database.
--------------------------------------------------

[oracle@ol742 EMU]$ export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
[oracle@ol742 EMU]$ rman @ dup_omf_bks.rman

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:37:51 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> spool log to /media/shared_storage/rman_backup/EMU/rman_duplicate_database.log
2> set echo on
3> connect auxiliary *
4> show all;
5> set command id to "DUPLICATE_EMU";
6> DUPLICATE DATABASE TO emu
7>   SPFILE
8>   set db_file_name_convert='/u01/app/oracle/oradata','/u01/oradata'
9>   set log_file_name_convert='/u01/app/oracle/oradata','/u01/oradata'
10>   set db_create_file_dest='/u01/oradata'
11>   set db_recovery_file_dest='/u01/fast_recovery_area'
12>   set control_files='/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl','/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl'
13>   BACKUP LOCATION '/media/shared_storage/rman_backup/EMU'
14>   NOFILENAMECHECK
15> ;
16> exit

--------------------------------------------------
### Retrieve datafile and tempfile locations.
--------------------------------------------------

[oracle@ol742 EMU]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:40:33 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EMU (DBID=3838070815)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name EMU

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    700      SYSTEM               YES     /u01/oradata/EMU/datafile/o1_mf_system_gqsyvo1y_.dbf
2    550      SYSAUX               NO      /u01/oradata/EMU/datafile/o1_mf_sysaux_gqsywg40_.dbf
3    265      UNDOTBS1             YES     /u01/oradata/EMU/datafile/o1_mf_undotbs1_gqsywx7n_.dbf
4    5        USERS                NO      /u01/oradata/EMU/datafile/o1_mf_users_gqsyxd90_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /u01/oradata/EMU/datafile/o1_mf_temp_gqsyy469_.tmp

RMAN> exit


Recovery Manager complete.
[oracle@ol742 EMU]$

--------------------------------------------------
### Retrieve controlfile locations.
--------------------------------------------------

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl

SQL>

--------------------------------------------------
### Retrieve redo log locations.
--------------------------------------------------

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u01/oradata/EMU/onlinelog/o1_mf_3_gqsyy2kh_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_3_gqsyy2lx_.log
/u01/oradata/EMU/onlinelog/o1_mf_2_gqsyy165_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_2_gqsyy17p_.log
/u01/oradata/EMU/onlinelog/o1_mf_1_gqsyxztc_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_1_gqsyxzw3_.log

6 rows selected.

SQL>

Logs:

rman_EMU_level0.log

rman_duplicate_database.log

 

August 12, 2019

Obvious But Not For Oracle Obviously

Filed under: 12c — mdinh @ 6:38 pm

While dropping a RAC database, I found the error ORA-01081: cannot start already-running ORACLE – shut it down first in the dbca log.

Looking up the error, the cause is “Obvious”:

$ oerr ora 01081
01081, 00000, "cannot start already-running ORACLE - shut it down first"
// *Cause:  Obvious
// *Action:

Here is the process for 12.1.0:

$ ps -ef|grep pmon
oracle   41777     1  0 Aug09 ?        00:00:30 asm_pmon_+ASM2

$ srvctl config database
DBFS

$ srvctl status database -d DBFS -v
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Instance DBFS1 is not running on node node1
Instance DBFS2 is not running on node node2
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

$ dbca -silent -deleteDatabase -sourceDB DBFS
Connecting to database
9% complete
14% complete
19% complete
23% complete
28% complete
33% complete
38% complete
47% complete
Updating network configuration files
48% complete
52% complete
Deleting instances and datafiles
66% complete
80% complete
95% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/DBFS.log" for further details.

$ cat /u01/app/oracle/cfgtoollogs/dbca/DBFS.log
The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. 
All information in the database will be deleted. Do you want to proceed?
Connecting to database
DBCA_PROGRESS : 9%
DBCA_PROGRESS : 14%
DBCA_PROGRESS : 19%
DBCA_PROGRESS : 23%
DBCA_PROGRESS : 28%
DBCA_PROGRESS : 33%
DBCA_PROGRESS : 38%
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ORA-01081: cannot start already-running ORACLE - shut it down first
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DBCA_PROGRESS : 47%
Updating network configuration files
DBCA_PROGRESS : 48%
DBCA_PROGRESS : 52%
Deleting instances and datafiles
DBCA_PROGRESS : 66%
DBCA_PROGRESS : 80%
DBCA_PROGRESS : 95%
DBCA_PROGRESS : 100%
Database deletion completed.

$ srvctl config database
$

August 6, 2019

19c Grid Dry-Run Upgrade

Filed under: 19c,awk_sed_grep,Grid Infrastructure,upgrade — mdinh @ 12:42 pm

First test using GUI.

[oracle@racnode-dc2-1 grid]$ /u01/app/19.3.0.0/grid/gridSetup.sh -dryRunForUpgrade
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_00-20-31AM/gridSetupActions2019-08-06_00-20-31AM.log
[oracle@racnode-dc2-1 grid]$

Create dryRunForUpgradegrid.rsp from grid_2019-08-06_00-20-31AM.rsp (generated by the GUI test above):

[oracle@racnode-dc2-1 grid]$ grep -v "^#" /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp | grep -v "=$" | awk 'NF' > /home/oracle/dryRunForUpgradegrid.rsp

[oracle@racnode-dc2-1 ~]$ cat /home/oracle/dryRunForUpgradegrid.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=vbox-rac-dc2
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=racnode-dc2-1:,racnode-dc2-2:
oracle.install.crs.configureGIMR=true
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=CRS
oracle.install.asm.diskGroup.AUSize=0
oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=false
[oracle@racnode-dc2-1 ~]$

Create directory grid home for all nodes:

[root@racnode-dc2-1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54318(asmdba),54322(dba),54323(backupdba),54324(oper),54325(dgdba),54326(kmdba)

[root@racnode-dc2-1 ~]# mkdir -p /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chown oracle:oinstall /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chmod 775 /u01/app/19.3.0.0/grid

[root@racnode-dc2-1 ~]# ll /u01/app/19.3.0.0/
total 4
drwxrwxr-x 2 oracle oinstall 4096 Aug  6 02:07 grid
[root@racnode-dc2-1 ~]#
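Only node1 is shown above. Since the grid home directory is needed on all nodes, presumably the same setup is repeated on the second node (a sketch, not from the original post):

[root@racnode-dc2-2 ~]# mkdir -p /u01/app/19.3.0.0/grid
[root@racnode-dc2-2 ~]# chown oracle:oinstall /u01/app/19.3.0.0/grid
[root@racnode-dc2-2 ~]# chmod 775 /u01/app/19.3.0.0/grid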

Extract grid software for node1 ONLY:

[oracle@racnode-dc2-1 ~]$ unzip -qo /media/swrepo/LINUX.X64_193000_grid_home.zip -d /u01/app/19.3.0.0/grid/

[oracle@racnode-dc2-1 ~]$ ls /u01/app/19.3.0.0/grid/
addnode     clone  dbjava     diagnostics  gpnp          install        jdbc  lib      OPatch   ords  perl     qos       rhp            rootupgrade.sh  sqlpatch  tomcat  welcome.html  xdk
assistants  crs    dbs        dmu          gridSetup.sh  instantclient  jdk   md       opmn     oss   plsql    racg      root.sh        runcluvfy.sh    sqlplus   ucp     wlm
bin         css    deinstall  env.ora      has           inventory      jlib  network  oracore  oui   precomp  rdbms     root.sh.old    sdk             srvm      usm     wwg
cha         cv     demo       evm          hs            javavm         ldap  nls      ord      owm   QOpatch  relnotes  root.sh.old.1  slax            suptools  utl     xag

[oracle@racnode-dc2-1 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.0G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-1 ~]$

Run gridSetup.sh -silent -dryRunForUpgrade:

[oracle@racnode-dc2-1 ~]$ env|grep -i ora
USER=oracle
MAIL=/var/spool/mail/oracle
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin
PWD=/home/oracle
HOME=/home/oracle
LOGNAME=oracle

[oracle@racnode-dc2-1 ~]$ date
Tue Aug  6 02:35:47 CEST 2019

[oracle@racnode-dc2-1 ~]$ /u01/app/19.3.0.0/grid/gridSetup.sh -silent -dryRunForUpgrade -responseFile /home/oracle/dryRunForUpgradegrid.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_02-35-52AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log


As a root user, execute the following script(s):
        1. /u01/app/19.3.0.0/grid/rootupgrade.sh

Execute /u01/app/19.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc2-1]

Run the script on the local node.

Successfully Setup Software with warning(s).
[oracle@racnode-dc2-1 ~]$

Run rootupgrade.sh for node1 ONLY and review log:

[root@racnode-dc2-1 ~]# /u01/app/19.3.0.0/grid/rootupgrade.sh
Check /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log for the output of root script

[root@racnode-dc2-1 ~]# cat /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.3.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Performing Dry run of the Grid Infrastructure upgrade.
Using configuration parameter file: /u01/app/19.3.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/racnode-dc2-1/crsconfig/rootcrs_racnode-dc2-1_2019-08-06_02-45-31AM.log
2019/08/06 02:45:44 CLSRSC-464: Starting retrieval of the cluster configuration data
2019/08/06 02:45:52 CLSRSC-729: Checking whether CRS entities are ready for upgrade, cluster upgrade will not be attempted now. This operation may take a few minutes.
2019/08/06 02:47:56 CLSRSC-693: CRS entities validation completed successfully.
[root@racnode-dc2-1 ~]#

Check grid home for node2:

[oracle@racnode-dc2-2 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.6G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-2 ~]$

Check oraInventory for ALL nodes:

[oracle@racnode-dc2-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.7.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.2.0.1/grid" TYPE="O" IDX="1" CRS="true"/>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.2.0.1/db1" TYPE="O" IDX="2"/>
==========================================================================================
<HOME NAME="OraGI19Home1" LOC="/u01/app/19.3.0.0/grid" TYPE="O" IDX="3"/>
==========================================================================================
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc2-2 ~]$

Check crs activeversion: 12.2.0.1.0

[oracle@racnode-dc2-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc2-1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [927320293].
[oracle@racnode-dc2-1 ~]$

Check log location:

[oracle@racnode-dc2-1 ~]$ cd /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/

[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$ ls -alrt
total 17420
-rw-r-----  1 oracle oinstall     129 Aug  6 02:35 installerPatchActions_2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall       0 Aug  6 02:35 gridSetupActions2019-08-06_02-35-52AM.err
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:35 temp_ob
-rw-r-----  1 oracle oinstall       0 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.err
drwxrwx--- 17 oracle oinstall    4096 Aug  6 02:39 ..
-rw-r-----  1 oracle oinstall     157 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall       0 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.err.racnode-dc2-2
-rw-r-----  1 oracle oinstall     142 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.out.racnode-dc2-2
-rw-r-----  1 oracle oinstall 9341920 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall   13419 Aug  6 02:43 time2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall 8443087 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.log
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:56 .
[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$

After dryRunForUpgrade, detach 19.3.0.0 grid home and remove directory (19.3.0.0/grid) from all nodes.

export ORACLE_HOME=/u01/app/19.3.0.0/grid
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$ORACLE_HOME
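The directory removal itself is not shown; presumably something along these lines is run on each node after the detach (a sketch, assuming nothing else lives under that path):

# run on racnode-dc2-1 and racnode-dc2-2 after detaching the home
rm -rf /u01/app/19.3.0.0/grid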

August 1, 2019

Too Old To Remember

Filed under: 12c — mdinh @ 5:01 pm

Is it required to run datapatch after creating a database?

Why bother trying to remember when you can run datapatch -prereq to find out?

Test case for 12.2.

Database July 2019 Release Update 12.2 applied:

[oracle@racnode-dc2-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/app/12.2.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/12.2.0.1/grid/OPatch/opatch lspatches
29770090;ACFS JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770090)
29770040;OCW JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770040)
29757449;Database Jul 2019 Release Update : 12.2.0.1.190716 (29757449)
28566910;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:180802.1448.S) (28566910)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)

OPatch succeeded.
+ exit
[oracle@racnode-dc2-1 ~]$

Create 12.2 RAC database:

[oracle@racnode-dc2-1 ~]$ dbca -silent -createDatabase -characterSet AL32UTF8 \
> -createAsContainerDatabase true \
> -templateName General_Purpose.dbc \
> -gdbname hawkcdb -sid hawkcdb -responseFile NO_VALUE \
> -sysPassword Oracle_4U! -systemPassword Oracle_4U! \
> -numberOfPDBs 1 -pdbName pdb01 -pdbAdminPassword Oracle_4U! \
> -databaseType MULTIPURPOSE \
> -automaticMemoryManagement false -totalMemory 3072 \
> -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
> -redoLogFileSize 100 \
> -emConfiguration NONE \
> -nodeinfo racnode-dc2-1,racnode-dc2-2 \
> -listeners LISTENER \
> -ignorePreReqs

Copying database files
21% complete
Creating and starting Oracle instance
35% complete
Creating cluster database views
50% complete
Completing Database Creation
57% complete
Creating Pluggable Databases
78% complete
Executing Post Configuration Actions
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/hawkcdb/hawkcdb.log" for further details.
[oracle@racnode-dc2-1 ~]$

Run datapatch -prereq for 12.2

[oracle@racnode-dc2-1 ~]$ $ORACLE_HOME/OPatch/datapatch -prereq
SQL Patching tool version 12.2.0.1.0 Production on Thu Aug  1 17:45:13 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Determining current state...done
Adding patches to installation queue and performing prereq checks...done

**********************************************************************
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB01
    Nothing to roll back
    Nothing to apply
**********************************************************************

SQL Patching tool complete on Thu Aug  1 17:46:39 2019
[oracle@racnode-dc2-1 ~]$

Test case for 12.1.

Database July 2019 Bundle Patch 12.1 applied:

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.2/grid
ORACLE_HOME=/u01/app/12.1.0.2/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/app/12.1.0.2/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/12.1.0.2/grid/OPatch/opatch lspatches
29509318;OCW PATCH SET UPDATE 12.1.0.2.190716 (29509318)
29496791;Database Bundle Patch : 12.1.0.2.190716 (29496791)
29423125;ACFS PATCH SET UPDATE 12.1.0.2.190716 (29423125)
26983807;WLM Patch Set Update: 12.1.0.2.180116 (26983807)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$

Create 12.1 RAC database:

[oracle@racnode-dc1-1 ~]$ dbca -silent -createDatabase -characterSet AL32UTF8 \
> -createAsContainerDatabase true \
> -templateName General_Purpose.dbc \
> -gdbname cdbhawk -sid cdbhawk -responseFile NO_VALUE \
> -sysPassword Oracle_4U! -systemPassword Oracle_4U! \
> -numberOfPDBs 1 -pdbName pdb01 -pdbAdminPassword Oracle_4U! \
> -databaseType MULTIPURPOSE \
> -automaticMemoryManagement false -totalMemory 3072 \
> -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
> -redoLogFileSize 100 \
> -emConfiguration NONE \
> -nodeinfo racnode-dc1-1,racnode-dc1-2 \
> -listeners LISTENER \
> -ignorePreReqs

Copying database files
23% complete
Creating and starting Oracle instance
38% complete
Creating cluster database views
54% complete
Completing Database Creation
77% complete
Creating Pluggable Databases
81% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdbhawk/cdbhawk.log" for further details.
[oracle@racnode-dc1-1 ~]$

Run datapatch -prereq for 12.1

[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/datapatch -prereq
SQL Patching tool version 12.1.0.2.0 Production on Thu Aug  1 18:24:53 2019
Copyright (c) 2012, 2017, Oracle.  All rights reserved.

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Adding patches to installation queue and performing prereq checks...done

**********************************************************************
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB01
    Nothing to roll back
    The following patches will be applied:
      29496791 (DATABASE BUNDLE PATCH 12.1.0.2.190716)
**********************************************************************

SQL Patching tool complete on Thu Aug  1 18:26:26 2019
[oracle@racnode-dc1-1 ~]$

For 12.1, running datapatch after database creation is required; for 12.2, it is not.
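To actually apply the pending bundle patch SQL on the 12.1 database, the step would be a plain datapatch run; a minimal sketch (standard invocation, output not from the original post):

[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/datapatch -verbose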

July 24, 2019

Rsync DBFS To ACFS For GoldenGate Trail Migration

Filed under: GoldenGate,shell scripting — mdinh @ 2:25 pm

Planning to move GoldenGate trail files from DBFS to ACFS.

This is pre-work before the actual migration, to stress I/O on ACFS.

Learned some cron along the way.

# Run every 2 hours at even hours
0 */2 * * * /home/oracle/working/dinh/acfs_ggdata02_rsync.sh > /tmp/rsync_acfs_ggdata_to_ggdata02.log 2>&1

# Run every 2 hours at odd hours
0 1-23/2 * * * /home/oracle/working/dinh/acfs_ggdata02_rsync.sh > /tmp/rsync_acfs_ggdata_to_ggdata02.log 2>&1

Syntax and output.

+ /bin/rsync -vrpogt --delete-after /DBFS/ggdata/ /ACFS/ggdata
building file list ... done

dirchk/E_SOURCE.cpe
dirchk/P_TARGET.cpe

dirdat/
dirdat/aa000307647
dirdat/aa000307648
.....
dirdat/aa000307726
dirdat/aa000307727

deleting dirdat/aa000306741
deleting dirdat/aa000306740
.....
deleting dirdat/aa000306662
deleting dirdat/aa000306661

sent 16,205,328,959 bytes  received 1,743 bytes  140,305,893.52 bytes/sec
total size is 203,021,110,174  speedup is 12.53

real	1m56.671s
user	1m24.643s
sys	0m45.875s

+ '[' 0 '!=' 0 ']'

+ /bin/diff -rq /DBFS/ggdata /ACFS/ggdata

Files /DBFS/ggdata/dirchk/E_SOURCE.cpe and /ACFS/ggdata/dirchk/E_SOURCE.cpe differ
Files /DBFS/ggdata/dirchk/P_TARGET.cpe and /ACFS/ggdata/dirchk/P_TARGET.cpe differ

Only in /ACFS/ggdata/dirdat: aa000306742
Only in /ACFS/ggdata/dirdat: aa000306743
Only in /ACFS/ggdata/dirdat: aa000306744
Only in /ACFS/ggdata/dirdat: aa000306745

Only in /DBFS/ggdata/dirdat: aa000307728
Only in /DBFS/ggdata/dirdat: aa000307729

real	69m15.207s
user	2m9.242s
sys	17m3.846s

+ ls /DBFS/ggdata/dirdat/
+ wc -l
975

+ ls -alrt /DBFS/ggdata/dirdat/
+ head
total 190631492
drwxrwxrwx 24 root    root             0 Feb  9  2018 ..
-rw-r-----  1 ggsuser oinstall 199999285 Mar  8  2018 .fuse_hidden001a3c47000001c5
-rw-r-----  1 ggsuser oinstall 199999896 May 23 00:23 .fuse_hidden000002b500000001
-rw-r-----  1 ggsuser oinstall 199999934 Jul 23 06:11 aa000306798
-rw-r-----  1 ggsuser oinstall 199999194 Jul 23 06:13 aa000306799
-rw-r-----  1 ggsuser oinstall 199999387 Jul 23 06:14 aa000306800
-rw-r-----  1 ggsuser oinstall 199999122 Jul 23 06:16 aa000306801
-rw-r-----  1 ggsuser oinstall 199999172 Jul 23 06:19 aa000306802
-rw-r-----  1 ggsuser oinstall 199999288 Jul 23 06:19 aa000306803

+ ls -alrt /DBFS/ggdata/dirdat/
+ tail
-rw-r-----  1 ggsuser oinstall 199999671 Jul 24 07:59 aa000307764
-rw-r-----  1 ggsuser oinstall 199999645 Jul 24 08:01 aa000307765
-rw-r-----  1 ggsuser oinstall 199998829 Jul 24 08:02 aa000307766
-rw-r-----  1 ggsuser oinstall 199998895 Jul 24 08:04 aa000307767
-rw-r-----  1 ggsuser oinstall 199999655 Jul 24 08:05 aa000307768
-rw-r-----  1 ggsuser oinstall 199999930 Jul 24 08:07 aa000307769
-rw-r-----  1 ggsuser oinstall 199999761 Jul 24 08:09 aa000307770
-rw-r-----  1 ggsuser oinstall 199999421 Jul 24 08:11 aa000307771
-rw-r-----  1 ggsuser oinstall   7109055 Jul 24 08:11 aa000307772

+ ls /ACFS/ggdata/dirdat/
+ wc -l
986

+ ls -alrt /ACFS/ggdata/dirdat/
+ head
total 194779104
drwxrwxrwx 24 root    root          8192 Feb  9  2018 ..
-rw-r-----  1 ggsuser oinstall 199999285 Mar  8  2018 .fuse_hidden001a3c47000001c5
-rw-r-----  1 ggsuser oinstall 199999896 May 23 00:23 .fuse_hidden000002b500000001
-rw-r-----  1 ggsuser oinstall 199998453 Jul 23 04:55 aa000306742
-rw-r-----  1 ggsuser oinstall 199999657 Jul 23 04:56 aa000306743
-rw-r-----  1 ggsuser oinstall 199999227 Jul 23 04:57 aa000306744
-rw-r-----  1 ggsuser oinstall 199999389 Jul 23 04:59 aa000306745
-rw-r-----  1 ggsuser oinstall 199999392 Jul 23 05:00 aa000306746
-rw-r-----  1 ggsuser oinstall 199999116 Jul 23 05:01 aa000306747

+ ls -alrt /ACFS/ggdata/dirdat/
+ tail
-rw-r-----  1 ggsuser oinstall 199999876 Jul 24 06:48 aa000307719
-rw-r-----  1 ggsuser oinstall 199999751 Jul 24 06:50 aa000307720
-rw-r-----  1 ggsuser oinstall 199999918 Jul 24 06:51 aa000307721
-rw-r-----  1 ggsuser oinstall 199999404 Jul 24 06:52 aa000307722
-rw-r-----  1 ggsuser oinstall 199999964 Jul 24 06:54 aa000307723
-rw-r-----  1 ggsuser oinstall 199999384 Jul 24 06:56 aa000307724
-rw-r-----  1 ggsuser oinstall 199999283 Jul 24 06:57 aa000307725
-rw-r-----  1 ggsuser oinstall 199998033 Jul 24 06:59 aa000307726
-rw-r-----  1 ggsuser oinstall 199999199 Jul 24 07:00 aa000307727
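For reference, a minimal sketch of what a wrapper like acfs_ggdata02_rsync.sh could look like, reconstructed from the set -x trace above (mount points and commands are from the trace; the shebang, variable names, and error handling are assumptions):

#!/bin/bash -x
# Mirror GoldenGate trail files from DBFS to ACFS, then verify.
SRC=/DBFS/ggdata
DST=/ACFS/ggdata

# Mirror the source; remove files on the target that no longer exist on the source.
time /bin/rsync -vrpogt --delete-after ${SRC}/ ${DST}
[ $? != 0 ] && exit 1

# Compare both trees; differences are expected for trail files written during the sync.
time /bin/diff -rq ${SRC} ${DST}

# Quick count and listing of trail files on both sides.
ls ${SRC}/dirdat/ | wc -l
ls -alrt ${SRC}/dirdat/ | head
ls -alrt ${SRC}/dirdat/ | tail
ls ${DST}/dirdat/ | wc -l
ls -alrt ${DST}/dirdat/ | head
ls -alrt ${DST}/dirdat/ | tail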

July 23, 2019

Check Cluster Resources Where Target != State

Filed under: 12.2,RAC — mdinh @ 3:32 pm

Current version.

[oracle@racnode-dc2-1 patch]$ cat /etc/oratab
#Backup file is  /u01/app/12.2.0.1/grid/srvm/admin/oratab.bak.racnode-dc2-1 line added by Agent
-MGMTDB:/u01/app/12.2.0.1/grid:N
hawk1:/u01/app/oracle/12.2.0.1/db1:N
+ASM1:/u01/app/12.2.0.1/grid:N          # line added by Agent
[oracle@racnode-dc2-1 patch]$

Kill database instance process.

[oracle@racnode-dc2-1 patch]$ ps -ef|grep pmon
oracle   13542     1  0 16:09 ?        00:00:00 asm_pmon_+ASM1
oracle   27663     1  0 16:39 ?        00:00:00 ora_pmon_hawk1
oracle   29401 18930  0 16:40 pts/0    00:00:00 grep --color=auto pmon
[oracle@racnode-dc2-1 patch]$
[oracle@racnode-dc2-1 patch]$ kill -9 27663
[oracle@racnode-dc2-1 patch]$

Check cluster resource – close but no cigar (false positive): the OR filter also returns resources that are intentionally offline (TARGET=OFFLINE, STATE=OFFLINE), such as ora.proxy_advm.

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.proxy_advm
               OFFLINE OFFLINE      racnode-dc2-1            STABLE
               OFFLINE OFFLINE      racnode-dc2-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      3        OFFLINE OFFLINE                               STABLE
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$

Check cluster resource – BINGO! The AND filter returns only resources that should be online (TARGET=ONLINE) but are not (STATE != ONLINE).

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$

Another example:

[oracle@racnode-dc2-1 ~]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
ora.DATA.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
ora.FRA.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 ~]$

Learned something here: crsctl stat res -v also exposes FAILURE_COUNT, FAILURE_HISTORY, LAST_RESTART, and LAST_STATE_CHANGE for each resource instance.

[oracle@racnode-dc2-1 ~]$ crsctl stat res -v -w 'TYPE = ora.database.type'
NAME=ora.hawk.db
TYPE=ora.database.type
LAST_SERVER=racnode-dc2-1
STATE=ONLINE on racnode-dc2-1
TARGET=ONLINE
CARDINALITY_ID=1
OXR_SECTION=0
RESTART_COUNT=0
***** FAILURE_COUNT=1 
***** FAILURE_HISTORY=1564015051:racnode-dc2-1
ID=ora.hawk.db 1 1
INCARNATION=4
***** LAST_RESTART=07/25/2019 02:39:38
***** LAST_STATE_CHANGE=07/25/2019 02:39:51
STATE_DETAILS=Open,HOME=/u01/app/oracle/12.2.0.1/db1
INTERNAL_STATE=STABLE
TARGET_SERVER=racnode-dc2-1
RESOURCE_GROUP=
INSTANCE_COUNT=2

LAST_SERVER=racnode-dc2-2
STATE=ONLINE on racnode-dc2-2
TARGET=ONLINE
CARDINALITY_ID=2
OXR_SECTION=0
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.hawk.db 2 1
INCARNATION=1
LAST_RESTART=07/25/2019 02:21:45
LAST_STATE_CHANGE=07/25/2019 02:21:45
STATE_DETAILS=Open,HOME=/u01/app/oracle/12.2.0.1/db1
INTERNAL_STATE=STABLE
TARGET_SERVER=racnode-dc2-2
RESOURCE_GROUP=
INSTANCE_COUNT=2

[oracle@racnode-dc2-1 ~]$

Check cluster resource – sanity check.

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
[oracle@racnode-dc2-1 patch]$
[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w 'TYPE = ora.database.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc2-1            Open,HOME=/u01/app/o
                                                             racle/12.2.0.1/db1,S
                                                             TABLE
      2        ONLINE  ONLINE       racnode-dc2-2            Open,HOME=/u01/app/o
                                                             racle/12.2.0.1/db1,S
                                                             TABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$
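Since the AND filter turned out to be the useful one, it may be worth wrapping it up for quick reuse (a convenience sketch, not from the original post):

# e.g. in ~/.bash_profile
alias chkres='crsctl stat res -t -w "(TARGET = ONLINE) and (STATE != ONLINE)"'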

July 22, 2019

Resize ACFS Volume

Filed under: 12c,ACFS — mdinh @ 6:14 pm

The current ACFS file system is 299G.

Filesystem             Size  Used Avail Use% Mounted on
/dev/asm/acfs_vol-177  299G  2.6G  248G   2% /ggdata02

Free_MB is 872, which causes paging due to insufficient free space in ASM disk group ACFS_DATA.

$ asmcmd lsdg -g ACFS_DATA
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  EXTERN  N         512   4096  4194304    307184      872                0             872              0             N  ACFS_DATA/
      2  MOUNTED  EXTERN  N         512   4096  4194304    307184      874                0             872              0             N  ACFS_DATA/

Review attributes for ASM Disk Group ACFS_DATA.

$ asmcmd lsattr -l -G ACFS_DATA
Name                     Value       
access_control.enabled   FALSE       
access_control.umask     066         
au_size                  4194304     
cell.smart_scan_capable  FALSE       
compatible.advm          12.1.0.0.0  
compatible.asm           12.1.0.0.0  
compatible.rdbms         12.1.0.0.0  
content.check            FALSE       
content.type             data        
disk_repair_time         3.6h        
failgroup_repair_time    24.0h       
idp.boundary             auto        
idp.type                 dynamic     
phys_meta_replicated     true        
sector_size              512         
thin_provisioned         FALSE       

Resize /ggdata02 to 250G.

$ acfsutil size 250G /ggdata02
acfsutil size: new file system size: 268435456000 (256000MB)
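A quick sanity check of the numbers: 250G = 250 × 1024 = 256,000 MB = 268,435,456,000 bytes, exactly what acfsutil reports. Shrinking the volume from roughly 299G (306,176 MB) to 256,000 MB should hand back about 50,000 MB to the disk group, which matches Free_MB jumping from 872 to 51,044 in the checks below.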

Review results.

$ asmcmd lsdg -g ACFS_DATA
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/
      2  MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/


$ df -h /ggdata02
Filesystem             Size  Used Avail Use% Mounted on
/dev/asm/acfs_vol-177  250G  2.6G  248G   2% /ggdata02

$ asmcmd volinfo --all
Diskgroup Name: ACFS_DATA

	 Volume Name: ACFS_VOL
	 Volume Device: /dev/asm/acfs_vol-177
	 State: ENABLED
	 Size (MB): 256000
	 Resize Unit (MB): 512
	 Redundancy: UNPROT
	 Stripe Columns: 8
	 Stripe Width (K): 1024
	 Usage: ACFS
	 Mountpath: /ggdata02

July 15, 2019

Delete MGMTDB and MGMTLSNR from OEM using emcli

Filed under: emcli — mdinh @ 11:22 pm

Per Doc ID 1933649.1, MGMTDB and MGMTLSNR should not be monitored.

$ grep oms /etc/oratab 
oms:/u01/middleware/13.2.0:N

$ . oraenv <<< oms

$ emcli login -username=SYSMAN
Enter password : 
Login successful

$ emcli sync
Synchronized successfully

$ emcli get_targets -targets=oracle_listener -format=name:csv|grep -i MGMT
1,Up,oracle_listener,MGMTLSNR_host01

$ emcli delete_target -name="MGMTLSNR_host01" -type="oracle_listener" 
Target "MGMTLSNR_host01:oracle_listener" deleted successfully

$ emcli sync
$ emcli get_targets|grep -i MGMT

Note: MGMTDB was not being monitored here, but it can be deleted as follows:

$ emcli get_targets -targets=oracle_database -format=name:csv|grep -i MGMT
$ emcli delete_target -name="MGMTDB_host01" -type="oracle_database" 

The problem with monitoring MGMTDB and MGMTLSNR is getting a silly page when they are relocated to a new host.

Host=host01
Target type=Listener 
Target name=MGMTLSNR_host01
Categories=Availability 
Message=The listener is down:

We are dealing with the same issue for the SCAN listeners and have not reached an agreement to have them deleted, even though I and a few others think they should not be monitored.
Unfortunately, there is no official Oracle documentation for this.

Here’s a typical page for when all SCAN listeners are running on only one node.

Host=host01
Target type=Listener
Target name=LISTENER_SCAN2_cluster
Categories=Availability
Message=The listener is down: 

$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node02
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node02
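When that happens, one way to clear the page is simply to relocate the SCAN listeners back to their preferred nodes (a sketch; option names vary by version, e.g. -i/-n in older releases):

$ srvctl relocate scan_listener -scannumber 2 -node node01
$ srvctl relocate scan_listener -scannumber 3 -node node01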

June 28, 2019

DGMGRL Using Help To Learn About New Validate Features

Filed under: 18c,Dataguard,dgmgrl — mdinh @ 3:57 pm

Wouldn’t it be nicer and much better if Oracle added (NF) markers for new features to the help syntax?

DGMGRL for Linux: Release 12.2.0.1.0

[oracle@db-fs-1 bin]$ ./dgmgrl /
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Jun 28 17:49:16 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "orclcdb"
Connected as SYSDG.
DGMGRL> help validate

Performs an exhaustive set of validations for a member

Syntax:

  VALIDATE DATABASE [VERBOSE] <database name>;

  VALIDATE DATABASE [VERBOSE] <database name> DATAFILE <datafile number>
    OUTPUT=<file name>;

  VALIDATE DATABASE [VERBOSE] <database name> SPFILE;

  VALIDATE FAR_SYNC [VERBOSE] <far_sync name>
    [WHEN PRIMARY IS <database name>];

DGMGRL>

DGMGRL for Linux: Release 18.0.0.0.0

[oracle@ADC6160274 GDS]$ dgmgrl /
DGMGRL for Linux: Release 18.0.0.0.0 - Production on Fri Jun 28 15:54:36 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "chi"
Connected as SYSDG.
DGMGRL> help validate

Performs an exhaustive set of validations for a member

Syntax:

  VALIDATE DATABASE [VERBOSE] <database name>;

  VALIDATE DATABASE [VERBOSE] <database name> DATAFILE <datafile number>
    OUTPUT=<file name>;

  VALIDATE DATABASE [VERBOSE] <database name> SPFILE;

  VALIDATE FAR_SYNC [VERBOSE] <far_sync name>
    [WHEN PRIMARY IS <database name>];

  VALIDATE NETWORK CONFIGURATION FOR { ALL | <member name> }; [*** NF ***]

  VALIDATE STATIC CONNECT IDENTIFIER FOR { ALL | <database name> }; [*** NF ***]

DGMGRL>

validate network configuration

DGMGRL> validate network configuration for all;
Connecting to instance "sales" on database "sfo" ...
Connected to "sfo"
Checking connectivity from instance "sales" on database "sfo to instance "sales" on database "chi"...
Succeeded.
Connecting to instance "sales" on database "chi" ...
Connected to "chi"
Checking connectivity from instance "sales" on database "chi to instance "sales" on database "sfo"...
Succeeded.

Oracle Clusterware is not configured on database "sfo".
Connecting to database "sfo" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=SLC02PNY.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=sfo_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "sfo".

Oracle Clusterware is not configured on database "chi".
Connecting to database "chi" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ADC6160274.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=chi_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "chi".

validate static connect identifier

DGMGRL> validate static connect identifier for all;
Oracle Clusterware is not configured on database "sfo".
Connecting to database "sfo" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=SLC02PNY.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=sfo_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "sfo".

Oracle Clusterware is not configured on database "chi".
Connecting to database "chi" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ADC6160274.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=chi_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "chi".

DGMGRL>
