Thinking Out Loud

July 24, 2019

Rsync DBFS To ACFS For GoldenGate Trail Migration

Filed under: GoldenGate,shell scripting — mdinh @ 2:25 pm

Planning to move GoldenGate trail files from DBFS to ACFS.

This is pre-work before the actual migration, used to stress IO on ACFS.

Learned some cron along the way.

# Run every 2 hours at even hours
0 */2 * * * /home/oracle/working/dinh/acfs_ggdata02_rsync.sh > /tmp/rsync_acfs_ggdata_to_ggdata02.log 2>&1

# Run every 2 hours at odd hours
0 1-23/2 * * * /home/oracle/working/dinh/acfs_ggdata02_rsync.sh > /tmp/rsync_acfs_ggdata_to_ggdata02.log 2>&1
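
The wrapper script itself isn't shown here. Below is a minimal sketch of what acfs_ggdata02_rsync.sh might look like, reconstructed from the trace output that follows (the rsync options, paths, and exit-status check come from that trace; the mail alert is an assumption):

#!/bin/bash -x
# Sketch only -- reconstructed from the trace output, not the actual script.
SRC=/DBFS/ggdata/
DST=/ACFS/ggdata

# Mirror DBFS trail files to ACFS; remove files that no longer exist on the
# source only after the transfer completes (--delete-after).
time /bin/rsync -vrpogt --delete-after $SRC $DST
rc=$?

# Stop and alert if rsync failed (mail address is hypothetical).
if [ $rc != 0 ]; then
    echo "rsync $SRC -> $DST failed with rc=$rc" | mailx -s "DBFS to ACFS rsync failed" dba@example.com
    exit $rc
fi

# Verify the copy and list trail file counts on both sides.
time /bin/diff -rq /DBFS/ggdata /ACFS/ggdata
ls /DBFS/ggdata/dirdat/ | wc -l
ls -alrt /DBFS/ggdata/dirdat/ | head
ls -alrt /DBFS/ggdata/dirdat/ | tail
ls /ACFS/ggdata/dirdat/ | wc -l
ls -alrt /ACFS/ggdata/dirdat/ | head
ls -alrt /ACFS/ggdata/dirdat/ | tail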

Syntax and output.

+ /bin/rsync -vrpogt --delete-after /DBFS/ggdata/ /ACFS/ggdata
building file list ... done

dirchk/E_SOURCE.cpe
dirchk/P_TARGET.cpe

dirdat/
dirdat/aa000307647
dirdat/aa000307648
.....
dirdat/aa000307726
dirdat/aa000307727

deleting dirdat/aa000306741
deleting dirdat/aa000306740
.....
deleting dirdat/aa000306662
deleting dirdat/aa000306661

sent 16,205,328,959 bytes  received 1,743 bytes  140,305,893.52 bytes/sec
total size is 203,021,110,174  speedup is 12.53

real	1m56.671s
user	1m24.643s
sys	0m45.875s

+ '[' 0 '!=' 0 ']'

+ /bin/diff -rq /DBFS/ggdata /ACFS/ggdata

Files /DBFS/ggdata/dirchk/E_SOURCE.cpe and /ACFS/ggdata/dirchk/E_SOURCE.cpe differ
Files /DBFS/ggdata/dirchk/P_TARGET.cpe and /ACFS/ggdata/dirchk/P_TARGET.cpe differ

Only in /ACFS/ggdata/dirdat: aa000306742
Only in /ACFS/ggdata/dirdat: aa000306743
Only in /ACFS/ggdata/dirdat: aa000306744
Only in /ACFS/ggdata/dirdat: aa000306745

Only in /DBFS/ggdata/dirdat: aa000307728
Only in /DBFS/ggdata/dirdat: aa000307729

real	69m15.207s
user	2m9.242s
sys	17m3.846s

+ ls /DBFS/ggdata/dirdat/
+ wc -l
975

+ ls -alrt /DBFS/ggdata/dirdat/
+ head
total 190631492
drwxrwxrwx 24 root    root             0 Feb  9  2018 ..
-rw-r-----  1 ggsuser oinstall 199999285 Mar  8  2018 .fuse_hidden001a3c47000001c5
-rw-r-----  1 ggsuser oinstall 199999896 May 23 00:23 .fuse_hidden000002b500000001
-rw-r-----  1 ggsuser oinstall 199999934 Jul 23 06:11 aa000306798
-rw-r-----  1 ggsuser oinstall 199999194 Jul 23 06:13 aa000306799
-rw-r-----  1 ggsuser oinstall 199999387 Jul 23 06:14 aa000306800
-rw-r-----  1 ggsuser oinstall 199999122 Jul 23 06:16 aa000306801
-rw-r-----  1 ggsuser oinstall 199999172 Jul 23 06:19 aa000306802
-rw-r-----  1 ggsuser oinstall 199999288 Jul 23 06:19 aa000306803

+ ls -alrt /DBFS/ggdata/dirdat/
+ tail
-rw-r-----  1 ggsuser oinstall 199999671 Jul 24 07:59 aa000307764
-rw-r-----  1 ggsuser oinstall 199999645 Jul 24 08:01 aa000307765
-rw-r-----  1 ggsuser oinstall 199998829 Jul 24 08:02 aa000307766
-rw-r-----  1 ggsuser oinstall 199998895 Jul 24 08:04 aa000307767
-rw-r-----  1 ggsuser oinstall 199999655 Jul 24 08:05 aa000307768
-rw-r-----  1 ggsuser oinstall 199999930 Jul 24 08:07 aa000307769
-rw-r-----  1 ggsuser oinstall 199999761 Jul 24 08:09 aa000307770
-rw-r-----  1 ggsuser oinstall 199999421 Jul 24 08:11 aa000307771
-rw-r-----  1 ggsuser oinstall   7109055 Jul 24 08:11 aa000307772

+ ls /ACFS/ggdata/dirdat/
+ wc -l
986

+ ls -alrt /ACFS/ggdata/dirdat/
+ head
total 194779104
drwxrwxrwx 24 root    root          8192 Feb  9  2018 ..
-rw-r-----  1 ggsuser oinstall 199999285 Mar  8  2018 .fuse_hidden001a3c47000001c5
-rw-r-----  1 ggsuser oinstall 199999896 May 23 00:23 .fuse_hidden000002b500000001
-rw-r-----  1 ggsuser oinstall 199998453 Jul 23 04:55 aa000306742
-rw-r-----  1 ggsuser oinstall 199999657 Jul 23 04:56 aa000306743
-rw-r-----  1 ggsuser oinstall 199999227 Jul 23 04:57 aa000306744
-rw-r-----  1 ggsuser oinstall 199999389 Jul 23 04:59 aa000306745
-rw-r-----  1 ggsuser oinstall 199999392 Jul 23 05:00 aa000306746
-rw-r-----  1 ggsuser oinstall 199999116 Jul 23 05:01 aa000306747

+ ls -alrt /ACFS/ggdata/dirdat/
+ tail
-rw-r-----  1 ggsuser oinstall 199999876 Jul 24 06:48 aa000307719
-rw-r-----  1 ggsuser oinstall 199999751 Jul 24 06:50 aa000307720
-rw-r-----  1 ggsuser oinstall 199999918 Jul 24 06:51 aa000307721
-rw-r-----  1 ggsuser oinstall 199999404 Jul 24 06:52 aa000307722
-rw-r-----  1 ggsuser oinstall 199999964 Jul 24 06:54 aa000307723
-rw-r-----  1 ggsuser oinstall 199999384 Jul 24 06:56 aa000307724
-rw-r-----  1 ggsuser oinstall 199999283 Jul 24 06:57 aa000307725
-rw-r-----  1 ggsuser oinstall 199998033 Jul 24 06:59 aa000307726
-rw-r-----  1 ggsuser oinstall 199999199 Jul 24 07:00 aa000307727

July 23, 2019

Check Cluster Resources Where Target != State

Filed under: 12.2,RAC — mdinh @ 3:32 pm

Current version.

[oracle@racnode-dc2-1 patch]$ cat /etc/oratab
#Backup file is  /u01/app/12.2.0.1/grid/srvm/admin/oratab.bak.racnode-dc2-1 line added by Agent
-MGMTDB:/u01/app/12.2.0.1/grid:N
hawk1:/u01/app/oracle/12.2.0.1/db1:N
+ASM1:/u01/app/12.2.0.1/grid:N          # line added by Agent
[oracle@racnode-dc2-1 patch]$

Kill the database instance process.

[oracle@racnode-dc2-1 patch]$ ps -ef|grep pmon
oracle   13542     1  0 16:09 ?        00:00:00 asm_pmon_+ASM1
oracle   27663     1  0 16:39 ?        00:00:00 ora_pmon_hawk1
oracle   29401 18930  0 16:40 pts/0    00:00:00 grep --color=auto pmon
[oracle@racnode-dc2-1 patch]$
[oracle@racnode-dc2-1 patch]$ kill -9 27663
[oracle@racnode-dc2-1 patch]$

Check cluster resource – close but no cigar (false positive)

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.proxy_advm
               OFFLINE OFFLINE      racnode-dc2-1            STABLE
               OFFLINE OFFLINE      racnode-dc2-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      3        OFFLINE OFFLINE                               STABLE
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$

Check cluster resource – BINGO!

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$

Another example:

[oracle@racnode-dc2-1 ~]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
ora.DATA.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
ora.FRA.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 ~]$

Learned something here.

[oracle@racnode-dc2-1 ~]$ crsctl stat res -v -w 'TYPE = ora.database.type'
NAME=ora.hawk.db
TYPE=ora.database.type
LAST_SERVER=racnode-dc2-1
STATE=ONLINE on racnode-dc2-1
TARGET=ONLINE
CARDINALITY_ID=1
OXR_SECTION=0
RESTART_COUNT=0
***** FAILURE_COUNT=1 
***** FAILURE_HISTORY=1564015051:racnode-dc2-1
ID=ora.hawk.db 1 1
INCARNATION=4
***** LAST_RESTART=07/25/2019 02:39:38
***** LAST_STATE_CHANGE=07/25/2019 02:39:51
STATE_DETAILS=Open,HOME=/u01/app/oracle/12.2.0.1/db1
INTERNAL_STATE=STABLE
TARGET_SERVER=racnode-dc2-1
RESOURCE_GROUP=
INSTANCE_COUNT=2

LAST_SERVER=racnode-dc2-2
STATE=ONLINE on racnode-dc2-2
TARGET=ONLINE
CARDINALITY_ID=2
OXR_SECTION=0
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.hawk.db 2 1
INCARNATION=1
LAST_RESTART=07/25/2019 02:21:45
LAST_STATE_CHANGE=07/25/2019 02:21:45
STATE_DETAILS=Open,HOME=/u01/app/oracle/12.2.0.1/db1
INTERNAL_STATE=STABLE
TARGET_SERVER=racnode-dc2-2
RESOURCE_GROUP=
INSTANCE_COUNT=2

[oracle@racnode-dc2-1 ~]$
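
To pull out just the attributes of interest (the ones flagged with ***** above), the same command can be filtered with grep; a quick sketch:

$ crsctl stat res -v -w 'TYPE = ora.database.type' | \
    grep -E '^(NAME|LAST_SERVER|FAILURE_COUNT|FAILURE_HISTORY|LAST_RESTART|LAST_STATE_CHANGE)='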

Check cluster resource – sanity check.

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
[oracle@racnode-dc2-1 patch]$
[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w 'TYPE = ora.database.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc2-1            Open,HOME=/u01/app/o
                                                             racle/12.2.0.1/db1,S
                                                             TABLE
      2        ONLINE  ONLINE       racnode-dc2-2            Open,HOME=/u01/app/o
                                                             racle/12.2.0.1/db1,S
                                                             TABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$
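
A minimal sketch of how the working filter could be wrapped into a scheduled check that only pages when something is actually wrong (the mail address and subject are assumptions, not part of any existing script):

#!/bin/bash
# Alert only when a resource's TARGET is ONLINE but its STATE is not.
OUT=$(crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)')
if [ -n "$OUT" ]; then
    echo "$OUT" | mailx -s "Cluster resources where TARGET != STATE on $(hostname)" dba@example.com
fi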

July 22, 2019

Resize ACFS Volume

Filed under: 12c,ACFS — mdinh @ 6:14 pm

Current ACFS filesystem size is 299G.

Filesystem             Size  Used Avail Use% Mounted on
/dev/asm/acfs_vol-177  299G  2.6G  248G   2% /ggdata02

Free_MB is 872, which causes paging due to insufficient free space in ASM disk group ACFS_DATA.

$ asmcmd lsdg -g ACFS_DATA
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  EXTERN  N         512   4096  4194304    307184      872                0             872              0             N  ACFS_DATA/
      2  MOUNTED  EXTERN  N         512   4096  4194304    307184      874                0             872              0             N  ACFS_DATA/
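
A minimal sketch of the kind of space check that would generate such a page, assuming a hypothetical threshold; the Free_MB column position is taken from the lsdg -g output above and may differ between versions:

#!/bin/bash
# Page when Free_MB for diskgroup ACFS_DATA drops below a threshold (value is an assumption).
THRESHOLD_MB=1024
# Free_MB is column 9 of 'asmcmd lsdg -g'; take the lowest value across instances.
FREE_MB=$(asmcmd lsdg -g --suppressheader ACFS_DATA | awk '{print $9}' | sort -n | head -1)
if [ "$FREE_MB" -lt "$THRESHOLD_MB" ]; then
    echo "ACFS_DATA Free_MB=$FREE_MB (threshold ${THRESHOLD_MB}MB)" | mailx -s "ASM diskgroup ACFS_DATA low on free space" dba@example.com
fi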

Review attributes for ASM Disk Group ACFS_DATA.

$ asmcmd lsattr -l -G ACFS_DATA
Name                     Value       
access_control.enabled   FALSE       
access_control.umask     066         
au_size                  4194304     
cell.smart_scan_capable  FALSE       
compatible.advm          12.1.0.0.0  
compatible.asm           12.1.0.0.0  
compatible.rdbms         12.1.0.0.0  
content.check            FALSE       
content.type             data        
disk_repair_time         3.6h        
failgroup_repair_time    24.0h       
idp.boundary             auto        
idp.type                 dynamic     
phys_meta_replicated     true        
sector_size              512         
thin_provisioned         FALSE       

Resize /ggdata02 to 250G.

$ acfsutil size 250G /ggdata02
acfsutil size: new file system size: 268435456000 (256000MB)

Review results.

$ asmcmd lsdg -g ACFS_DATA
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/
      2  MOUNTED  EXTERN  N         512   4096  4194304    307184    51044                0           51044              0             N  ACFS_DATA/


$ df -h /ggdata02
Filesystem             Size  Used Avail Use% Mounted on
/dev/asm/acfs_vol-177  250G  2.6G  248G   2% /ggdata02

$ asmcmd volinfo --all
Diskgroup Name: ACFS_DATA

	 Volume Name: ACFS_VOL
	 Volume Device: /dev/asm/acfs_vol-177
	 State: ENABLED
	 Size (MB): 256000
	 Resize Unit (MB): 512
	 Redundancy: UNPROT
	 Stripe Columns: 8
	 Stripe Width (K): 1024
	 Usage: ACFS
	 Mountpath: /ggdata02

July 15, 2019

Delete MGMTDB and MGMTLSNR from OEM using emcli

Filed under: emcli — mdinh @ 11:22 pm

Per Doc ID 1933649.1, MGMTDB and MGMTLSNR should not be monitored.

$ grep oms /etc/oratab 
oms:/u01/middleware/13.2.0:N

$ . oraenv <<< oms

$ emcli login -username=SYSMAN
Enter password : 
Login successful

$ emcli sync
Synchronized successfully

$ emcli get_targets -targets=oracle_listener -format=name:csv|grep -i MGMT
1,Up,oracle_listener,MGMTLSNR_host01

$ emcli delete_target -name="MGMTLSNR_host01" -type="oracle_listener" 
Target "MGMTLSNR_host01:oracle_listener" deleted successfully

$ emcli sync
$ emcli get_targets|grep -i MGMT

Note: MGMTDB was not monitored and can be deleted as follows:

$ emcli get_targets -targets=oracle_database -format=name:csv|grep -i MGMT
$ emcli delete_target -name="MGMTDB_host01" -type="oracle_database" 
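
The same two steps can be combined into a small loop that removes every remaining MGMT* listener and database target in one pass; a sketch, assuming the CSV format and target naming shown above:

# Remove all MGMTDB/MGMTLSNR targets still registered in OEM.
for t in oracle_listener oracle_database; do
    emcli get_targets -targets=$t -format=name:csv | grep -i MGMT | awk -F',' '{print $4}' |
    while read name; do
        emcli delete_target -name="$name" -type="$t"
    done
done
emcli sync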

The problem with monitoring MGMTDB and MGMTLSNR is getting a silly page whenever they are relocated to another host.

Host=host01
Target type=Listener 
Target name=MGMTLSNR_host01
Categories=Availability 
Message=The listener is down:

We are dealing with the same issue for the SCAN listeners and have not reached an agreement to have them deleted, although I and a few others think they should not be monitored.
Unfortunately, there is no official Oracle documentation for this.

Here’s a typical page sent when all SCAN listeners are running on only one node.

Host=host01
Target type=Listener
Target name=LISTENER_SCAN2_cluster
Categories=Availability
Message=The listener is down: 

$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node02
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node02
