Thinking Out Loud

May 5, 2018

DBFS Nightmare

Filed under: 12c,DBFS,GoldenGate — mdinh @ 4:19 pm
====================================================================================================
How to cleanup DBFS tablespace after removing files at DBFS filesystem (Doc ID 2331565.1)	
====================================================================================================
High level overview of the DBMS_DBFS_SFS.REORGANIZEFS procedure:
----------------------------------------------------------------------------------------------------
1) Create a NEW tablespace
2) Create a temporary filesystem with dbms_dbfs_sfs.createFilesystem in the new tablespace.
3) Run dbms_dbfs_sfs.reorganizeFS -->>
   EXEC DBMS_DBFS_SFS.REORGANIZEFS(SRCSTORE=>'FS_FS1', DSTSTORE=>'FS_TMP_FS');
4) The DBFS data is now in the smaller NEW tablespace.
5) Drop the temporary filesystem with dbms_dbfs_sfs.dropFilesystem.
6) The OLD original tablespace is empty now.
----------------------------------------------------------------------------------------------------
To reuse the same tablespace again, follow the steps below (a sketch is given after this list)
----------------------------------------------------------------------------------------------------
7) Create a temporary filesystem with dbms_dbfs_sfs.createFilesystem in the OLD ORIGINAL tablespace.
8) Run dbms_dbfs_sfs.reorganizeFS
9) The DBFS data is now back in the ORIGINAL tablespace.
10) Drop the temporary filesystem with dbms_dbfs_sfs.dropFilesystem.
11) The NEW small tablespace is empty now and can be dropped.
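
A minimal sketch of steps 7-11, assuming the filesystem is FS1, the ORIGINAL tablespace is DBFS_TS, the interim NEW tablespace is DBFS_NEW, and a temporary filesystem named TMP_FS2 (all names illustrative; this mirrors the concrete STEPS further down in this post, just in the reverse direction):

cd $ORACLE_HOME/rdbms/admin
sqlplus dbfs_user
-- step 7: temporary filesystem created in the OLD ORIGINAL tablespace
@dbfs_create_filesystem DBFS_TS TMP_FS2
-- steps 8-9: move FS1 back into the original tablespace
exec dbms_dbfs_sfs.reorganizefs('FS1','TMP_FS2');
-- step 10: drop the temporary filesystem
@dbfs_drop_filesystem TMP_FS2
purge user_recyclebin;
exit
-- step 11: the NEW interim tablespace should now be empty and can be dropped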

====================================================================================================
DBFS tablespace keep growing not using expired blocks (Doc ID 2327299.1)	
====================================================================================================
----------------------------------------------------------------------------------------------------
First, perform the DBFS re-org to clean up the tablespace after removing the files at the file system level, by following the document below
----------------------------------------------------------------------------------------------------
How to cleanup DBFS tablespace after removing files at DBFS filesystem (Doc ID 2331565.1)

----------------------------------------------------------------------------------------------------
After the re-org, set the parameter below to reuse the expired blocks
----------------------------------------------------------------------------------------------------
ALTER SYSTEM SET "_ENABLE_SPACE_PREALLOCATION" = 0;
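
To check the current value of this hidden parameter before and after the change, a query along these lines can be run as SYSDBA (the X$ lookup is my own illustration, not part of the MOS note):

-- current value of the hidden parameter (illustrative query)
select a.ksppinm parameter, b.ksppstvl value
  from x$ksppi a, x$ksppcv b
 where a.indx = b.indx
   and a.ksppinm = '_enable_space_preallocation';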

====================================================================================================
ORA-1654 - DBFS FREE SPACE NOT RECLAIMED AFTER CLEARING FILES (Doc ID 1948305.1)	
====================================================================================================
----------------------------------------------------------------------------------------------------
a) Check Lob retention setting:
----------------------------------------------------------------------------------------------------
  ex. SQL> select table_name, retention_type, retention_value, retention, securefile
             from dba_lobs
            where table_name = 'T_FS1';

----------------------------------------------------------------------------------------------------  
b) Change LOB retention to none.
----------------------------------------------------------------------------------------------------
  ex. SQL> alter table DBFS_USER.T_FS1 modify lob (FILEDATA) (retention none);
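
To confirm the change, the DBA_LOBS query from step (a) can be re-run; RETENTION_TYPE should then report NONE (an assumption based on the documented view, not output captured from the note):

  ex. SQL> select table_name, column_name, retention_type, securefile
             from dba_lobs
            where table_name = 'T_FS1';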

After countless discussions, the team found GOLD.

12.1 Shrinking and Reorganizing DBFS Filesystems

STEPS:

sqlplus / as sysdba
create bigfile tablespace DBFS_NEW datafile size 8G autoextend on next 1G maxsize 70G;
grant dba to dbfs_user;
alter user dbfs_user default role all;
exit

+++ REORGANIZE DBFS FILESYSTEM FS1 IN TABLESPACE DBFS_TS INTO A NEW TABLESPACE DBFS_NEW, 
+++ USING A TEMPORARY FILESYSTEM NAMED TMP_FS, WHERE ALL FILESYSTEMS BELONG TO DATABASE USER DBFS_USER
cd $ORACLE_HOME/rdbms/admin
sqlplus dbfs_user
@dbfs_create_filesystem DBFS_NEW TMP_FS
exec dbms_dbfs_sfs.reorganizefs('FS1','TMP_FS');
@dbfs_drop_filesystem TMP_FS
purge user_recyclebin;
exit

sqlplus / as sysdba
revoke dba from dbfs_user;
select count(*) from dba_extents where tablespace_name='DBFS_TS'; 
select count(*) from dba_extents where tablespace_name='DBFS_NEW'; 
-- DROP TABLESPACE HAVING ZERO EXTENTS
-- BE CAREFUL IN CASE USER AND DBFS ARE USING SAME TABLESPACE
Example:
create user dbfs_user identified by **** default tablespace dbfs_ts quota unlimited on dbfs_ts;
@$ORACLE_HOME/rdbms/admin/dbfs_create_filesystem.sql dbfs_ts FS1
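
Once the old tablespace reports zero extents, and assuming nothing else (such as the user’s default tablespace in the example above) still lives in it, it can be dropped. A hedged sketch:

-- only after confirming zero extents and no other dependencies on the tablespace
drop tablespace DBFS_TS including contents and datafiles;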

TRUE CONFESSION:

I created an imperfect plan at first, which may well have saved me from a major catastrophe.

Create DBFS_TMP in a separate disk group, since I was concerned the existing one might not have enough storage.
Reorg from DBFS_TS to DBFS_TMP (naming the tablespace DBFS_TMP was a bad idea, since the LOB segments migrated here).
Reorg from DBFS_TMP back to DBFS_TS.
Drop tablespace DBFS_TMP.

There’s a 29G difference between old and new. How much of this space will be reclaimable, given the HWM in the data file?
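
One way to gauge that (an illustrative query of my own, not from the original notes) is to compare each datafile’s size with its highest allocated block, which bounds how far ALTER DATABASE DATAFILE ... RESIZE could shrink it:

-- rough estimate of space reclaimable below the HWM for DBFS_TS datafiles (illustrative)
select f.file_name,
       round(f.bytes/1024/1024)                                 current_mb,
       round((e.hwm_block * t.block_size)/1024/1024)            hwm_mb,
       round((f.bytes - e.hwm_block * t.block_size)/1024/1024)  reclaimable_mb
  from dba_data_files f,
       dba_tablespaces t,
       (select file_id, max(block_id + blocks - 1) hwm_block
          from dba_extents
         group by file_id) e
 where f.tablespace_name = 'DBFS_TS'
   and t.tablespace_name = f.tablespace_name
   and e.file_id = f.file_id;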

Performing reorg twice is really not an option in prod.

RESULTS:

SEGMENT_NAME                   TABLESPACE_NAME                        MB    EXTENTS
------------------------------ ------------------------------ ---------- ----------
LOB_SFS$_FST_1                 DBFS_TS                        30647.1875       1851
LOB_SFS$_FST_6225924           DBFS_NEW                         1025.125        130
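
For reference, figures like these can be pulled with a query along the following lines (assumed; the exact query used is not shown in the post):

select segment_name, tablespace_name, bytes/1024/1024 mb, extents
  from dba_segments
 where tablespace_name in ('DBFS_TS', 'DBFS_NEW')
   and segment_type = 'LOBSEGMENT'
 order by tablespace_name;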

WARNINGS:

This has only been tested in a vacuum (a non-prod environment with very low activity) – YMMV.

9 Comments »

  1. […] RAC GoldenGate DBFS implementation has been a nightmare and here is one example DBFS Nightmare […]

    Pingback by Solving DBFS UnMounting Issue | Thinking Out Loud — April 24, 2019 @ 11:34 pm | Reply

  2. Hi Michael

    I see you have used DBFS extensively with GoldenGate on RAC systems. Have you used that configuration on a RAC that hosts several databases which all use GoldenGate? I am configuring 3 databases on a two-node RAC and all of them need GoldenGate (and all databases have Data Guard). I think you need a separate GoldenGate installation for each database if you intend to use DBFS, since one mgr can only write to a dir* directory, so we need a mgr for each database. If this is true it is a pain, because we also have to deploy GoldenGate Monitor; in that case I will have to start 3 jagent processes (3 GoldenGate Monitor instances).

    Am I right?

    Thanks

    Comment by LUIS — October 6, 2019 @ 6:59 pm | Reply

    • Hi Luis,

      I have not used OGG where there are multiple databases on the same system. It will be a challenge to manage, but if designed correctly it will ease the pain.

      Keep in mind that DBFS will now be a SPOF for all 3 OGG instances. It will also be more work to relocate OGG during online patching.

      Some thoughts on design.

      Assign MGR port ranges, e.g. 7810-7819, 7820-7829, 7830-7839 (or something similar if the port range is too small)
      Create OGG homes, e.g. /local/ogg_home and /dbfs/ogg_home (DBFS is only for data). If you keep the names the same, it is easy to know who’s who.
      Create symbolic links to DBFS.

      Check my post on https://mdinh.wordpress.com/2017/11/03/goldengate-naming-convention-p01/

      HTH

      -Michael.

      Comment by mdinh — October 7, 2019 @ 12:43 pm | Reply

  3. Hi Michael

    I was hoping you have worked with OGG with more than one database in same system.

    I think it is not useful to use three DBFS filesystems and place all three in a single database, because the main reason for using DBFS is that, in a Data Guard configuration, both the database and the OGG trails and other OGG files are replicated by Data Guard. So the OGG instance files must be placed in their corresponding database and not in a single database.

    At the end of the day I think DBFS is pretty useless unless you have an OGG installation per database to be replicated.

    Comment by LUIS — October 7, 2019 @ 2:55 pm | Reply

    • Hi Luis,

      One DBFS for all GoldenGate. It will be the same case if you use ACFS or NFS.

      I don’t understand why 3 databases vs. 3 schemas in 1 DB.

      DBFS and ACFS both have pros and cons. You just need to choose which evil you want to deal with.

      Good luck.

      -Michael.

      Comment by mdinh — October 7, 2019 @ 9:20 pm | Reply

      • Hi

        I use DBFS if I have Data Guard for the database I am replicating from. If not, I would use ACFS, which is simpler.

        DBFS is useful when Data Guard is involved because you can replicate both the database and the OGG files to a remote location; the database guarantees the consistency of the OGG files. If you use ACFS replication, file consistency is not guaranteed to be in sync with the database, whereas with DBFS you get that guarantee. Or am I missing something here 😕

        Quote from https://www.oracle.com/technetwork/database/features/availability/maa-wp-gg-oracledbm-128760.pdf

        It is required that you place the DBFS tablespaces inside the database that the Oracle GoldenGate processes are connected to. For example, if an Oracle GoldenGate integrated Extract process is extracting from a database called ggdb, the DBFS tablespace would be located in the same ggdb database.

        Comment by LUIS — October 7, 2019 @ 10:14 pm

      • “It is required that you place the DBFS tablespaces inside the database that the Oracle GoldenGate processes are connected to.”

        I don’t believe that is an accurate statement because I have performed migration from DBFS to ACFS for GoldenGate on RAC using XAG integration.

        The environment consisted of 2 databases, i.e. DBFS and PROD (GoldenGate replication). After migration to ACFS, the DBFS database was dropped.

        Comment by mdinh — October 8, 2019 @ 12:23 am

      • Hi

        Of course you can have one DBFS from one database serving several OGG extracts/replicats, but as you already said, it is a SPOF.

        Also, if Data Guard is involved, it does not make sense to use one DBFS for many OGG instances: imagine you switch over or fail over the DBFS database to its standby; all OGG will stop working. So at the end of the day you should have one OGG pointing to one DBFS, and that DBFS should be in the database you are extracting data from or replicating to.

        Perhaps I have not made it clear that Data Guard is involved; if there is no Data Guard, ACFS is the best choice. DBFS is only useful when Data Guard is involved, but with the limitations I have detailed.

        Thanks

        Comment by LUIS — October 8, 2019 @ 8:32 am

