Thinking Out Loud

December 29, 2012

Poor Performance Compression Bug

Filed under: 11g,performance — mdinh @ 5:25 am

I was really puzzled why performance for compressed tables was horrendous.

My knowledge of the environment enabled me to form a hypothesis about the cause of the performance degradation: db_block_checking.

I agree with Martin Berger that the finding was largely indirect, and I am not sure how someone else would have been able to identify the issue.

Even Oracle Support suggested that the only hint was increased CPU usage. However, many factors can drive CPU up, e.g., nested loop joins.

The database version is 11.2.0.3, running on SPARC Solaris 10 (64-bit).

Setup:

FACT table COMPRESS FOR OLTP with DEFAULT PCTFREE, INITTRANS settings (53 cols NOT NULL)
PK (1 col)
AK (3 col)
IX (1 col)
33,179,386 rows

STAGING table with DEFAULT PCTFREE, INITTRANS settings (52 cols, 1 col NOT NULL, remaining NULL)
No index/constraints
911,861 rows

Pseudo Code:

MERGE INTO FACT f USING STAGING s ON (f.KEY=s.KEY)
WHEN MATCHED THEN UPDATE SET … 52 columns
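For concreteness, the statement looks roughly like the sketch below. The column names beyond KEY are hypothetical placeholders; the original updates 52 columns.

```sql
-- Hedged sketch of the MERGE; col01/col02 stand in for the 52 updated columns.
MERGE INTO fact f
USING staging s
ON (f.key = s.key)
WHEN MATCHED THEN UPDATE SET
  f.col01 = s.col01,
  f.col02 = s.col02
  -- ... through the remaining updated columns
;
```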

SQL Script:

set timing on serveroutput on
spool testcomp.log

exec runstats_pkg.rs_start;
commit;

alter session force parallel dml parallel 8;
@/oracle/sqlt/input/sample/oltp_false.sql
alter session disable parallel dml;

exec runstats_pkg.rs_middle;
commit;

alter session force parallel dml parallel 8;
@/oracle/sqlt/input/sample/oltp_medium.sql
alter session disable parallel dml;

exec runstats_pkg.rs_stop(0,'WORKLOAD');
exec runstats_pkg.rs_stop(1000000);

alter system set db_block_checking=FALSE;
spool off

SQL Execution Snippet:

SQL> alter system set db_block_checking=FALSE;
System altered.
Elapsed: 00:00:00.01

SQL> set echo off
911861 rows merged.
Elapsed: 00:01:52.30
Commit complete.

SQL> alter system set db_block_checking=MEDIUM;
System altered.
Elapsed: 00:00:00.01

SQL> set echo off
911861 rows merged.
Elapsed: 00:23:44.38
Commit complete.

Run1 ran in 11483 hsecs  - db_block_checking=FALSE
Run2 ran in 142483 hsecs - db_block_checking=MEDIUM
run 1 ran in 8.06% of the time

Name                                                Run1            Run2        Diff
STAT...physical reads direct                     517,192         517,208              16
STAT...physical reads                            522,167         517,224          -4,943
STAT...physical reads cache                        4,975              16          -4,959
STAT...Elapsed Time                               11,487         142,487         131,000
STAT...recursive cpu usage                        34,909         498,166         463,257
STAT...CPU used by this session                   35,572         498,920         463,348
STAT...redo size                             889,907,464     892,430,032       2,522,568
STAT...session logical reads                   3,480,123       6,557,752       3,077,629

Run1 latches total versus runs -- difference and pct
Run1            Run2            Diff       Pct
20,508,977      68,940,229      48,431,252     29.75%
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.08

Run1 ran in 11483 hsecs  - db_block_checking=FALSE
Run2 ran in 142491 hsecs - db_block_checking=MEDIUM
run 1 ran in 8.06% of the time

Name                                                Run1            Run2        Diff
STAT...session uga memory max                    130,976       1,293,104       1,162,128
STAT...in call idle wait time                    115,158       1,409,641       1,294,483
LATCH.simulator hash latch                       247,055       1,556,432       1,309,377
LATCH.object queue header operation              205,535       1,766,312       1,560,777
STAT...session pga memory                        851,968        -720,896      -1,572,864
LATCH.transaction allocation                     108,304       1,936,419       1,828,115
STAT...DB time                                   189,534       2,196,702       2,007,168
STAT...redo size                             889,907,464     892,433,832       2,526,368
LATCH.checkpoint queue latch                      90,178       2,757,576       2,667,398
STAT...session logical reads                   3,480,123       6,557,897       3,077,774
STAT...consistent gets                           838,019       3,926,800       3,088,781
STAT...consistent gets from cache                 20,153       3,109,037       3,088,884
STAT...consistent gets - examination              11,341       3,101,011       3,089,670
STAT...data blocks consistent reads -
       undo records applied                            0       3,091,383       3,091,383
STAT...Effective IO time                       8,124,867       4,359,804      -3,765,063
LATCH.DML lock allocation                        450,593       5,815,377       5,364,784
STAT...file io wait time                      51,133,453      44,027,178      -7,106,275
LATCH.cache buffers chains                    14,751,617      45,136,022      30,384,405
STAT...physical read bytes                 8,555,184,128   8,474,198,016     -80,986,112
STAT...physical read total bytes           8,555,184,128   8,474,198,016     -80,986,112
STAT...cell physical IO interconnect byt   8,555,184,128   8,474,198,016     -80,986,112
STAT...logical read bytes from cache      43,618,418,688  94,044,405,760  50,425,987,072

Run1 latches total versus runs -- difference and pct
Run1            Run2            Diff       Pct
41,017,954     137,883,083      96,865,129     29.75%

Run1 - blk_checking_false
Disk Reads 522,167
Buffer Gets 3,479,799  
98% CPU Wait 1:60 (mm:ss)

Run2 - blk_checking_medium
Disk Reads 517,116
Buffer Gets 6,547,223
90% CPU Wait 21:08 (mm:ss)

The test shows that db_block_checking combined with compression is bad for performance due to a bug.

Unfortunately, from all the metrics I have looked at, there is no direct indication of the cause.

I am open to ideas as to what to look for or what analysis can be performed.

I have used XTRACT, XECUTE, COMPARE, and TRCANLZR from SQLTXPLAIN (SQLT) 11.4.5.2.

References:

Notes with explanations for OLTP compression, db_block_checking, db_lost_write_protect.

Best Practices for Corruption Detection, Prevention, and Automatic Repair – in a Data Guard Configuration [ID 1302539.1]
Master Note for OLTP Compression [ID 1223705.1]

Bug 12711341: EXTREME SLOWNESS WITH DB_ULTRA_SAFE ON COMPRESSED TABLES

The customer is not happy with the performance of DML after enabling the COMPRESS option and setting DB_BLOCK_CHECKING = FULL.

Bug 8931846: DB_BLOCK_CHECKING CAUSES LARGE PERFORMANCE HIT WITH COMPRESSED TABLE
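For anyone reproducing this, a quick sanity check before and after changing the parameter — a minimal sketch, assuming the standard DBA views and the documented parameter values (FALSE, LOW, MEDIUM, FULL):

```sql
-- Current block checking level (FALSE | LOW | MEDIUM | FULL)
SELECT value FROM v$parameter WHERE name = 'db_block_checking';

-- Which tables are OLTP-compressed
SELECT table_name, compression, compress_for
FROM   dba_tables
WHERE  compress_for = 'OLTP';

-- Change at the system level; the SCOPE you want depends on whether
-- the change should persist across restarts
ALTER SYSTEM SET db_block_checking = FALSE SCOPE = BOTH;
```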

December 18, 2012

Shell script – checking for NULL & RMAN backup

Filed under: 11g,RMAN,shell scripting,solaris — mdinh @ 2:13 pm

The existing backup strategy for archivelog is Disk-to-Disk-to-Tape (D2D2T), using RMAN for disk and NetBackup for tape.

The existing backup is deleted after it has been successfully backed up to tape.

I don't know enough about NetBackup to say whether it can perform the deletion itself; that's a story for another day.

Here is the scenario: the file exists but contains NULL data. Will the RMAN backup be deleted?

File exists:

> ll /home/oracle/check_backup/check_bkup.out
-rw-r--r--   1 mdinh    dba            0 Dec 15 13:15 /home/oracle/check_backup/check_bkup.out

NULL data: the file is ZERO bytes.

Script:

> cat t.sh
#!/bin/sh

DN=`dirname $0`
BN=`basename $0`
SID=${1:?"---> USAGE: $DN/$BN <ORACLE_SID>"}

date
FILE=/home/oracle/check_backup/check_bkup.out
LAST=`head $FILE | grep Archive | head -1 | awk '{print $1}'`

# Exit unless the ora_smon process for this SID is running
if [ "`pgrep -lf ora_smon_${SID} | awk -F_ '{print $3}'`" != "$SID" ]; then
  exit 1
# Exit if another copy of this script is already running
elif [ `pgrep -lf "$BN $SID" | wc -l` -gt 2 ]; then
  exit 1
elif [ ! -f "$FILE" ]; then
  mailx -s "Missing $FILE" maile@me.com < /dev/null
  exit 1
fi

echo ---------- NO NULL check ----------
echo "delete noprompt backup completed before \"to_date('${LAST}','mm/dd/yyyy')\";"

# Only emit the delete when $LAST is not null (non-empty)
if [ ! -z "$LAST" ]; then
  echo "delete noprompt backup completed before \"to_date('${LAST}','mm/dd/yyyy')\";"
fi
exit

Execution:

> ./t.sh db01
Tue Dec 18 05:31:20 PST 2012
---------- NO NULL check ----------
delete noprompt backup completed before "to_date('','mm/dd/yyyy')";

Curiosity: What will happen when delete noprompt backup is executed? Any guess?
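Before looking at the RMAN output, the empty-string expansion can be demonstrated in plain shell. This sketch (variable names chosen for illustration) shows why a NULL $LAST still produces a syntactically quoted, but empty, date:

```shell
#!/bin/sh
# When LAST is empty, the to_date argument silently becomes the empty string.
LAST=""
CMD="delete noprompt backup completed before \"to_date('${LAST}','mm/dd/yyyy')\";"
echo "$CMD"

# Guarding with a non-empty test prevents emitting the bad command.
if [ -n "$LAST" ]; then
  echo "$CMD"
else
  echo "LAST is empty -- skipping delete" >&2
fi
```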

RMAN> list backup;
using target database control file instead of recovery catalog
specification does not match any backup in the repository

RMAN> backup archivelog from sequence 157;

Starting backup at 18-DEC-2012 05:55:20
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=102 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=157 RECID=271 STAMP=802331584
input archived log thread=1 sequence=158 RECID=272 STAMP=802331720
channel ORA_DISK_1: starting piece 1 at 18-DEC-2012 05:55:21
channel ORA_DISK_1: finished piece 1 at 18-DEC-2012 05:55:25
piece handle=/oracle/oradata/flashrecovery/SAN_DBTEST/backupset/2012_12_18/o1_mf_annnn_TAG20121218T055521_8f0xgb07_.bkp tag=TAG20121218T055521 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
Finished backup at 18-DEC-2012 05:55:25

RMAN> list backup summary;
List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
1       B  A  A DISK        18-DEC-2012 05:55:24 1       1       NO         TAG20121218T055521

RMAN> delete noprompt backup completed before "to_date('','mm/dd/yyyy')";
using channel ORA_DISK_1

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
1       1       1   1   AVAILABLE   DISK        /oracle/oradata/flashrecovery/SAN_DBTEST/backupset/2012_12_18/o1_mf_annnn_TAG20121218T055521_8f0xgb07_.bkp
deleted backup piece
backup piece handle=/oracle/oradata/flashrecovery/SAN_DBTEST/backupset/2012_12_18/o1_mf_annnn_TAG20121218T055521_8f0xgb07_.bkp RECID=1 STAMP=802331722
Deleted 1 objects

RMAN>

December 15, 2012

Inline View Performance Over SORT MERGE JOIN

Filed under: Uncategorized — mdinh @ 6:08 pm
I am ecstatic the formatting does not look like garbage.

The reporting was designed to allow users to select a pre-calculated date, which is refreshed each day against the current date using a materialized view.

Example:
Interval: DAILY
Date:     3 years from Current Date
SQL> SELECT t1.day_id FROM DYNAMIC_TIME_VIEW t1 WHERE t1.TIME_POINT_NAME='DAILY-CURR_DT_MINUS_36_MONTHS';
DAY_ID
---------
15-DEC-09

Interval: DAILY
Date:     Current Date
SQL> SELECT t2.day_id FROM DYNAMIC_TIME_VIEW t2 WHERE t2.TIME_POINT_NAME='DAILY-CURR_DT';
DAY_ID
---------
15-DEC-12

SQL>

Users would select from an LOV the two dates that will be used to filter the data.
Here is the explain plan with the table join. Performance was horrendous with the SORT MERGE JOIN.
Note the two ^SORT JOIN / ^MERGE JOIN operations and the two %SORT JOIN / %MERGE JOIN operations marked in the plan.

explain plan for 
SELECT a.dt 
FROM   bigtable a, DYNAMIC_TIME_VIEW t1, DYNAMIC_TIME_VIEW t2
WHERE  t1.TIME_POINT_NAME = 'DAILY-CURR_DT_MINUS_36_MONTHS'
AND    t2.TIME_POINT_NAME = 'DAILY-CURR_DT' 
AND    a.dt BETWEEN t1.DAY_ID AND t2.DAY_ID
;

select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
Plan hash value: 1569934958
------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                          |  3432M|    83G| 75712  (50)| 00:00:03 |
|   1 |  %MERGE JOIN                          |                          |  3432M|    83G| 75712  (50)| 00:00:03 |
|   2 |   %SORT JOIN                          |                          |    80M|  1300M| 39527   (4)| 00:00:02 |
|   3 |    ^MERGE JOIN                        |                          |    80M|  1300M| 39527   (4)| 00:00:02 |
|   4 |     ^SORT JOIN                        |                          |    36M|   278M| 39048   (3)| 00:00:02 |
|   5 |       INDEX FULL SCAN                 | INK_bigtable_dt          |    36M|   278M| 39048   (3)| 00:00:02 |
|*  6 |     ^SORT JOIN                        |                          |    45 |   405 |     7  (15)| 00:00:01 |
|   7 |       VIEW                            | DYNAMIC_TIME_VIEW        |    45 |   405 |     6   (0)| 00:00:01 |
|   8 |        UNION-ALL                      |                          |       |       |            |          |
|   9 |         MAT_VIEW ACCESS BY INDEX ROWID| DYNAMIC_TIME_POINT_MV    |     1 |    39 |     1   (0)| 00:00:01 |
|* 10 |          INDEX UNIQUE SCAN            | UX$DYNAMIC_TIME_POINT_MV |     1 |       |     0   (0)| 00:00:01 |
|* 11 |         INDEX FAST FULL SCAN          | PK_DAY_FORMATTED_MVIEW   |    44 |   352 |     5   (0)| 00:00:01 |
|* 12 |   %SORT JOIN                          |                          |    45 |   405 |     7  (15)| 00:00:01 |
|  13 |     VIEW                              | DYNAMIC_TIME_VIEW        |    45 |   405 |     6   (0)| 00:00:01 |
|  14 |      UNION-ALL                        |                          |       |       |            |          |
|  15 |       MAT_VIEW ACCESS BY INDEX ROWID  | DYNAMIC_TIME_POINT_MV    |     1 |    39 |     1   (0)| 00:00:01 |
|* 16 |        INDEX UNIQUE SCAN              | UX$DYNAMIC_TIME_POINT_MV |     1 |       |     0   (0)| 00:00:01 |
|* 17 |       INDEX FAST FULL SCAN            | PK_DAY_FORMATTED_MVIEW   |    44 |   352 |     5   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access(INTERNAL_FUNCTION("A"."dt")>=INTERNAL_FUNCTION("T1"."DAY_ID"))
       filter(INTERNAL_FUNCTION("A"."dt")>=INTERNAL_FUNCTION("T1"."DAY_ID"))
  10 - access("TIME_POINT_NAME"='DAILY-CURR_DT_MINUS_36_MONTHS')
  11 - filter(TO_CHAR(INTERNAL_FUNCTION("DAY_ID"),'mm/dd/yyyy')='DAILY-CURR_DT_MINUS_36_MONTHS')
  12 - access("A"."dt"<="T2"."DAY_ID")
       filter("A"."dt"<="T2"."DAY_ID")   
  16 - access("TIME_POINT_NAME"='DAILY-CURR_DT')   
  17 - filter(TO_CHAR(INTERNAL_FUNCTION("DAY_ID"),'mm/dd/yyyy')='DAILY-CURR_DT') 

36 rows selected. 

Here is the explain plan with INLINE VIEW.

explain plan for  
SELECT a.dt  
FROM   bigtable a 
WHERE  dt  
BETWEEN 
(SELECT t1.day_id FROM DYNAMIC_TIME_VIEW t1 WHERE t1.TIME_POINT_NAME='DAILY-CURR_DT_MINUS_36_MONTHS') 
AND     
(SELECT t2.day_id FROM DYNAMIC_TIME_VIEW t2 WHERE t2.TIME_POINT_NAME = 'DAILY-CURR_DT')
; 

select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------
Plan hash value: 2375256111
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                          | 91238 |   712K|   193   (3)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN                 | INK_bigtable_dt          | 91238 |   712K|   181   (3)| 00:00:01 |
|   2 |   VIEW                            | DYNAMIC_TIME_VIEW        |    45 |  1395 |     6   (0)| 00:00:01 |
|   3 |    UNION-ALL                      |                          |       |       |            |          |
|   4 |     MAT_VIEW ACCESS BY INDEX ROWID| DYNAMIC_TIME_POINT_MV    |     1 |    39 |     1   (0)| 00:00:01 |
|*  5 |      INDEX UNIQUE SCAN            | UX$DYNAMIC_TIME_POINT_MV |     1 |       |     0   (0)| 00:00:01 |
|*  6 |     INDEX FAST FULL SCAN          | PK_DAY_FORMATTED_MVIEW   |    44 |   352 |     5   (0)| 00:00:01 |
|   7 |   VIEW                            | DYNAMIC_TIME_VIEW        |    45 |   405 |     6   (0)| 00:00:01 |
|   8 |    UNION-ALL                      |                          |       |       |            |          |
|   9 |     MAT_VIEW ACCESS BY INDEX ROWID| DYNAMIC_TIME_POINT_MV    |     1 |    39 |     1   (0)| 00:00:01 |
|* 10 |      INDEX UNIQUE SCAN            | UX$DYNAMIC_TIME_POINT_MV |     1 |       |     0   (0)| 00:00:01 |
|* 11 |     INDEX FAST FULL SCAN          | PK_DAY_FORMATTED_MVIEW   |    44 |   352 |     5   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("dt">= (SELECT "T1"."DAY_ID" FROM  ( (SELECT "TIME_POINT_NAME"
            "TIME_POINT_NAME","DAY_ID" "DAY_ID" FROM "DYNAMIC_TIME_POINT_MV" "DYNAMIC_TIME_POINT_MV"
             WHERE "TIME_POINT_NAME"='DAILY-CURR_DT_MINUS_36_MONTHS') UNION ALL  (SELECT
             TO_CHAR(INTERNAL_FUNCTION("DAY_ID"),'mm/dd/yyyy') "TIME_POINT_NAME","DAY_ID" "DAY_ID" FROM
             "DAY_FORMATTED_MVIEW" "DAY_FORMATTED_MVIEW" WHERE
             TO_CHAR(INTERNAL_FUNCTION("DAY_ID"),'mm/dd/yyyy')='DAILY-CURR_DT_MINUS_36_MONTHS')) "T1") AND "dt"<= 
             (SELECT "T2"."DAY_ID" FROM  ( (SELECT "DAY_ID" "DAY_ID" FROM "DYNAMIC_TIME_POINT_MV"     
             "DYNAMIC_TIME_POINT_MV" WHERE "TIME_POINT_NAME"='DAILY-CURR_DT') UNION ALL  (SELECT "DAY_ID" "DAY_ID"
             FROM "DAY_FORMATTED_MVIEW" "DAY_FORMATTED_MVIEW" WHERE
             TO_CHAR(INTERNAL_FUNCTION("DAY_ID"),'mm/dd/yyyy')='DAILY-CURR_DT')) "T2"))    
   5 - access("TIME_POINT_NAME"='DAILY-CURR_DT_MINUS_36_MONTHS')    
   6 - filter(TO_CHAR(INTERNAL_FUNCTION("DAY_ID"),'mm/dd/yyyy')='DAILY-CURR_DT_MINUS_36_MONTHS')   
  10 - access("TIME_POINT_NAME"='DAILY-CURR_DT')   
  11 - filter(TO_CHAR(INTERNAL_FUNCTION("DAY_ID"),'mm/dd/yyyy')='DAILY-CURR_DT')

36 rows selected.

Notice how Oracle was able to access bigtable using the range derived from the inline view.

December 14, 2012

Motivation

Filed under: Uncategorized — mdinh @ 2:30 pm

On Thursday, a software engineer came to me to discuss issues with SQL and data analysis. My suggestion: "I think PIVOT and UNPIVOT might do the job for what you are looking for." He went back to his desk and tried it. Minutes later, he came back excited, letting me know how great it was and that he had never known about it. Minutes after that, he returned ecstatic about the progress he was making; he wanted to complete the project but was going on vacation. I joked with him, "Bring your laptop to the ski resort." :=)

Sharing your success and excitement motivates me to help you.

December 8, 2012

Bad header found during kcvxfh for active db duplication

Filed under: 11g,Dataguard,RMAN — mdinh @ 4:19 pm

While performing active database duplication to create a physical standby database using RMAN, I encountered the error: Bad header found during kcvxfh v8.

I thought to myself, this does not look good. Is there corruption in the primary database? For good measure, I wanted to verify the primary using:

CONFIGURE CHANNEL DEVICE TYPE DISK MAXOPENFILES 1;
CONFIGURE DEVICE TYPE DISK PARALLELISM 8 BACKUP TYPE TO BACKUPSET;
backup validate check logical database filesperset 1;

No corruption. Sigh of relief.
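After BACKUP VALIDATE completes, any corrupt blocks it found are recorded in v$database_block_corruption. A minimal check, run from SQL*Plus on the primary:

```sql
-- Should return no rows when BACKUP VALIDATE found nothing
SELECT file#, block#, blocks, corruption_type
FROM   v$database_block_corruption;
```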

Further research turned up the following note:

Corrupt Block, Bad Header Found Message during Rman Restore On Raw device [ID 1399327.1]

Here is an example of the output from the Note:

Hex dump of (file 37, block 1) in trace file /u01/oracle/product/diag/rdbms/test/trace/test_ora_1155088.trc
Corrupt block relative dba: 0x09400001 (file 37, block 1)
Bad header found during kcvxfh v8
Data in bad block:
type: 0 format: 2 rdba: 0x09400001
last change scn: 0x0000.00000000 seq: 0x1 flg: 0x05
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0x00000001
check value in block header: 0x8e7
computed block checksum: 0x0
Reading datafile '+DATA/test//datafile/apps_ts_tx_data.295.741794537' for corruption at rdba: 0x09400001 (file 37, block 1)
Reread (file 37, block 1) found same corrupt data

Here is an example from alert log output:

Thu Dec 06 14:24:04 2012
db_recovery_file_dest_size of 102400 MB is 0.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Thu Dec 06 14:25:27 2012
Hex dump of (file 42, block 1) in trace file /u01/app/oracle/diag/rdbms/db07/db07/trace/db07_ora_21376.trc
Corrupt block relative dba: 0x0a800001 (file 42, block 1)
Bad header found during kcvxfh v8
Data in bad block:
 type: 0 format: 2 rdba: 0x0a800001
 last change scn: 0x0000.00000000 seq: 0x1 flg: 0x05
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x00000001
 check value in block header: 0xb27
 computed block checksum: 0x0
Reading datafile '/oracle/oradata/db01/data01.dbf' for corruption at rdba: 0x0a800001 (file 42, block 1)
Reread (file 42, block 1) found same corrupt data (no logical check)
Hex dump of (file 49, block 1) in trace file /u01/app/oracle/diag/rdbms/db01/db01/trace/db01_ora_21376.trc
Corrupt block relative dba: 0x0c400001 (file 49, block 1)
Bad header found during kcvxfh v8
Data in bad block:
 type: 0 format: 2 rdba: 0x0c400001
 last change scn: 0x0000.00000000 seq: 0x1 flg: 0x05
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x00000001
 check value in block header: 0xde7
 computed block checksum: 0x0
Reading datafile '/oracle/oradata/db01/index01.dbf' for corruption at rdba: 0x0c400001 (file 49, block 1)
Reread (file 49, block 1) found same corrupt data (no logical check)
Hex dump of (file 53, block 1) in trace file /u01/app/oracle/diag/rdbms/db01/db01/trace/db01_ora_21376.trc
Corrupt block relative dba: 0x0d400001 (file 53, block 1)
Bad header found during kcvxfh v8
Data in bad block:
 type: 0 format: 2 rdba: 0x0d400001
 last change scn: 0x0000.00000000 seq: 0x1 flg: 0x05
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x00000001
 check value in block header: 0xce7
 computed block checksum: 0x0

The database duplication completed successfully, and I opened the standby database READ ONLY to confirm.

SQL:PHYSICAL STANDBY> select open_mode, to_char(current_scn) from v$database;

OPEN_MODE            TO_CHAR(CURRENT_SCN)
-------------------- ----------------------------------------
READ ONLY            10549710065445

SQL:PHYSICAL STANDBY> conn / as sysdba
Connected.
SQL:PRIMARY> select open_mode, to_char(current_scn) from v$database;

OPEN_MODE            TO_CHAR(CURRENT_SCN)
-------------------- ----------------------------------------
READ WRITE           10549710142051

SQL:PRIMARY> @stby_status

Session altered.

DT                   DEST_ID DB_UNIQUE_N STATUS    DATABASE_MODE   RECOVERY_MODE ARCHIVED   APPLIED    APPLIED_LAG GAP_STATUS
-------------------- ------- ----------- --------- --------------- ------------- ---------- ---------- ----------- ----------
07-DEC-2012 13:22:37 2       stby_db01   VALID     OPEN_READ-ONLY  IDLE          162482     162481     1           NO GAP

SQL:PRIMARY>

Environment: Oracle 11.2.0.3 64-bit on Solaris 10 SPARC64.
