Thursday, November 28, 2013

ASM new features in Oracle 11g


ASM was first introduced in Oracle Database 10g, and many improvements have been made since then. ASM I/O does not go through the regular OS file-system layer: the RDBMS performs raw I/O against the ASM disks, which is faster than file-system I/O because there is no OS-level buffering or intervention.


SQL commands to view ASM information:


select GROUP_NUMBER,NAME,STATE,TOTAL_MB,FREE_MB from gv$asm_diskgroup;

set lines 128 pages 200
col path for a60
col header_status for a20
col name for a20
select GROUP_NUMBER,name,DISK_NUMBER,HEADER_STATUS,PATH,TOTAL_MB,FREE_MB from gv$asm_disk;

select GROUP_NUMBER,name,DISK_NUMBER,HEADER_STATUS,path,MOUNT_STATUS from v$asm_disk;

MOUNT_STATUS:
              1) MISSING  : the disk is referenced in ASM metadata, but is not found in storage.
              2) CLOSED   : the disk is present in storage, but is not being used by ASM.
              3) OPENED   : normal state.
              4) CACHED   : normal state.
              5) IGNORED  : the disk header is no longer valid.
              6) CLOSING  : ASM is in the process of closing the disk group.
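
As a quick sanity check, the distribution of disks across these states can be summarized with a query like the following (a simple sketch; run it in the ASM instance):

select mount_status, count(*) disk_count
from v$asm_disk
group by mount_status
order by mount_status;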

HEADER_STATUS:
               1) CANDIDATE   : not part of any disk group; brand new and ready to be added.
               2) MEMBER      : currently part of a disk group; cannot be added to another disk group.
               3) FORMER      : was part of a disk group, but is now ready to be reused.
               4) PROVISIONED : similar to CANDIDATE.


The PROVISIONED header status is different from the CANDIDATE header status in that PROVISIONED implies that an additional platform-specific action has been taken by an administrator to make the disk available for Automatic Storage Management.
For example, on Windows, the administrator used asmtool or asmtoolg to stamp the disk with a header, or on Linux, the administrator used ASMLib to prepare the disk for ASM.
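
As an illustration, the ASMLib preparation on Linux mentioned above looks roughly like this (a hedged sketch; the device /dev/sdc1 and disk label DISK1 are hypothetical and must match your environment):

# as root, with the oracleasm packages installed and configured
/usr/sbin/oracleasm createdisk DISK1 /dev/sdc1
/usr/sbin/oracleasm listdisks

After this, the disk should appear in v$asm_disk with HEADER_STATUS = PROVISIONED.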




A few more ASM-related commands:



alter diskgroup PROD_ADFG_TEMP dismount;  -- dismount diskgroup

ALTER DISKGROUP PROD_ADFG_TEMP DROP DISK 'PROD_ADFG_TEMP_0000','PROD_ADFG_TEMP_0003' REBALANCE POWER 1; -- to drop disk from diskgroup

drop diskgroup PROD_ADFG_TEMP including contents;   -- to drop a diskgroup

alter diskgroup PROD_DATA add disk '/dev/oracle/raw36' REBALANCE POWER 3; -- to add disk to diskgroup

alter diskgroup PROD_DATA rebalance power 2 NOWAIT; -- to change the rebalance power


select sysdate, GROUP_NUMBER, OPERATION,STATE, POWER, ACTUAL, SOFAR, EST_WORK,EST_RATE, EST_MINUTES from v$asm_operation;  -- to check ASM operations such as rebalancing

select instname, dbname, group_number,failgroup,disk_number,reads,writes from v$asm_disk_iostat order by 1,2,3,4,5,6;  -- to view the asm diskgroup I/O.



Here I am going to share the new features introduced for ASM in Oracle Database 11g.


--------------------------------------------------------------------------------------
The default parameter settings work well for ASM. The only parameters needed for an 11g ASM instance are:
ASM_DISKSTRING
ASM_DISKGROUPS
INSTANCE_TYPE


ASM Preferred Mirror Read
----------------------------------

Mirroring is done to protect against data loss. In Oracle Database 10g, ASM always reads the primary copy of a mirrored ASM data extent, regardless of which failure group is closest. If a node is allowed to read from its local (or closest) failure group, even when that copy is the secondary, performance improves because network traffic is reduced. Oracle Database 11g supports this through the preferred mirror read, configured with the new ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter, which takes a list of failure group names.
ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEA On first instance
ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEB On second instance
This can be monitored from the v$asm_disk (PREFERRED_READ column) and v$asm_disk_iostat views.
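
For example, in a two-site cluster the parameter can be set per instance with ALTER SYSTEM and then verified (a sketch; the failure group names DATA.SITEA/DATA.SITEB follow the example above, and the SIDs +ASM1/+ASM2 are assumptions):

alter system set asm_preferred_read_failure_groups='DATA.SITEA' sid='+ASM1';
alter system set asm_preferred_read_failure_groups='DATA.SITEB' sid='+ASM2';

-- verify which disks are marked for preferred read
select name, failgroup, preferred_read from v$asm_disk;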


ASM Scalability and Performance Enhancements
----------------------------------------------------
ASM uses a predetermined number of ASM data extents of each size. As soon as a file crosses a certain threshold, the next ASM data extent size is used. An ASM file can begin with 1 AU(Allocation Unit); as the file’s size increases, the ASM data extent size also increases to 8 or 64 AUs based on predefined file size thresholds.
ASM also automatically defragments during allocation if the desired size is unavailable, thereby potentially affecting allocation times but offering much faster file opens, given the reduction in the memory required to store file ASM data extents.

SYSASM privilege
----------------------------------
In Oracle Database 11g, if you log in to an ASM instance as SYSDBA, warnings are written to the corresponding alert.log file.
The SYSASM privilege replaces SYSDBA for ASM administration, compared to prior versions.
In an ASM instance, the SYSDBA privilege is now restricted to operations such as mounting and dismounting disk groups; it cannot be used to start up or shut down the ASM instance.
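
For example, an ASM administrator account can be created and granted the new privilege as follows (a sketch; the user name asmadmin and password are hypothetical):

$ sqlplus / as sysasm

SQL> create user asmadmin identified by secret;
SQL> grant sysasm to asmadmin;
SQL> select * from v$pwfile_users;    -- verify the SYSASM flag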

ASM Disk Group Compatibility
----------------------------------
An ASM instance can support different RDBMS clients running at different compatibility settings. The database compatible version setting of each instance must be greater than or equal to the RDBMS compatibility of all disk groups used by that database. Database instances are typically run from a different Oracle home than the ASM instance. This implies that the database instance may be running a different software version than the ASM instance. When a database instance first connects to an ASM instance, it negotiates the highest version that they both can support.
The main objective of this new feature is to ensure that heterogeneous environments are also supported by ASM.

alter diskgroup /*ASMCMD AMBR*/DATA set attribute 'COMPATIBLE.ADVM' = '11.2';
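
The current compatibility settings of each disk group can be checked with (a sketch):

select name, compatibility, database_compatibility
from v$asm_diskgroup;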

A few new maintenance commands in ASM
----------------------------------------------
ALTER DISKGROUP DATA CHECK;
Checks all the metadata directories by default

ALTER DISKGROUP data MOUNT RESTRICT; 
When a disk group is mounted in restricted mode, clients cannot access the files in a disk group.
In a RAC environment, a disk group mounted in RESTRICTED mode can only be accessed by a single instance.


Using RESTRICTED mode improves the performance of rebalance operations
in a RAC environment because it eliminates the need for the lock and unlock extent map
messaging that otherwise occurs between ASM instances. Once the rebalance operation is complete,
the disk group should be dismounted and then mounted in NORMAL mode (the default).
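
Putting these pieces together, a rebalance in RESTRICTED mode might look like this (a sketch using the DATA disk group from the examples above):

alter diskgroup DATA dismount;
alter diskgroup DATA mount restrict;
alter diskgroup DATA rebalance power 8;
-- wait until v$asm_operation shows no pending work, then:
alter diskgroup DATA dismount;
alter diskgroup DATA mount;          -- back to NORMAL mode (the default)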



ALTER DISKGROUP data MOUNT FORCE; 
Mounts the disk group even if some disks belonging to the disk group are not accessible

DROP DISKGROUP data FORCE INCLUDING CONTENTS; 
Enables users to drop a disk group that cannot be mounted; fails if the disk group is mounted anywhere


ASMCMD is now more powerful in 11g ASM
---------------------------------------------------
The lsdsk command lists ASM disk information. This command can run in two modes:
• Connected mode: ASMCMD uses the V$ and GV$ views to retrieve disk information.
• Nonconnected mode: ASMCMD scans disk headers to retrieve disk information, using an ASM disk string to restrict the discovery set. The connected mode is always attempted first.
The MD_BACKUP command is used to back up the metadata of an ASM disk group.
The MD_RESTORE command is used to restore lost metadata to an ASM disk group.

ASMCMD> md_backup -b jfv_backup_file -g data
Disk group to be backed up: DATA#
Current alias directory path: jfv
ASMCMD>
ASMCMD> md_restore -b jfv_backup_file -t full -g data
Disk group to be restored: DATA#
ASMCMDAMBR-09358, Option -t newdg specified without any override options.
Current Diskgroup being restored: DATA
Diskgroup DATA created!
User Alias directory +DATA/jfv
created!

ASMCMD now supports the cp command
--------------------------------------------------

With it we can copy ASM files from a local to a remote node or vice versa, and from a file system to an ASM disk group or vice versa.

cp +DATA/ORCL/DATAFILE/TBSJFV.256.629730771 +DATA/ORCL/tbsjfv.bak
cp /home/oracle/tbsjfv.dbf +data/jfv
cp +DATA/orcl/datafile/tbsjfv.256.629989893 sys@edcdr12p1.+ASM2:+D2/jfv/tbsjfv.dbf

Rename the ASM diskgroup:
--------------------------

Using the renamedg utility we can also rename an ASM disk group.

$ renamedg -h




Copying files from ASM to non-ASM (regular file system)


Here I am going to share the possible methods of copying DB-related files (datafiles, backup pieces, archived log files) from ASM storage to regular file system storage (non-ASM).


In many cases you might be required to copy these files from an ASM disk group to a normal file system. For example, if your primary DB is using ASM and the standby is on a normal file system, you may sometimes need to manually copy the archived log files to the standby and register them. Or you may be migrating your database from ASM to non-ASM, or vice versa.


Method 1: (From 11g)



From 11g the asmcmd cp command allows files to be copied between ASM and local or remote destinations. The summary usage is shown below.


cp [-ifr] <[\@connect_identifier:]src> <[\@connect_identifier:]tgt>

Example:



ASMCMD> ls -lt
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   DEC 31 13:00:00  Y     userdata01.256.739191187 
ASMCMD> cp   userdata01.256.739191187  /u01/datafiles/userdata01.dbf
copying +dg1/testdb/datafile/userdata01.256.739191187 -> /u01/datafiles/userdata01.dbf
ASMCMD>
[oracle]$ ls -lrt
total 525208
-rw-r----- 1 oracle oinstall 314580992 Dec 31 13:17  userdata01.dbf

Method 2:

Using copy command:



SQL> select file_name,status from dba_data_files where file_id=8;
FILE_NAME                                          STATUS
-------------------------------------------------- ---------
+DG1/testdb/datafile/ts1.256.739204397             AVAILABLE

[oracle@crcpdb01 datafiles]$ rman target / nocatalog
Recovery Manager: Release 11.2.0.1.0 – Production on Fri Dec 31 14:34:15 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TESTDB (DBID=2521935115)
using target database control file instead of recovery catalog

RMAN> copy datafile 8 to '/u01/datafiles/ts1.dbf';
Starting backup at 31-DEC-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=31 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00008 name=+DG1/testdb/datafile/ts1.256.739204397
output file name=/u01/datafiles/ts1.dbf tag=TAG20101231T143435 RECID=7 STAMP=739204478
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 31-DEC-10
RMAN>
[oracle@dhcppc1 datafiles]$ ls -lrt
total 525208
-rw-r----- 1 oracle dba      314580992 Dec 31 14:40 ts1.dbf


To copy the archived log files from ASM to non-ASM

Take an RMAN backup of the required archived log files to a temp location:


run {
allocate channel ch1 type disk format '/tmp/arc_%U';
backup archivelog from logseq 250 until logseq 301 thread 1;
}

cd /tmp

Then, the archived logs can be restored to a file system location using the restore command:

run {
set archivelog destination to '/tmp';
restore archivelog all;
}



Method 3: Using the DBMS_FILE_TRANSFER package:


Create the DB directories and connect to the DB instance:


SQL> create directory ASM_DIR as '+DG1/testdb/datafile/';
Directory created.
SQL> create directory TARGET_DIR as '/u01/datafiles/';
Directory created.



SQL> BEGIN
dbms_file_transfer.copy_file(source_directory_object =>
'ASM_DIR', source_file_name => 'ts1.256.739191187',
destination_directory_object => 'TARGET_DIR',
destination_file_name => 'ts1.dbf');
END;
/


SQL> !ls -lrt /u01/datafiles/
total 307512
-rw-r----- 1 oracle dba 314580992 Dec 31 13:12 ts1.dbf


or 

exec DBMS_FILE_TRANSFER.COPY_FILE('ASM_DIR','thread_1_seq_132551.4311.811375563','TARGET_DIR','w071u_1_132551_490438739.dbf');






Wednesday, November 27, 2013

Parallel backup using RMAN in RAC environment


This RMAN feature enables the use of multiple nodes in parallel to back up the database. It balances the load equally across the nodes and also improves backup performance.

To configure the number of parallel channels, we can use the following command:


RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 4;

After configuring the channels, if we execute an RMAN backup script, we'll see channel allocation like:


allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=104 instance=inst1 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=66 instance=inst1 devtype=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: sid=94 instance=inst1 devtype=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: sid=40 instance=inst1 devtype=DISK

By default, all channels will be allocated from one node. To split the load across all the nodes in the cluster, the individual channels need special configuration.
So in a two-node cluster, we can distribute the load equally as:


CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/passwd@inst1';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'sys/passwd@inst1';
CONFIGURE CHANNEL 3 DEVICE TYPE DISK CONNECT 'sys/passwd@inst2';
CONFIGURE CHANNEL 4 DEVICE TYPE DISK CONNECT 'sys/passwd@inst2';

To validate the configuration:


RMAN> show all;
RMAN configuration parameters are:
CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT '*';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT '*';
CONFIGURE CHANNEL 3 DEVICE TYPE DISK CONNECT '*';
CONFIGURE CHANNEL 4 DEVICE TYPE DISK CONNECT '*';

The username and password will not be shown.
Now if we execute the same RMAN script again, we should see channel allocation like:

allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=104 instance=inst1 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=66 instance=inst1 devtype=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: sid=94 instance=inst2 devtype=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: sid=40 instance=inst2 devtype=DISK

This approach can be used to reduce the backup time.
Caution: compressed backup is CPU bound, so the number of channels should be chosen carefully.
If we reduce the degree of parallelism, let's say from 4 to 2:


RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 2;

We'll have 2 redundant channels. In the RMAN settings, they will be shown as ignored:


configuration for DISK channel 3 is ignored
configuration for DISK channel 4 is ignored


To remove these extra channels, we can use


RMAN> CONFIGURE CHANNEL 3 DEVICE TYPE DISK CLEAR;
RMAN> CONFIGURE CHANNEL 4 DEVICE TYPE DISK CLEAR;




You can also allocate multiple channels in an RMAN run block:




connect target
run {
allocate CHANNEL 1 DEVICE TYPE disk CONNECT 'sys/bond007##@db279p1';
allocate CHANNEL 2 DEVICE TYPE disk CONNECT 'sys/bond007##@db279p1';
allocate CHANNEL 3 DEVICE TYPE disk CONNECT 'sys/bond007##@db279p2';
allocate CHANNEL 4 DEVICE TYPE disk CONNECT 'sys/bond007##@db279p2';
allocate CHANNEL 5 DEVICE TYPE disk CONNECT 'sys/bond007##@db279p3';
allocate CHANNEL 6 DEVICE TYPE disk CONNECT 'sys/bond007##@db279p3';
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 DATABASE FORMAT '/tempe_wdm_migrate/w279_backup/df_t%t_s%s_p%p_%U' TAG=db279p_LEVEL0_BACKUP;
release channel 1;
release channel 2;
release channel 3;
release channel 4;
release channel 5;
release channel 6;
}

Sunday, November 24, 2013

Rolling forward a standby database using RMAN incremental backup

 Rolling forward a standby database using RMAN incremental backup when primary and standby are in ASM filesystem [ID 836986.1]


This can be used when archived log files needed by the standby have been lost, or when there is a huge gap between the primary and the standby.



STEPS:



1) On the standby database, stop the managed recovery process (MRP)
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;


2) On the STANDBY database, find the SCN which will be used for the incremental backup at the primary database:
SQL> SELECT CURRENT_SCN FROM V$DATABASE;

SQL> select min(fhscn) from x$kcvfh;

In an ideal situation the two queries above will return almost the same SCN. However, if there is a huge difference, it is better to take the backup using the SCN from the second query (the lesser SCN), as one of the datafiles may be behind.

CURRENT_SCN
---------------------
3162298
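
Since x$kcvfh stores the SCN as a character string, the lesser of the two values can be picked with a query along these lines (a sketch; verify the column datatype in your version before relying on it):

select least(
         (select current_scn from v$database),
         (select min(to_number(fhscn)) from x$kcvfh)
       ) backup_from_scn
from dual;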


3) In RMAN, connect to the PRIMARY database and create an incremental backup from the SCN derived in the previous step:
RMAN> run {
allocate channel c1 device type disk;
BACKUP INCREMENTAL FROM SCN 3162298 DATABASE FORMAT '/tmp/ForStandby_%U' tag 'FORSTANDBY';
release channel c1;
}

4) Transfer all backup sets created on the primary system to the standby system.
scp /tmp/ForStandby_* standby:/tmp

RMAN> CATALOG START WITH '/tmp/ForStandby';

using target database control file instead of recovery catalog
searching for all files that match the pattern /tmp/ForStandby

List of Files Unknown to the Database
=====================================
File Name: /tmp/ForStandby_2lkglss4_1_1
File Name: /tmp/ForStandby_2mkglst8_1_1

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/ForStandby_2lkglss4_1_1
File Name: /tmp/ForStandby_2mkglst8_1_1

5) Recover the STANDBY database with the cataloged incremental backup:

RMAN> RECOVER DATABASE NOREDO;

starting recover at 03-JUN-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=28 devtype=DISK
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: +DATA/mystd/datafile/system.297.688213333
destination for restore of datafile 00002: +DATA/mystd/datafile/undotbs1.268.688213335
destination for restore of datafile 00003: +DATA/mystd/datafile/sysaux.267.688213333
channel ORA_DISK_1: reading from backup piece /tmp/ForStandby_2lkglss4_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/tmp/ForStandby_2lkglss4_1_1 tag=FORSTANDBY
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished recover at 03-JUN-09

6) In RMAN, connect to the PRIMARY database and create a standby control file backup:
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/ForStandbyCTRL.bck';
7) Copy the standby control file backup to the STANDBY system. 
scp /tmp/ForStandbyCTRL.bck standby:/tmp

8) We now need to refresh the standby control file from the primary's control file (for standby) backup. However, since the datafile names on the standby are likely different from the primary, first save the names of the datafiles on the standby; after restoring the control file from the primary backup, we can refer back to this list to check for discrepancies. So, run the query below on the standby and save the results for later use.
set lines 200
col name format a60
select file#, name from v$datafile order by file# ;

9) From RMAN, connect to the STANDBY database and restore the standby control file:
RMAN> SHUTDOWN;
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/ForStandbyCTRL.bck';

Starting restore at 03-JUN-09
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=36 devtype=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
output filename=+DATA/mystd/controlfile/current.257.688583989
Finished restore at 03-JUN-09

10) Shut down the STANDBY database and startup mount:
RMAN> SHUTDOWN;
RMAN> STARTUP MOUNT;

11) Since the control file was restored from the PRIMARY, the datafile locations in the STANDBY control file will be the same as on the PRIMARY database; cataloging the datafiles on the STANDBY allows the necessary rename operations.
Perform the step below on the STANDBY for each disk group where the datafile directory structure differs between primary and standby.
RMAN> CATALOG START WITH '+DATA/mystd/datafile/';

List of Files Unknown to the Database
=====================================
File Name: +data/mystd/DATAFILE/SYSTEM.309.685535773
File Name: +data/mystd/DATAFILE/SYSAUX.301.685535773
File Name: +data/mystd/DATAFILE/UNDOTBS1.302.685535775
File Name: +data/mystd/DATAFILE/SYSTEM.297.688213333
File Name: +data/mystd/DATAFILE/SYSAUX.267.688213333
File Name: +data/mystd/DATAFILE/UNDOTBS1.268.688213335

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================



File Name: +data/mystd/DATAFILE/SYSTEM.297.688213333
File Name: +data/mystd/DATAFILE/SYSAUX.267.688213333
File Name: +data/mystd/DATAFILE/UNDOTBS1.268.688213335
If any datafiles have been added to the primary AFTER SCN 3162298, they will also have to be restored to the standby host and cataloged as shown above before doing the switch.
To determine if any files have been added to the primary since the standby's current SCN:

SQL> SELECT FILE#, NAME FROM V$DATAFILE WHERE CREATION_CHANGE# > 3162298;

RMAN> SWITCH DATABASE TO COPY;

datafile 1 switched to datafile copy "+DATA/mystd/datafile/system.297.688213333"
datafile 2 switched to datafile copy "+DATA/mystd/datafile/undotbs1.268.688213335"
datafile 3 switched to datafile copy "+DATA/mystd/datafile/sysaux.267.688213333"

Note:
At this point, you can compare against the query output saved in step 8) for any discrepancy (other than newly added datafiles), by running the same query as in step 8), to ensure all the datafiles are present in the standby.


12) If the STANDBY database needs to be configured for FLASHBACK, use the steps below to re-enable it:

SQL> ALTER DATABASE FLASHBACK OFF; 
SQL> ALTER DATABASE FLASHBACK ON;

13) On standby database, clear all standby redo log groups:
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 1;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 2;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;
....
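
Rather than typing one statement per group, the CLEAR statements can be generated from v$standby_log (a sketch; spool the output and run it):

select 'ALTER DATABASE CLEAR LOGFILE GROUP '||group#||';'
from v$standby_log;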


14) On the STANDBY database, start the MRP
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
  



DBV utility

DBVERIFY is an external command-line utility that performs a physical data structure integrity check. It can be used on offline or online databases, as well as on backup files. You use DBVERIFY primarily when you need to ensure that a backup database (or datafile) is valid before it is restored, or as a diagnostic aid when you have encountered data corruption problems.

Because DBVERIFY can be run against an offline database, integrity checks are significantly faster.

DBVERIFY checks are limited to cache-managed blocks (that is, data blocks). Because DBVERIFY is only for use with datafiles, it will not work against control files or redo logs.



Note:
If the file you are verifying is an Automatic Storage Management (ASM) file, you must supply a USERID. This is because DBVERIFY needs to connect to an Oracle instance to access ASM files.
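
For example, verifying a datafile stored in ASM might look like this (a sketch; the file name and credentials are hypothetical):

% dbv FILE=+DATA/testdb/datafile/system.256.123456789 USERID=sys/oracle FEEDBACK=100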



The following is a sample verification of the file t_db1.dbf. The FEEDBACK parameter has been given the value 100 to display one period (.) for every 100 pages processed. A portion of the resulting output is also shown.

% dbv FILE=t_db1.dbf FEEDBACK=100
.
.
.
DBVERIFY - Verification starting : FILE = t_db1.dbf 

................................................................................
 

DBVERIFY - Verification complete 
 
Total Pages Examined         : 9216 
Total Pages Processed (Data) : 2044 
Total Pages Failing   (Data) : 0 
Total Pages Processed (Index): 733 
Total Pages Failing   (Index): 0 
Total Pages Empty            : 5686 
Total Pages Marked Corrupt   : 0 

Total Pages Influx           : 0 
Notes:
  • Pages = Blocks
  • Total Pages Examined = number of blocks in the file
  • Total Pages Processed = number of blocks that were verified (formatted blocks)
  • Total Pages Failing (Data) = number of blocks that failed the data block checking routine
  • Total Pages Failing (Index) = number of blocks that failed the index block checking routine
  • Total Pages Marked Corrupt = number of blocks for which the cache header is invalid, thereby making it impossible for DBVERIFY to identify the block type
  • Total Pages Influx = number of blocks that are being read and written to at the same time. If the database is open when DBVERIFY is run, DBVERIFY reads blocks multiple times to get a consistent image. But because the database is open, there may be blocks that are being read and written to at the same time (INFLUX). DBVERIFY cannot get a consistent image of pages that are in flux.

    Using DBVERIFY to Validate a Segment

    Command-Line Interface

    The following example shows a sample use of the command-line interface to this mode of DBVERIFY.
    dbv USERID=username/password SEGMENT_ID=tsn.segfile.segblock
    
    
    For example, you could enter the following (assuming user hr had SYSDBA privileges):
    dbv USERID=hr/hr SEGMENT_ID=1.2.67
    
    
    
    
    

    Using DBVERIFY to Validate a backup file (cold backup)

    dbv file=data01.dbf feedback=100