Monday, December 15, 2014

RMAN-06183: datafile or datafile copy xxxx_data_01.dbf (file number 9) larger than MAXSETSIZE

RMAN-06183: datafile or datafile copy prod_data_01.dbf (file number 9) larger than MAXSETSIZE

Error:

Starting backup at 15-DEC-14
released channel: dev1
released channel: dev2
released channel: dev3
released channel: dev4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 12/15/2014 21:18:02
RMAN-06183: datafile or datafile copy prod_data_01.dbf (file number 9) larger than MAXSETSIZE


Action:

1) Verify the RMAN backup script and check that the backup SETSIZE is set properly.
The RMAN backup SETSIZE should be larger than the largest datafile in the database.

run { 
allocate channel dev1 type 'SBT_TAPE';
allocate channel dev2 type 'SBT_TAPE';
allocate channel dev3 type 'SBT_TAPE';
allocate channel dev4 type 'SBT_TAPE';
sql 'ALTER SYSTEM SWITCH LOGFILE';
backup 
        full
        filesperset=30   
        setsize=20000000    # make sure this value is more than the max datafile size in the db
        format='fulldata_%d_%s_%p_%t' 
        database ;
}
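To find the largest datafile (so SETSIZE and MAXSETSIZE can be set above it), a query along these lines can be used (a sketch; run as a privileged user, result in MB):

```sql
-- size of the largest datafile in the database, in MB
SELECT MAX(bytes)/1024/1024 AS max_file_mb FROM v$datafile;
```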



2) Check the configured backup set size limit:

RMAN> show all;

CONFIGURE MAXSETSIZE TO 30720 M;

Here the MAXSETSIZE limit is 30 GB, whereas my datafile is around 32 GB.
To fix this, clear the setting so it falls back to the default (UNLIMITED):

RMAN> CONFIGURE MAXSETSIZE CLEAR;





ORA-27211: Failed to load Media Management Library

ORA-27211: Failed to load Media Management Library


Error description:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of allocate command on dev1 channel at 12/15/2014 21:14:14
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27211: Failed to load Media Management Library
Additional information: 2

Action:

Make sure the media management libraries are installed and that the library (libobk.so) under $ORACLE_HOME/lib is linked to the NetBackup library location (/usr/openv/netbackup/bin/libobk.so64).
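On Linux, the link can be created like this (a sketch; the target path is the NetBackup location mentioned above, and ORACLE_HOME is assumed to be set in the environment):

```shell
# point $ORACLE_HOME/lib/libobk.so at the NetBackup media management library
ln -sf /usr/openv/netbackup/bin/libobk.so64 "$ORACLE_HOME/lib/libobk.so"
```

After relinking, allocate the SBT_TAPE channel again in RMAN to confirm the library loads.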


Tuesday, October 7, 2014

debug session in SQL Developer

Problem:

Getting the following error when debugging a function under the GENEVA_DATA user on hostname:

ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_DEBUG_JDWP", line 68
ORA-06512: at line 1


Solution:

How do you start a debug session in SQL Developer?
There are a few prerequisites; they take only a few moments to set up.

Ensure that
  • you have been granted the DEBUG CONNECT SESSION privilege
  • you have EXECUTE privileges on DBMS_DEBUG_JDWP
  • you have EXECUTE privileges on the object you want to debug
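The grants behind those prerequisites look roughly like this (a sketch, run as a DBA; GENEVA_DATA is the user from the error above, and the object name is a placeholder):

```sql
GRANT DEBUG CONNECT SESSION TO geneva_data;
GRANT EXECUTE ON SYS.DBMS_DEBUG_JDWP TO geneva_data;
-- plus EXECUTE on the object being debugged, e.g. (placeholder names):
-- GRANT EXECUTE ON some_schema.some_function TO geneva_data;
```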
 

Wednesday, May 21, 2014

Interview questions

First round
  • What will you do when a switchover hangs?
  • How do you perform a switchover? What pre-checks do you do?
  • How do you set up Data Guard in maximum protection mode?
  • How many standby redo logs are required?
  • Can we create standby redo log files with different sizes? If we do, what is the impact?
  • While adding a datafile, you forgot to specify the '+' symbol. What happens to the primary and standby? How do you resolve it?
  • Due to some reason you had to perform a failover. How do you reconfigure Data Guard for the failed (old) primary?
  • You had to perform a flashback database on the primary. What happens to the standby? Will it also be flashed back?
  • Difference between logical block corruption and physical corruption?
  • How do you calculate the number of channels that we can configure (limitations)? On one server there are 10 databases, of which 2 are critical; how do you allocate channels?
  • In 11gR2, what new feature was introduced for collecting statistics?
  • What are histograms? Uses and types of histograms.
  • One query runs fine in dev, but the same query takes a long time in prod. How do you analyze it? What could be the reasons?
  • SQL profiles
  • AWR report
  • ASSM (Automatic Segment Space Management)?
  • Have you ever used the memory advisor?
  • How do you find which index is eligible for rebuild?
  • Rolling patch: the patch failed on the second node during a rolling apply. What will you do?
  • An import (impdp) is hung; how do you monitor it? Which memory component is required for impdp? How do you find the optimal value for large_pool_size?
  • How do you monitor server load? Which tool? (OS Watcher)
  • How do you reduce (tune) the log file sync wait event?
  • What is nodeapps?
  • What did Oracle introduce in 11gR2 at the binary level to detect the VD & OCR when ASM is not up? (KFED, KFOD)
  • Cache Fusion: which background process is responsible?
  • Differences between 10g & 11g RAC.
  • How does the cluster start in 11gR2 (steps)?
  • What is node eviction? Difference between 10g & 11g (node eviction).
  • When does a node get evicted in RAC?
  • In 11gR2, when will the node (server) be rebooted?
  • If I kill the ocssd daemon, what happens?
  • If I kill the crsd daemon, what happens to the cluster?
  • If we create the voting disk on an external redundancy disk group, how many copies of the voting disk will be created?


second round

  • How do you investigate a node eviction? Which logs do you check?
  • Trying to start the cluster, but nothing is coming up (ps -ef | grep ora finds no processes). How do you investigate?
  • How do you find the location of your OCR file without ocrconfig (when the cluster is down)?
  • What is the default location of the OLR?
  • How do you find the location of the OLR?
  • How do you find, from the Oracle inventory, which binaries are installed on the server?
  • What is cssmisscount?
  • You got an alert that an RMAN backup failed due to insufficient space on the tape. What will you do?
  • What is the main difference in the node eviction concept from 10g to 11.2.0.4?
  • How do you find the private IP (interconnect) for the cluster (apart from /etc/hosts)?
  • In RAC, how do you apply an opatch to the Oracle Grid home?
  • Do we need to stop the cluster to apply a patch?
  • When you (as root) unlock the Grid home, what happens? Does it stop the cluster? How do you lock it back?
  • Which file is required to start the ohasd.bin daemon (high availability stack)?
  • How do you find at what time a node got evicted (apart from the uptime command, in versions lower than 11g)?
  • In an AWR report, "log file sync" is found among the top wait events. How do you reduce (resolve) it?
  • Will LGWR be invoked on commit?
  • Will LGWR be invoked for every checkpoint?
  • What is dynamic remastering? Which process is responsible for dynamic remastering?
  • Node1 is the master node. I am taking an OCR backup manually using ocrconfig. Where does the backup get created?
 ----------------------------------------------------------------------------------------------------
 first round

  • Briefly explain your past experience and what kind of environment you have worked in.
  • What are the new features of Oracle 11gR2?
  • What is the main difference between the 10g & 11g cluster?
  • How do you take a backup of the voting disk and OCR files?
  • What is the voting disk for, and which background process is responsible for writing the info to the voting disk?
  • What does the OCR contain?
  • Explain SCAN.
  • You have an RMAN full backup on Sunday and an incremental backup on Monday at 10:00 PM. The database crashed at 11:30 PM on Monday. How do you recover?
Second round 

  • Explain your past experience as an Oracle DBA. What kind of setup do you have, and on what platform?
  • High-level steps to install RAC (2 nodes).
  • How do you perform an upgrade (high-level steps such as pre-checks and post-upgrade steps)?
  • How do you check the version of the database, and which critical patch sets have been applied?
  • During critical patching, at what step does Oracle update the patch info in the database (in v$registry)?
  • What critical scenarios have you faced in a Data Guard environment?
  • You got an alert for database locks. What is your action?
  • What is the best way to install Oracle binaries on hundreds of servers?
  • What is the use of local_listener & remote_listener?
  • What is the purpose of the virtual IP (VIP) in RAC?
  • Some blocks are corrupted in one of the datafiles. How do you recover from the block corruption?
  • What is I/O fencing?
  • How do you find which node is the master node? What are the responsibilities of the master node?





Wednesday, May 7, 2014

oracle 11gR2 Grid infrastructure (RAC)


Starting with Oracle Database 11g Release 2, Oracle Clusterware and Oracle ASM are installed into a single home directory, which is called the Grid home. Oracle Grid Infrastructure for a cluster refers to the installation of the combined products. Oracle Clusterware and Oracle ASM are still individual products, and are referred to by those names.

About ASM: Oracle ASM is an integrated, high-performance volume manager and file system. With Oracle Database 11g Release 2, Oracle ASM adds support for storing the Oracle Clusterware OCR and voting disk files, as well as a general-purpose cluster file system called Oracle Automatic Storage Management Cluster File System (Oracle ACFS).

Oracle ASM is based on the principle that the database should manage storage instead of requiring an administrator to do it. Oracle ASM eliminates the need for you to directly manage potentially thousands of Oracle database files. Oracle ASM groups the disks in your storage system into one or more disk groups. You manage a small set of disk groups and Oracle ASM automates the placement of the database files within those disk groups.

Oracle Cluster Registry (OCR) and voting disks can also be placed on Oracle ASM disk groups. When using Oracle Real Application Clusters (Oracle RAC), each instance must have access to the data files and recovery files for the Oracle RAC database. Using Oracle Automatic Storage Management (Oracle ASM) is an easy way to satisfy this requirement.
Striping—Oracle ASM spreads data evenly across all disks in a disk group to optimize performance and utilization. This even distribution of database files eliminates the need for regular monitoring and I/O performance tuning.
Mirroring—Oracle ASM increases data availability by optionally mirroring any file. Oracle ASM mirrors at the file level, unlike operating system mirroring, which mirrors at the disk level. Mirroring means keeping redundant copies, or mirrored copies, of each extent of the file, to help avoid data loss caused by disk failures. The mirrored copy of each file extent is always kept on a different disk from the original copy. If a disk fails, then Oracle ASM continues to access affected files by accessing mirrored copies on the surviving disks in the disk group.

Online storage reconfiguration and dynamic rebalancing—When you add a disk to a disk group, Oracle ASM automatically redistributes the data so that it is evenly spread across all disks in the disk group, including the new disk. The process of redistributing data so that it is also spread across the newly added disks is known as rebalancing. It is done in the background and with minimal impact to database performance.

Managed file creation and deletion—Oracle ASM automatically assigns file names when files are created, and automatically deletes files when they are no longer needed by the database.

ASM Instance:
Oracle ASM is implemented as a special kind of Oracle instance, with its own System Global Area and background processes. The Oracle ASM instance is tightly integrated with Oracle Clusterware and Oracle Database. Every server running one or more database instances that use Oracle ASM for storage has an Oracle ASM instance. In an Oracle RAC environment, there is one Oracle ASM instance for each node, and the Oracle ASM instances communicate with each other on a peer-to-peer basis. Only one Oracle ASM instance is supported on a node, but you can have multiple database instances that use Oracle ASM residing on the same node.

RAC: Oracle RAC extends Oracle Database so that you can store, update, and efficiently retrieve data using multiple database instances on different servers at the same time.

Oracle RAC provides the software that manages multiple servers and instances as a single group. The data files that comprise the database must reside on shared storage that is accessible from all servers that are part of the cluster. Each server in the cluster runs the Oracle RAC software. An Oracle Database database has a one-to-one relationship between data files and the database instance. An Oracle RAC database, however, has a one-to-many relationship between data files and database instances. In an Oracle RAC database, multiple instances access a single set of database files.

Cache Fusion: Oracle RAC uses Cache Fusion to synchronize the data stored in the buffer cache of each database instance. Cache Fusion moves current data blocks (which reside in memory) between database instances, rather than having one database instance write the data blocks to disk and requiring another database instance to reread the data blocks from disk. When a data block located in the buffer cache of one instance is required by another instance, Cache Fusion transfers the data block directly between the instances using the interconnect, enabling the Oracle RAC database to access and modify data as if the data resided in a single buffer cache. The Global Cache Service and Global Enqueue Service (GCS & GES) are the services responsible for Cache Fusion.
 

Oracle RAC One Node: Oracle Real Application Clusters One Node (Oracle RAC One Node) is a single instance of an Oracle RAC database that runs on one node in a cluster. This feature allows you to consolidate many databases into one cluster with minimal overhead, protecting them from both planned and unplanned downtime. The consolidated databases reap the high availability benefits of failover protection, online rolling patch application, and rolling upgrades for the operating system and Oracle Clusterware. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). Oracle RAC One Node enables better availability than cold failover for single-instance databases because of the Oracle technology called online database relocation, which intelligently migrates database instances and connections to other cluster nodes for high availability and load balancing. Online database relocation is performed using the Server Control Utility (SRVCTL). If you run your applications on Oracle RAC One Node, and your applications grow to the point that a single node cannot supply the resources they need, then Oracle RAC One Node can be upgraded online to Oracle Real Application Clusters. If the node running your Oracle RAC One Node database becomes overloaded, then you can migrate the database instance to another node in the cluster using online database relocation with no downtime for application users.

Tools for managing oracle RAC:
Cluster Verification Utility (CVU)—CVU is a command-line tool that you can use to verify a range of cluster and Oracle RAC components such as shared storage devices, networking configurations, system requirements, and Oracle Clusterware, as well as operating system groups and users. You can use CVU for preinstallation and postinstallation checks of your cluster environment. CVU is especially useful during preinstallation and during installation of Oracle Clusterware and Oracle RAC components. OUI runs CVU after the Oracle Clusterware installation to verify your environment.

Server Control (SRVCTL)—SRVCTL is a command-line interface that you can use to manage the resources defined in the Oracle Cluster Registry (OCR). These resources include the node applications, called nodeapps, that comprise Oracle Clusterware, which includes the Oracle Notification Service (ONS), the Global Services Daemon (GSD), and the Virtual IP (VIP). Other resources that can be managed by SRVCTL include databases, instances, listeners, services, and applications. Using SRVCTL you can start and stop nodeapps, databases, instances, listeners, and services, delete or move instances and services, add services, and manage configuration information.

Cluster Ready Services Control (CRSCTL)—CRSCTL is a command-line tool that you can use to manage Oracle Clusterware daemons. These daemons include Cluster Synchronization Services (CSS), Cluster-Ready Services (CRS), and Event Manager (EVM). You can use CRSCTL to start and stop Oracle Clusterware and to determine the current status of your Oracle Clusterware installation.

Oracle Automatic Storage Management Command Line utility (ASMCMD)—ASMCMD is a command-line utility that you can use to manage Oracle ASM instances, Oracle ASM disk groups, file access control for disk groups, files and directories within Oracle ASM disk groups, templates for disk groups, and Oracle ASM volumes.
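A few illustrative invocations of these tools (sketches only; the database name orcl and instance name orcl1 are placeholders):

```shell
srvctl status database -d orcl         # status of all instances of database orcl
srvctl stop instance -d orcl -i orcl1  # stop a single instance
crsctl check crs                       # check the Clusterware stack on the local node
asmcmd lsdg                            # list ASM disk groups
```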

Oracle Clusterware achieves superior scalability and high availability by using the following components:

Voting disk–Manages cluster membership and arbitrates cluster ownership between the nodes in case of network failures. The voting disk is a file that resides on shared storage. For high availability, Oracle recommends that you have multiple voting disks, and that you have an odd number of voting disks. If you define a single voting disk, then use mirroring at the file system level for redundancy.

Oracle Cluster Registry (OCR)–Maintains cluster configuration information and configuration information about any cluster database within the cluster. The OCR contains information such as which database instances run on which nodes and which services run on which databases. The OCR also stores information about processes that Oracle Clusterware controls. The OCR resides on shared storage that is accessible by all the nodes in your cluster. Oracle Clusterware can multiplex, or maintain multiple copies of, the OCR and Oracle recommends that you use this feature to ensure high availability.

In previous releases, to make use of redundant networks for the interconnect, bonding, trunking, teaming, or similar technology was required. Oracle Grid Infrastructure for a cluster and Oracle RAC can now make use of redundant network interconnects, without the use of other network technology, to enhance optimal communication in the cluster. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes. Network interface names are case-sensitive. You should configure the same private interface names for all nodes as well. If eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node. Network interface names are case-sensitive.

SCAN (Single Client Access Name): During installation a SCAN for the cluster is configured, which is a domain name that resolves to all the SCAN addresses allocated for the cluster. The IP addresses used for the SCAN addresses must be on the same subnet as the VIP addresses. The SCAN must be unique within your network. The SCAN addresses should not respond to ping commands before installation. During installation of the Oracle Grid Infrastructure for a cluster, a listener is created for each of the SCAN addresses. Clients that access the Oracle RAC database should use the SCAN or SCAN address, not the VIP name or address. If an application uses a SCAN to connect to the cluster database, then the network configuration files on the client computer do not have to be modified when nodes are added to or removed from the cluster. The SCAN and its associated IP addresses provide a stable name for clients to use for connections, independent of the nodes that form the cluster. Clients can connect to the cluster database using the easy connect naming method and the SCAN. The fully qualified SCAN for the cluster defaults to cluster_name-scan.GNS_subdomain_name, for example docrac-scan.example.com. The short SCAN for the cluster is docrac-scan. You can use any name for the SCAN, provided it is unique within your network and conforms to the RFC 952 standard.
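For example, a client could connect through the SCAN above with easy connect (a sketch; the service name orcl is a placeholder):

```shell
sqlplus system@//docrac-scan.example.com:1521/orcl
```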

To determine if the operating system requirements for Oracle Linux have been met:
1. To determine which distribution and version of Linux is installed, run the following command at the operating system prompt as the root user: # cat /proc/version.

2. To determine which chip architecture each server is using and which version of the software you should install, run the following command at the operating system prompt as the root user: # uname -m
This command displays the processor type. For a 64-bit architecture, the output would be "x86_64".
Determine your cluster name. The cluster name should satisfy the following conditions:
■ The cluster name is globally unique throughout your host domain.
■ The cluster name is at least one character long and less than 15 characters long.
■ The cluster name must consist of the same character set used for host names: single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).
■ If you use third-party vendor clusterware, then Oracle recommends that you use the vendor cluster name.

ASMLib: Using ASMLib to Mark the Shared Disks as Candidate Disks

Another option for configuring shared disks is to use the ASMLib utility. If you configure a shared disk to be mounted automatically when the server restarts, then, unless you have configured special files for device persistence, a disk that appeared as /dev/sdg before the system shutdown can appear as /dev/sdh after the system is restarted.
If you use ASMLib to configure the shared disks, then when you restart the node:
■ The disk device names do not change
■ The ownership and group membership for these disk devices remains the same
■ You can copy the disk configuration implemented by Oracle ASM to other nodes in the cluster by running a simple command.

Installing ASMLib: To install the ASMLib software packages:
1. Download the ASMLib packages to each node in your cluster. 

2. Change to the directory where the package files were downloaded.
3. As the root user, use the rpm command to install the packages. For example:
# rpm -Uvh oracleasm-support-2.1.3-1.el4.x86_64.rpm
# rpm -Uvh oracleasmlib-2.0.4-1.el4.x86_64.rpm
# rpm -Uvh oracleasm-2.6.9-55.0.12.ELsmp-2.0.3-1.x86_64.rpm
After you have completed these commands, ASMLib is installed on the system.
4. Repeat steps 2 and 3 on each node in your cluster.

configuring ASMLib:  Now that the ASMLib software is installed, a few steps have to be taken by the system administrator to make the Oracle ASM driver available. The Oracle ASM driver must be loaded, and the driver file system must be mounted. This is taken care of by the initialization script, /usr/sbin/oracleasm.
To configure the ASMLib software after installation:
1. As the root user, run the following command:
# /usr/sbin/oracleasm configure
The script prompts you for the default user and group to own the Oracle ASM driver access point. Specify the Oracle Database software owner (oracle) and the OSDBA group (dba).
Repeat step 1 on each node in your cluster.

Using ASMLib to Create Oracle ASM Disks: Every disk that is used in an Oracle ASM disk group must be accessible on each node. After you make the physical disk available to each node, you can then mark the disk device as an Oracle ASM disk. The /usr/sbin/oracleasm script is used for this task.

To create Oracle ASM disks using ASMLib:
1. As the root user, use oracleasm to create Oracle ASM disks using the following syntax:
# /usr/sbin/oracleasm createdisk disk_name device_partition_name
In this command, disk_name is the name you choose for the Oracle ASM disk. The name you choose must contain only ASCII capital letters, numbers, or underscores, and the disk name must start with a letter, for example, DISK1, VOL1, or RAC_FILE1. The name of the disk partition to mark as an Oracle ASM disk is the device_partition_name. For example:
# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
If you must unmark a disk that was used in a createdisk command, then you can use the following syntax:
# /usr/sbin/oracleasm deletedisk disk_name
2. Repeat step 1 for each disk that is used by Oracle ASM.
3. After you have created all the Oracle ASM disks for your cluster, use the listdisks command to verify their availability:
# /usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
4. On all the other nodes in the cluster, use the scandisks command to view the newly created Oracle ASM disks. You do not have to create the Oracle ASM disks on each node, only on one node in the cluster.
# /usr/sbin/oracleasm scandisks
Scanning system for ASM disks

oracle events 10046,10053 & levels(0,1,4,8,12,16,32,64)


Oracle Events:
Oracle events are assigned a unique number that identifies the event.
There are four different kinds of Oracle event types. These are:
  Process Trace events - The 10053 and 10046 events are trace events. These trace certain system operations continuously until the tracing is canceled. The results are written to trace files.

  Events that change database behavior - These events are designed to modify the behavior of the database. An example is event 10262, which causes the database to stop reporting memory leaks smaller than a defined size.

  Immediate - These events dump diagnostic information as soon as the event is executed. This includes things like system state dumps and dumps of file headers.

  On-error events - These are similar to immediate events in that they dump information immediately, but execution of the event is triggered by an error. The result is a dump file that can be used to discover the cause of the error.
 The 10046 Trace Event
The 10046 event allows you to track, at a very fine level, the execution of a given set of SQL statements. The 10046 event can be set at the session level (for tracing of a specific session) or at the database level. It can be set either via a parameter file setting, or dynamically for the database, for your session, or even for another session.
 
The 10046 event can also be set to cause the trace output to be generated with differing levels of detail. For example, level 1 tracing is equivalent to the output you get when enabling the SQL_TRACE facility within Oracle (ALTER SESSION SET sql_trace=TRUE).
 
The real bang from the 10046 trace results is in enabling the higher levels of detail. With these higher levels of detail you can see wait related information and bind variable related information. The following table lists the typically used event levels present in the 10046 trace event in Oracle Database 10g:
 

Extended SQL trace (debugging event 10046 at a level higher than 1) is one of the key features provided by Oracle to troubleshoot applications using Oracle Database

For many years the available levels were always the same (4, 8 and 12).
With 11g,  new levels (16 and 32) were introduced.
More recently, with the introduction of the fix for bug 8328200, a new one was added to the list (64).
 
Level  Description

0   The debugging event is disabled.

1   The debugging event is enabled. For each processed database call, the following information is given: SQL statement, response time, service time, number of processed rows, number of logical reads, number of physical reads and writes, execution plan, and little additional information.
    Up to 10.2, an execution plan is written to the trace file only when the cursor it is associated with is closed. The execution statistics associated with it are values aggregated over all executions.
    As of 11.1, an execution plan is written to the trace file only after the first execution of every cursor. The execution statistics associated with it are those of the first execution only.

4   As in level 1, with additional information about bind variables: mainly the data type, its precision, and the value used for each execution.

8   As in level 1, plus detailed information about wait time. For each wait experienced during processing, the following information is given: the name of the wait event, the duration, and a few additional parameters identifying the resource that has been waited for.

16  As in level 1, plus the execution plan information is written to the trace file for each execution. Available as of 11.1 only.

32  As in level 1, but without the execution plan information. Available as of 11.1 only.

64  As in level 1, plus the execution plan information might be written for executions following the first one. The condition is that, since the last write of execution plan information, a particular cursor has consumed at least one additional minute of DB time. This level is interesting in two cases: first, when the information about the first execution is not enough for analyzing a specific issue; second, when the overhead of writing the information about every execution (level 16) is too high. Generally available as of 11.2.0.2 only.

level 1 -  Basic trace level. Like the standard SQL_TRACE trace file. Provides statistics for parse, execute, fetch, commit and rollback database calls.
level 4 - Displays bind variables
level 8 - Displays wait statistics
level 12 - Displays both bind variables & wait statistics

In addition to the levels described in the previous table, you can also combine the levels 4 and 8 with every other level greater than 1. For example:
  • Level 12 (4 + 8): simultaneously enable level 4 and level 8 (i.e., binds info + waits info).
  • Level 28 (4 + 8 + 16): simultaneously enable level 4, level 8, and level 16.
  • Level 68 (4 + 64): simultaneously enable level 4 and level 64.

Here is the mapping between the levels and the parameters:
  • Level 4: waits=FALSE, binds=TRUE, plan_stat='first_execution'
  • Level 8: waits=TRUE, binds=FALSE, plan_stat='first_execution'
  • Level 16: waits=FALSE, binds=FALSE, plan_stat='all_executions'
  • Level 32: waits=FALSE, binds=FALSE, plan_stat='never'
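Putting it together, a level 12 trace (binds + waits) for the current session can be switched on and off like this (a common sketch; the tracefile_identifier is optional and only makes the trace file easier to find):

```sql
ALTER SESSION SET tracefile_identifier = 'my_10046';
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- ... run the SQL to be traced ...
ALTER SESSION SET events '10046 trace name context off';
```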

Wednesday, April 23, 2014

KFED,KFOD,AMDU in oracle11g ASM

KFOD = Kernel Files OSM Disk
KFED = Kernel Files metadata EDitor
AMDU = ASM Metadata Dump Utility


kfed and kfod are available from Oracle Database 11g Release 1 onward and can be used to read and write ASM metadata, in particular disk headers and ASM metadata contents. kfed in write mode is powerful but can be destructive in the wrong hands. kfed invoked with the -help option displays the online help for the utility.

KFOD confirms whether a disk header is intact or not.
KFOD is also used at installation time (by OUI, DBCA, or ASMCA) to perform disk discovery.


# $GRID_HOME/bin/kfod status=TRUE asm_diskstring='/dev/raw/raw*' disk=all dscvgroup=TRUE
--------------------------------------------------------------------------------
Disk Size Header Path Disk Group User Group
================================================================================
1: 512000 Mb MEMBER /dev/raw/raw1 DATA ora11g dba
2: 512000 Mb PROVISIONED /dev/raw/raw2 DATA ora11g dba
3: 512000 Mb MEMBER /dev/raw/raw3 DATA ora11g dba
4: 512000 Mb PROVISIONED /dev/raw/raw4 DATA ora11g dba
5: 512000 Mb MEMBER /dev/raw/raw5 DATA ora11g dba
6: 512000 Mb MEMBER /dev/raw/raw6 DATA ora11g dba
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
+ASM /oracle/grid11g



The kfed utility is used to read/repair disk headers: it allows you to analyze ASM disk header information when ASM disk groups cannot be mounted.


# $GRID_HOME/bin/kfed read /dev/raw/raw2

kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                   945480537 ; 0x00c: 0x385ae359
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=32
kfdhdb.driver.reserved[0]:     33686018 ; 0x008: 0x02020202
kfdhdb.driver.reserved[1]:     33686018 ; 0x00c: 0x02020202
kfdhdb.driver.reserved[2]:     33686018 ; 0x010: 0x02020202
kfdhdb.driver.reserved[3]:     33686018 ; 0x014: 0x02020202
kfdhdb.driver.reserved[4]:     33686018 ; 0x018: 0x02020202
kfdhdb.driver.reserved[5]:     33686018 ; 0x01c: 0x02020202
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:               DATA_0000 ; 0x028: length=9
kfdhdb.grpname:                    DATA ; 0x048: length=4
kfdhdb.fgname:                DATA_0000 ; 0x068: length=9
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             32983952 ; 0x0a8: HOUR=0x10 DAYS=0x1c MNTH=0x2 YEAR=0x7dd
kfdhdb.crestmp.lo:           3911025664 ; 0x0ac: USEC=0x0 MSEC=0x361 SECS=0x11 MINS=0x3a
kfdhdb.mntstmp.hi:             32984464 ; 0x0b0: HOUR=0x10 DAYS=0xc MNTH=0x3 YEAR=0x7dd
kfdhdb.mntstmp.lo:           3973637120 ; 0x0b4: USEC=0x0 MSEC=0x239 SECS=0xd MINS=0x3b
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                  512000 ; 0x0c4: 0x0007d000
kfdhdb.pmcnt:                         6 ; 0x0c8: 0x00000006
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             32983952 ; 0x0e4: HOUR=0x10 DAYS=0x1c MNTH=0x2 YEAR=0x7dd
kfdhdb.grpstmp.lo:           3909777408 ; 0x0e8: USEC=0x0 MSEC=0x29e SECS=0x10 MINS=0x3a
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000
kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000
kfdhdb.ub4spare[1]:                   0 ; 0x100: 0x00000000
kfdhdb.ub4spare[2]:                   0 ; 0x104: 0x00000000
kfdhdb.ub4spare[3]:                   0 ; 0x108: 0x00000000
kfdhdb.ub4spare[4]:                   0 ; 0x10c: 0x00000000
kfdhdb.ub4spare[5]:                   0 ; 0x110: 0x00000000
kfdhdb.ub4spare[6]:                   0 ; 0x114: 0x00000000
kfdhdb.ub4spare[7]:                   0 ; 0x118: 0x00000000
kfdhdb.ub4spare[8]:                   0 ; 0x11c: 0x00000000
kfdhdb.ub4spare[9]:                   0 ; 0x120: 0x00000000
kfdhdb.ub4spare[10]:                  0 ; 0x124: 0x00000000
kfdhdb.ub4spare[11]:                  0 ; 0x128: 0x00000000
kfdhdb.ub4spare[12]:                  0 ; 0x12c: 0x00000000
kfdhdb.ub4spare[13]:                  0 ; 0x130: 0x00000000
kfdhdb.ub4spare[14]:                  0 ; 0x134: 0x00000000
kfdhdb.ub4spare[15]:                  0 ; 0x138: 0x00000000
kfdhdb.ub4spare[16]:                  0 ; 0x13c: 0x00000000
kfdhdb.ub4spare[17]:                  0 ; 0x140: 0x00000000
kfdhdb.ub4spare[18]:                  0 ; 0x144: 0x00000000
kfdhdb.ub4spare[19]:                  0 ; 0x148: 0x00000000
kfdhdb.ub4spare[20]:                  0 ; 0x14c: 0x00000000
kfdhdb.ub4spare[21]:                  0 ; 0x150: 0x00000000
kfdhdb.ub4spare[22]:                  0 ; 0x154: 0x00000000
kfdhdb.ub4spare[23]:                  0 ; 0x158: 0x00000000
kfdhdb.ub4spare[24]:                  0 ; 0x15c: 0x00000000
kfdhdb.ub4spare[25]:                  0 ; 0x160: 0x00000000
kfdhdb.ub4spare[26]:                  0 ; 0x164: 0x00000000
kfdhdb.ub4spare[27]:                  0 ; 0x168: 0x00000000
kfdhdb.ub4spare[28]:                  0 ; 0x16c: 0x00000000
kfdhdb.ub4spare[29]:                  0 ; 0x170: 0x00000000
kfdhdb.ub4spare[30]:                  0 ; 0x174: 0x00000000
kfdhdb.ub4spare[31]:                  0 ; 0x178: 0x00000000
kfdhdb.ub4spare[32]:                  0 ; 0x17c: 0x00000000
kfdhdb.ub4spare[33]:                  0 ; 0x180: 0x00000000
kfdhdb.ub4spare[34]:                  0 ; 0x184: 0x00000000
kfdhdb.ub4spare[35]:                  0 ; 0x188: 0x00000000
kfdhdb.ub4spare[36]:                  0 ; 0x18c: 0x00000000
kfdhdb.ub4spare[37]:                  0 ; 0x190: 0x00000000
kfdhdb.ub4spare[38]:                  0 ; 0x194: 0x00000000
kfdhdb.ub4spare[39]:             254613 ; 0x198: 0x0003e295
kfdhdb.ub4spare[40]:                  0 ; 0x19c: 0x00000000
kfdhdb.ub4spare[41]:                  0 ; 0x1a0: 0x00000000
kfdhdb.ub4spare[42]:                  0 ; 0x1a4: 0x00000000
kfdhdb.ub4spare[43]:                  0 ; 0x1a8: 0x00000000
kfdhdb.ub4spare[44]:                  0 ; 0x1ac: 0x00000000
kfdhdb.ub4spare[45]:                  0 ; 0x1b0: 0x00000000
kfdhdb.ub4spare[46]:                  0 ; 0x1b4: 0x00000000
kfdhdb.ub4spare[47]:                  0 ; 0x1b8: 0x00000000
kfdhdb.ub4spare[48]:                  0 ; 0x1bc: 0x00000000
kfdhdb.ub4spare[49]:                  0 ; 0x1c0: 0x00000000
kfdhdb.ub4spare[50]:                  0 ; 0x1c4: 0x00000000
kfdhdb.ub4spare[51]:                  0 ; 0x1c8: 0x00000000
kfdhdb.ub4spare[52]:                  0 ; 0x1cc: 0x00000000
kfdhdb.ub4spare[53]:                  0 ; 0x1d0: 0x00000000
kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare:             43605 ; 0x1de: 0xaa55
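Spotting a nonzero spare field by eye in a dump this long is error-prone. A small sketch of scanning a saved kfed dump for the anomaly (the dump file path and the two-line sample standing in for the full output are made up for illustration; on a real system you would save the real `kfed read` output to the file first):

```shell
# On a real system:  $GRID_HOME/bin/kfed read /dev/raw/raw2 > /tmp/kfed_raw2.txt
# Here a two-line sample stands in for the full dump.
cat > /tmp/kfed_raw2.txt <<'EOF'
kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare:             43605 ; 0x1de: 0xaa55
EOF

# Spare fields should be zero in a healthy header; print any that are not.
awk -F'[:;]' '/spare/ && $2 + 0 != 0 { print $1, "=", $2 + 0 }' /tmp/kfed_raw2.txt
```

Run against this post's corrupted dump, that filter would flag kfdhdb.ub4spare[39] and kfdhdb.acdb.ub2spare.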

The header information was corrupted: the kfed output for the affected disk shows 0xaa55 in the kfdhdb.acdb.ub2spare field. I backed up the header information using the dd command.

# dd if=/dev/raw/raw2 of=/tmp/DATA.dd bs=1M count=10

Next, I obtained the allocation unit size of the damaged disk:

# $GRID_HOME/bin/kfed read /dev/raw/raw2 | grep ausize

Now we can repair damaged header information.

# $GRID_HOME/bin/kfed repair /dev/raw/raw2 aus=1048576

I rechecked the header information of the damaged disk using the kfed utility. The output for the affected disk now shows 0x0000 in the kfdhdb.acdb.ub2spare field:

# $GRID_HOME/bin/kfed read /dev/raw/raw2

kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                   945480537 ; 0x00c: 0x385ae359
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=32
kfdhdb.driver.reserved[0]:     33686018 ; 0x008: 0x02020202
kfdhdb.driver.reserved[1]:     33686018 ; 0x00c: 0x02020202
kfdhdb.driver.reserved[2]:     33686018 ; 0x010: 0x02020202
kfdhdb.driver.reserved[3]:     33686018 ; 0x014: 0x02020202
kfdhdb.driver.reserved[4]:     33686018 ; 0x018: 0x02020202
kfdhdb.driver.reserved[5]:     33686018 ; 0x01c: 0x02020202
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:               DATA_0000 ; 0x028: length=9
kfdhdb.grpname:                    DATA ; 0x048: length=4
kfdhdb.fgname:                DATA_0000 ; 0x068: length=9
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             32983952 ; 0x0a8: HOUR=0x10 DAYS=0x1c MNTH=0x2 YEAR=0x7dd
kfdhdb.crestmp.lo:           3911025664 ; 0x0ac: USEC=0x0 MSEC=0x361 SECS=0x11 MINS=0x3a
kfdhdb.mntstmp.hi:             32984464 ; 0x0b0: HOUR=0x10 DAYS=0xc MNTH=0x3 YEAR=0x7dd
kfdhdb.mntstmp.lo:           3973637120 ; 0x0b4: USEC=0x0 MSEC=0x239 SECS=0xd MINS=0x3b
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                  512000 ; 0x0c4: 0x0007d000
kfdhdb.pmcnt:                         6 ; 0x0c8: 0x00000006
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             32983952 ; 0x0e4: HOUR=0x10 DAYS=0x1c MNTH=0x2 YEAR=0x7dd
kfdhdb.grpstmp.lo:           3909777408 ; 0x0e8: USEC=0x0 MSEC=0x29e SECS=0x10 MINS=0x3a
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000
kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000
kfdhdb.ub4spare[1]:                   0 ; 0x100: 0x00000000
kfdhdb.ub4spare[2]:                   0 ; 0x104: 0x00000000
kfdhdb.ub4spare[3]:                   0 ; 0x108: 0x00000000
kfdhdb.ub4spare[4]:                   0 ; 0x10c: 0x00000000
kfdhdb.ub4spare[5]:                   0 ; 0x110: 0x00000000
kfdhdb.ub4spare[6]:                   0 ; 0x114: 0x00000000
kfdhdb.ub4spare[7]:                   0 ; 0x118: 0x00000000
kfdhdb.ub4spare[8]:                   0 ; 0x11c: 0x00000000
kfdhdb.ub4spare[9]:                   0 ; 0x120: 0x00000000
kfdhdb.ub4spare[10]:                  0 ; 0x124: 0x00000000
kfdhdb.ub4spare[11]:                  0 ; 0x128: 0x00000000
kfdhdb.ub4spare[12]:                  0 ; 0x12c: 0x00000000
kfdhdb.ub4spare[13]:                  0 ; 0x130: 0x00000000
kfdhdb.ub4spare[14]:                  0 ; 0x134: 0x00000000
kfdhdb.ub4spare[15]:                  0 ; 0x138: 0x00000000
kfdhdb.ub4spare[16]:                  0 ; 0x13c: 0x00000000
kfdhdb.ub4spare[17]:                  0 ; 0x140: 0x00000000
kfdhdb.ub4spare[18]:                  0 ; 0x144: 0x00000000
kfdhdb.ub4spare[19]:                  0 ; 0x148: 0x00000000
kfdhdb.ub4spare[20]:                  0 ; 0x14c: 0x00000000
kfdhdb.ub4spare[21]:                  0 ; 0x150: 0x00000000
kfdhdb.ub4spare[22]:                  0 ; 0x154: 0x00000000
kfdhdb.ub4spare[23]:                  0 ; 0x158: 0x00000000
kfdhdb.ub4spare[24]:                  0 ; 0x15c: 0x00000000
kfdhdb.ub4spare[25]:                  0 ; 0x160: 0x00000000
kfdhdb.ub4spare[26]:                  0 ; 0x164: 0x00000000
kfdhdb.ub4spare[27]:                  0 ; 0x168: 0x00000000
kfdhdb.ub4spare[28]:                  0 ; 0x16c: 0x00000000
kfdhdb.ub4spare[29]:                  0 ; 0x170: 0x00000000
kfdhdb.ub4spare[30]:                  0 ; 0x174: 0x00000000
kfdhdb.ub4spare[31]:                  0 ; 0x178: 0x00000000
kfdhdb.ub4spare[32]:                  0 ; 0x17c: 0x00000000
kfdhdb.ub4spare[33]:                  0 ; 0x180: 0x00000000
kfdhdb.ub4spare[34]:                  0 ; 0x184: 0x00000000
kfdhdb.ub4spare[35]:                  0 ; 0x188: 0x00000000
kfdhdb.ub4spare[36]:                  0 ; 0x18c: 0x00000000
kfdhdb.ub4spare[37]:                  0 ; 0x190: 0x00000000
kfdhdb.ub4spare[38]:                  0 ; 0x194: 0x00000000
kfdhdb.ub4spare[39]:                  0 ; 0x198: 0x00000000
kfdhdb.ub4spare[40]:                  0 ; 0x19c: 0x00000000
kfdhdb.ub4spare[41]:                  0 ; 0x1a0: 0x00000000
kfdhdb.ub4spare[42]:                  0 ; 0x1a4: 0x00000000
kfdhdb.ub4spare[43]:                  0 ; 0x1a8: 0x00000000
kfdhdb.ub4spare[44]:                  0 ; 0x1ac: 0x00000000
kfdhdb.ub4spare[45]:                  0 ; 0x1b0: 0x00000000
kfdhdb.ub4spare[46]:                  0 ; 0x1b4: 0x00000000
kfdhdb.ub4spare[47]:                  0 ; 0x1b8: 0x00000000
kfdhdb.ub4spare[48]:                  0 ; 0x1bc: 0x00000000
kfdhdb.ub4spare[49]:                  0 ; 0x1c0: 0x00000000
kfdhdb.ub4spare[50]:                  0 ; 0x1c4: 0x00000000
kfdhdb.ub4spare[51]:                  0 ; 0x1c8: 0x00000000
kfdhdb.ub4spare[52]:                  0 ; 0x1cc: 0x00000000
kfdhdb.ub4spare[53]:                  0 ; 0x1d0: 0x00000000
kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000

Now the DATA disk group mounts successfully.

I recommend adding ASM disk header backups to your current backup plan.

I had not taken a backup of the damaged ASM disk header beforehand, so how did the recovery above work? ASM keeps a second copy of the disk header, and kfed repair restores the corrupted block from that second copy. If the second copy had also been corrupted, we would have had to repair the header information from a backup.
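The backup itself is just a dd copy of the start of the disk, as shown earlier. A sketch of rehearsing the backup/restore cycle on a dummy file (all paths and the 'ORCLDISK-HEADER' marker below are made up for illustration; on a real system the input would be the ASM disk device, e.g. /dev/raw/raw2):

```shell
DISK=/tmp/fake_asm_disk       # hypothetical stand-in for the ASM disk device
BACKUP=/tmp/DATA_header.dd    # hypothetical backup file name

# Build a 2 MB "disk" whose first bytes stand in for the ASM disk header.
printf 'ORCLDISK-HEADER' > "$DISK"
dd if=/dev/zero bs=1M count=2 >> "$DISK" 2>/dev/null

# Back up the start of the disk (the region holding the header), as in the post:
#   dd if=/dev/raw/raw2 of=/tmp/DATA.dd bs=1M count=10
dd if="$DISK" of="$BACKUP" bs=1M count=1 2>/dev/null

# Simulate header corruption by overwriting the first bytes in place.
printf 'XXXXXXXXXXXXXXX' | dd of="$DISK" conv=notrunc 2>/dev/null

# Restore the header region from the backup; conv=notrunc keeps the rest intact.
dd if="$BACKUP" of="$DISK" conv=notrunc 2>/dev/null

head -c 15 "$DISK"; echo    # the original marker is back
```

conv=notrunc matters on the restore: without it, dd would truncate the "disk" to the size of the backup.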





How to upgrade Data Guard primary and standby databases

When you have a Data Guard setup on version 10g and would like to upgrade both primary and standby to 11g, the idea is:
install the 11g binaries on both the primary and the standby;
shut down the primary and the standby;
mount the standby from the 11g home and start MRP;
upgrade the primary.
The standby is then upgraded implicitly, by applying the redo generated during the upgrade of the primary.

Tuesday, April 22, 2014

Automatic db_file_multiblock_read_count (MBRC)

Automatic db_file_multiblock_read_count:

The default value corresponds to the maximum I/O size that can be performed efficiently, and it is platform-dependent.

DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O during table scans. It specifies the maximum number of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan depends on such factors as the size of the table, the multiblock read count, and whether parallel execution is being utilized for the operation.

As of Oracle Database 10g Release 2, the default value of this parameter corresponds to the maximum I/O size that can be performed efficiently. This value is platform-dependent and is 1 MB for most platforms. Because the parameter is expressed in blocks, it is set to a value equal to the maximum efficient I/O size divided by the standard block size.
 
According to Oracle, this is the formula for setting db_file_multiblock_read_count:
                                 max I/O chunk size
db_file_multiblock_read_count  = -------------------
                                 db_block_size


Maximum multiblock read count = 1 MB / 8 KB = 128
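That arithmetic can be checked directly; the values here are the 1 MB maximum I/O size and 8 KB block size used above:

```shell
MAX_IO=$((1024 * 1024))      # 1 MB maximum efficient I/O size (platform-dependent)
BLOCK_SIZE=$((8 * 1024))     # 8 KB db_block_size
echo $((MAX_IO / BLOCK_SIZE))    # prints 128
```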

Note that if the number of sessions is extremely large the multiblock read count value is decreased to avoid the buffer cache getting flooded with too many table scan buffers.

Online transaction processing (OLTP) and batch environments typically have values in the range of 4 to 16 for this parameter. DSS and data warehouse environments tend to benefit most from maximizing the value of this parameter. The optimizer is more likely to choose a full table scan over an index if the value of this parameter is high.
The maximum value is the operating system's maximum I/O size expressed as Oracle blocks ((max I/O size)/DB_BLOCK_SIZE). If you set this parameter to a value greater than the maximum, Oracle uses the maximum.

The "real" mbrc is actually somewhere in between, since serial multiblock read requests are processed by the buffer cache and split into two or more requests if some blocks are already pinned in the buffer cache, or when the segment size is smaller than the read size.
The mbrc value gathered as part of workload statistics is thus useful for FTS estimation.

One of the new features released in 10gR2 is the automatic DB_FILE_MULTIBLOCK_READ_COUNT.
The DB_FILE_MULTIBLOCK_READ_COUNT parameter controls the amount of block prefetching done in the buffer cache during scan operations, such as full table scan and index fast full scan. The value of this parameter can have a significant impact on the overall database performance. This feature enables Oracle Database to automatically select the appropriate value for this parameter depending on the operating system optimal I/O size and the size of the buffer cache.
This feature simplifies manageability by automating the tuning of DB_FILE_MULTIBLOCK_READ_COUNT initialization parameter.

This parameter specifies the number of blocks that are read in a single I/O during a full table scan or index fast full scan. The optimizer uses the value of DB_FILE_MULTIBLOCK_READ_COUNT to cost full table scans and index fast full scans. Larger values result in a cheaper cost for full table scans and can result in the optimizer choosing a full table scan over an index scan. If this parameter is not set explicitly (or is set to 0), the optimizer uses a default value of 8 when costing full table scans and index fast full scans.


Statistics collection in oracle 11gR2

Oracle has addressed two major areas in statistics gathering:

1) Automatic statistics collection
2) Speed and accuracy, by changing the sampling

1) Automatic statistics collection:

In 11gR2, statistics collection is automatic by default. Statistics are gathered by an automatic job when the statistics for an object are found to be stale, or when a DDL command has been used against the object. The job details can be seen with:

select client_name, status, consumer_group, attributes
from DBA_AUTOTASK_CLIENT;

CLIENT_NAME                      STATUS    CONSUMER_GROUP
-------------------------------- --------- ------------------------
auto optimizer stats collection  ENABLED   ORA$AUTOTASK_STATS_GROUP

The stale percentage is determined by the number of DML operations that occurred on the table, i.e., the percentage of rows updated, inserted, or deleted. By default the threshold for staleness is 10%. You can check this value with:

select dbms_stats.GET_PREFS('STALE_PERCENT','OWNER','TABLENAME') from dual;

To check which objects are considered for automatic stats collection, use the AUTOSTATS_TARGET preference. Its possible values:

'ALL'    - statistics collected for all objects in the system
'ORACLE' - statistics collected for all Oracle-owned objects
'AUTO'   - Oracle decides which objects to collect statistics for

select dbms_stats.GET_PREFS('AUTOSTATS_TARGET') from dual;

DBMS_STATS.GET_PREFS('AUTOSTATS_TARGET')
-------------------------------------------
AUTO

To check the percentage of rows modified, inserted, or deleted on your tables:

select * from DBA_TAB_MODIFICATIONS where table_owner='SCOTT';

To check the history of stats collection on your tables:

select * from DBA_TAB_STATS_HISTORY where owner='SCOTT';

select client_name, JOB_SCHEDULER_STATUS
from DBA_AUTOTASK_CLIENT_JOB
where client_name like '%stats%';

To enable or disable the auto stats job, use the following procedures.

Enable:

BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/

Disable:

BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/

SELECT client_name, window_name, jobs_created, jobs_started, jobs_completed
FROM dba_autotask_client_history
WHERE client_name like '%stats%';

SELECT client_name, JOB_SCHEDULER_STATUS
FROM DBA_AUTOTASK_CLIENT_JOB
WHERE client_name = 'auto optimizer stats collection';

2) Speed and accuracy by changing the sampling:

Looking back to the previous major release (10.2.0), the methodology for statistics gathering relied basically on the sample size: the larger the sample size, the better the statistics, at the cost of increased run time. Oracle also introduced the auto sample size default, which determined the sample size to use for each table.

Using the default settings did produce statistics in a shorter time, but at the expense of inaccurate or plainly wrong histograms and inaccurate density values. Partitioned tables were affected most often when sample sizes smaller than 90% were used; the auto sample size Oracle computed often hit the 30% mark, leaving histogram values woefully in need of adjustment. Even though Oracle would adjust the sampled values up at the table level (to more accurately reflect the actual number of rows), the histogram values remained at the actual sampled size, throwing density/selectivity calculations off by a considerable margin and adversely affecting performance based on cost calculations.

Oracle 11.2.0 has corrected these issues with a better methodology that not only increases the speed of statistics collection but also provides greater accuracy. An improved sampling algorithm, along with a reduction in the number of full table scans executed, has resulted in more accurate statistics in a shorter amount of time, and better statistics definitely mean better execution plans. The histograms generated also have accurate numbers, better reflecting the distribution of the table data. Partitioned tables no longer cause Oracle to generate spurious histograms, such as frequency histograms on single-valued non-null columns, making CBO calculations more accurate. Notice also that the auto sample size in 11.2.0 generates statistics percentages very close to 100, indicating that the new auto sampling algorithm runs faster yet produces results rivaling a 100% sample size, in approximately one tenth of the time. Less time spent on statistics is more time for a DBA to address other, more pressing issues. That's a good thing no matter how you look at it.
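The 10% staleness rule described above can be sketched as simple arithmetic. The counts here are hypothetical stand-ins for NUM_ROWS from DBA_TABLES and the DML totals from DBA_TAB_MODIFICATIONS:

```shell
NUM_ROWS=100000       # hypothetical row count from DBA_TABLES
MODS=12500            # hypothetical inserts + updates + deletes since last analyze
STALE_PERCENT=10      # default threshold, per DBMS_STATS.GET_PREFS('STALE_PERCENT')

# The table is considered stale once the modified fraction reaches the threshold.
if [ $((MODS * 100 / NUM_ROWS)) -ge "$STALE_PERCENT" ]; then
  echo "stale - the auto task will regather statistics"
else
  echo "fresh"
fi
```

With these numbers 12% of the rows have changed, so the auto task would regather statistics for the table.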

Monday, March 10, 2014

Enable/disable oracle components using CHOPT utility in oracle 11gR2

Enable/disable Oracle components using the chopt command-line utility:


From 11g Release 2, we can enable/disable Oracle components using a command-line utility called chopt.
This utility is located in $ORACLE_HOME/bin.

In older versions, in order to enable/disable Oracle components we had to use OUI (up to 11.1), or we had to enable/disable the component at the binary level using a relink (make) command (11.1.0.7).

From 11g Release 2, the chopt utility can enable/disable the following components:

dm = Oracle Data Mining RDBMS Files
dv = Oracle Database Vault option
lbac = Oracle Label Security
olap = Oracle OLAP
partitioning = Oracle Partitioning
rat = Oracle Real Application Testing

Syntax: chopt [enable | disable] db_option

bash-3.00$ chopt
usage:
chopt

options:
dm = Oracle Data Mining RDBMS Files
dv = Oracle Database Vault option
lbac = Oracle Label Security
olap = Oracle OLAP
partitioning = Oracle Partitioning
rat = Oracle Real Application Testing

Steps to enable/disable Oracle components:

1) Shut down the database for which you want to enable/disable the component.
2) Enable/disable the component using the chopt command.
3) Start the database and verify the installed components using the v$option view.

Example: enable partitioning for a database

select * from v$option where parameter='Partitioning';

PARAMETER        VALUE
----------------------- -----------
Partitioning      FALSE

shut immediate

cd $ORACLE_HOME/bin

bash-3.00$ chopt enable partitioning

startup

select * from v$option where parameter='Partitioning';

PARAMETER        VALUE
----------------------- -----------
Partitioning      TRUE


Partitioning new features 12c

* Table partitions can now be moved from one tablespace to another online:

The Oracle 12c Database has introduced a number of great new capabilities associated with online DDL activities. One of these really useful features is the capability to now move table partitions online while maintaining associated indexes.

If one session has an active transaction (i.e., not yet committed)
while we move a table partition in another session as before:

SQL> alter table muse move partition p3 tablespace users update indexes;
  
alter table muse move partition p3 tablespace users update indexes
  
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired


We now use the new 12c UPDATE INDEXES ONLINE clause:


SQL> alter table muse move partition p3 tablespace users update indexes online;


The session now simply hangs until the transaction in session one completes, in a similar manner to an online index rebuild operation.

Saturday, March 8, 2014

Upgrade DST timezone version from 4 to 14 after upgrading the db from 10.2.0.4 to 11.2.0.3

After upgrading the database from 10.2.0.4 to 11.2.0.3, find the DST time zone file version using:

SQL> SELECT version FROM v$timezone_file;


   VERSION
----------
         4

This value is less than 14, so we have to upgrade it to 14.

SQL>  SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
  2  FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION         4
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE              NONE


Check that the version-14 time zone .dat files are present:

/opt/oracle/product/11.2.0.3/dbs > ls -ltr $ORACLE_HOME/oracore/zoneinfo/time*14*
-rw-r--r--   1 oracle   dba         336K Aug 21  2011 /opt/oracle/product/11.2.0.3/oracore/zoneinfo/timezone_14.dat
-rw-r--r--   1 oracle   dba         773K Aug 21  2011 /opt/oracle/product/11.2.0.3/oracore/zoneinfo/timezlrg_14.dat
/opt/oracle/product/11.2.0.3/dbs >


Connect as the SYS user and prepare for the upgrade.

SQL> exec DBMS_DST.BEGIN_PREPARE(14);
PL/SQL procedure successfully completed


SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION         4
DST_SECONDARY_TT_VERSION       14
DST_UPGRADE_STATE              PREPARE

Now 14 appears as the secondary time zone version.

Truncate the DST working tables:

SQL> TRUNCATE TABLE SYS.DST$TRIGGER_TABLE;
Table truncated.
SQL> TRUNCATE TABLE sys.dst$affected_tables;
Table truncated.
SQL> TRUNCATE TABLE sys.dst$error_table;
Table truncated.



Run the PL/SQL block below to find DST-affected tables and log any errors:

SQL> BEGIN
DBMS_DST.FIND_AFFECTED_TABLES
(affected_tables => 'sys.dst$affected_tables',
log_errors => TRUE,
log_errors_table => 'sys.dst$error_table');
END;
/

PL/SQL procedure successfully completed.

The following query should return no rows, which confirms that no tables are affected by the DST change:

SQL> SELECT * FROM sys.dst$affected_tables;
no rows selected


End the prepare.

SQL>  EXEC DBMS_DST.END_PREPARE;
PL/SQL procedure successfully completed.

Shut down the database and start it in UPGRADE mode.

SQL> shut immediate;

Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup upgrade
ORACLE instance started.

Total System Global Area 3424985088 bytes
Fixed Size                  2163120 bytes
Variable Size            2315263568 bytes
Database Buffers         1090519040 bytes
Redo Buffers               17039360 bytes
Database mounted.
Database opened.
SQL>


Execute the DST upgrade.

SQL> EXEC DBMS_DST.BEGIN_UPGRADE(14);

PL/SQL procedure successfully completed.

SQL>  SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION         14
DST_SECONDARY_TT_VERSION       4
DST_UPGRADE_STATE              UPGRADE

SQL>


Shut down the database, start it in NORMAL mode, and verify the new version:

SQL> SELECT version FROM v$timezone_file;
   VERSION
----------
        14



============