Starting with Oracle Database 11g Release 2, Oracle Clusterware and Oracle ASM are installed into a single home directory, which is called the Grid home. Oracle Grid Infrastructure for a cluster refers to the installation of the combined products. Oracle Clusterware and Oracle ASM are still individual products, and are referred to by those names.
About ASM: Oracle ASM is an integrated, high-performance volume manager and file system. With Oracle Database 11g Release 2, Oracle ASM adds support for storing the Oracle Clusterware OCR and voting disk files, as well as a general-purpose cluster file system called Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
Oracle ASM is based on the principle that the database should manage storage instead of requiring an administrator to do it. Oracle ASM eliminates the need for you to directly manage potentially thousands of Oracle database files. Oracle ASM groups the disks in your storage system into one or more disk groups. You manage a small set of disk groups and Oracle ASM automates the placement of the database files within those disk groups.
The Oracle Cluster Registry (OCR) and voting disks can also be placed on Oracle ASM disk groups. When using Oracle Real Application Clusters (Oracle RAC), each instance must have access to the data files and recovery files for the Oracle RAC database. Using Oracle Automatic Storage Management (Oracle ASM) is an easy way to satisfy this requirement.
Striping—Oracle ASM spreads data evenly across all disks in a disk group to optimize performance and utilization. This even distribution of database files eliminates the need for regular monitoring and I/O performance tuning.
Mirroring—Oracle ASM increases data availability by optionally mirroring any file. Oracle ASM mirrors at the file level, unlike operating system mirroring, which mirrors at the disk level. Mirroring means keeping redundant copies, or mirrored copies, of each extent of the file, to help avoid data loss caused by disk failures. The mirrored copy of each file extent is always kept on a different disk from the original copy. If a disk fails, then Oracle ASM continues to access affected files by accessing mirrored copies on the surviving disks in the disk group.
Online storage reconfiguration and dynamic rebalancing—When you add a disk to a disk group, Oracle ASM automatically redistributes the data so that it is evenly spread across all disks in the disk group, including the new disk. The process of redistributing data so that it is also spread across the newly added disks is known as rebalancing. It is done in the background and with minimal impact to database performance (see the sketch after this list).
Managed file creation and deletion—Oracle ASM automatically assigns file names when files are created, and automatically deletes files when they are no longer needed by the database.
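For example, adding a disk to an existing disk group triggers a background rebalance automatically. The following is a minimal sketch, run from SQL*Plus on the Oracle ASM instance; the disk group name data and the ASMLib disk label ORCL:DISK4 are hypothetical:
SQL> ALTER DISKGROUP data ADD DISK 'ORCL:DISK4';
SQL> SELECT operation, state, power FROM v$asm_operation;
The second query reports the progress of any rebalance operation that is currently running.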
ASM Instance: Oracle ASM is implemented as a special kind of Oracle instance, with its own System Global Area and background processes. The Oracle ASM instance is tightly integrated with Oracle Clusterware and Oracle Database. Every server running one or more database instances that use Oracle ASM for storage has an Oracle ASM instance. In an Oracle RAC environment, there is one Oracle ASM instance for each node, and the Oracle ASM instances communicate with each other on a peer-to-peer basis. Only one Oracle ASM instance is supported on a node, but you can have multiple database instances that use Oracle ASM residing on the same node.
RAC: Oracle RAC extends Oracle Database so that you can store, update, and efficiently retrieve data using multiple database instances on different servers at the same time.
Oracle RAC provides the software that manages multiple servers and instances as a single group. The data files that comprise the database must reside on shared storage that is accessible from all servers that are part of the cluster. Each server in the cluster runs the Oracle RAC software. A single-instance Oracle database has a one-to-one relationship between the database and an instance. An Oracle RAC database, however, has a one-to-many relationship between the database and its instances: multiple instances access a single set of database files.
Cache Fusion: Oracle RAC uses Cache Fusion to synchronize the data stored in the buffer cache of each database instance. Cache Fusion moves current data blocks (which reside in memory) between database instances, rather than having one database instance write the data blocks to disk and requiring another database instance to reread the data blocks from disk. When a data block located in the buffer cache of one instance is required by another instance, Cache Fusion transfers the data block directly between the instances using the interconnect, enabling the Oracle RAC database to access and modify data as if the data resided in a single buffer cache. The Global Enqueue Service (GES) and the Global Cache Service (GCS) are the components responsible for Cache Fusion.
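To see which private interconnect each instance uses for this traffic, you can query the GV$CLUSTER_INTERCONNECTS view from any instance (a sketch; the output depends on your configuration):
SQL> SELECT inst_id, name, ip_address FROM gv$cluster_interconnects;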
Oracle RAC One Node: Oracle Real Application Clusters One Node (Oracle RAC One Node) is a single instance of an Oracle RAC database that runs on one node in a cluster. This feature allows you to consolidate many databases into one cluster with minimal overhead, protecting them from both planned and unplanned downtime. The consolidated databases reap the high availability benefits of failover protection, online rolling patch application, and rolling upgrades for the operating system and Oracle Clusterware. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). Oracle RAC One Node enables better availability than cold failover for single-instance databases because of the Oracle technology called online database relocation, which intelligently migrates database instances and connections to other cluster nodes for high availability and load balancing. Online database relocation is performed using the Server Control Utility (SRVCTL). If you run your applications on Oracle RAC One Node, and your applications grow to the point that a single node cannot supply the resources they need, then Oracle RAC One Node can be upgraded online to Oracle Real Application Clusters. If the node running your Oracle RAC One Node database becomes overloaded, then you can migrate the database instance to another node in the cluster using online database relocation with no downtime for application users.
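For example, an online relocation is initiated with a single SRVCTL command. This is a sketch, assuming a database named rone1 and a target node named racnode2; check srvctl relocate database -h for the exact options in your release:
$ srvctl relocate database -d rone1 -n racnode2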
Tools for managing Oracle RAC:
Cluster Verification Utility (CVU)—CVU is a command-line tool that you can use to verify a range of cluster and Oracle RAC components such as shared storage devices, networking configurations, system requirements, and Oracle Clusterware, as well as operating system groups and users. You can use CVU for preinstallation and postinstallation checks of your cluster environment. CVU is especially useful during preinstallation and during installation of Oracle Clusterware and Oracle RAC components. OUI runs CVU after the Oracle Clusterware installation to verify your environment.
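For example, a typical preinstallation check before installing Oracle Clusterware looks like the following sketch, where the node names docrac1 and docrac2 are hypothetical:
$ cluvfy stage -pre crsinst -n docrac1,docrac2 -verbose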
Server Control (SRVCTL)—SRVCTL is a command-line interface that you can use to manage the resources defined in the Oracle Cluster Registry (OCR). These resources include the node applications, called nodeapps, that comprise Oracle Clusterware, which includes the Oracle Notification Service (ONS), the Global Services Daemon (GSD), and the Virtual IP (VIP). Other resources that can be managed by SRVCTL include databases, instances, listeners, services, and applications. Using SRVCTL, you can start and stop nodeapps, databases, instances, listeners, and services; delete or move instances and services; add services; and manage configuration information.
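For example (a sketch; the database name orcl, instance name orcl1, and node name racnode1 are hypothetical):
$ srvctl status database -d orcl
$ srvctl stop instance -d orcl -i orcl1
$ srvctl status nodeapps -n racnode1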
Cluster Ready Services Control (CRSCTL)—CRSCTL is a command-line tool that you can use to manage Oracle Clusterware daemons. These daemons include Cluster Synchronization Services (CSS), Cluster-Ready Services (CRS), and Event Manager (EVM). You can use CRSCTL to start and stop Oracle Clusterware and to determine the current status of your Oracle Clusterware installation.
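For example, the following commands, run as root, check the health of the Oracle Clusterware daemons and list the resources the stack manages in tabular form (the -t option applies to Oracle Clusterware 11g Release 2):
# crsctl check crs
# crsctl stat res -t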
Oracle Automatic Storage Management Command Line utility (ASMCMD)—ASMCMD is a command-line utility that you can use to manage Oracle ASM instances, Oracle ASM disk groups, file access control for disk groups, files and directories within Oracle ASM disk groups, templates for disk groups, and Oracle ASM volumes.
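For example, with the environment set to the Oracle ASM instance (a sketch; the disk group name DATA is hypothetical):
$ asmcmd lsdg
$ asmcmd ls +DATA
The lsdg command lists the mounted disk groups along with their state, redundancy, and free space.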
Oracle Clusterware achieves superior scalability and high availability by using the following components:
Voting disk–Manages cluster membership and arbitrates cluster ownership between the nodes in case of network failures. The voting disk is a file that resides on shared storage. For high availability, Oracle recommends that you have multiple voting disks, and that you have an odd number of voting disks. If you define a single voting disk, then use mirroring at the file system level for redundancy.
Oracle Cluster Registry (OCR)–Maintains cluster configuration information and configuration information about any cluster database within the cluster. The OCR contains information such as which database instances run on which nodes and which services run on which databases. The OCR also stores information about processes that Oracle Clusterware controls. The OCR resides on shared storage that is accessible by all the nodes in your cluster. Oracle Clusterware can multiplex, or maintain multiple copies of, the OCR and Oracle recommends that you use this feature to ensure high availability.
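You can inspect both of these components from the command line. For example (ocrcheck should be run as root to perform a complete check):
$ crsctl query css votedisk
# ocrcheck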
In previous releases, bonding, trunking, teaming, or similar technology was required to make use of redundant networks for the interconnect. Starting with Oracle Database 11g Release 2 (11.2.0.2), Oracle Grid Infrastructure for a cluster and Oracle RAC can use redundant network interconnects, without other network technology, to optimize communication in the cluster. Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes. Similarly, you should configure the same private interface names for all nodes: if eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node. Network interface names are case-sensitive.
SCAN (Single Client Access Name): During installation, a SCAN for the cluster is configured, which is a domain name that resolves to all the SCAN addresses allocated for the cluster. The IP addresses used for the SCAN addresses must be on the same subnet as the VIP addresses. The SCAN must be unique within your network, and the SCAN addresses should not respond to ping commands before installation. During installation of Oracle Grid Infrastructure for a cluster, a listener is created for each of the SCAN addresses. Clients that access the Oracle RAC database should use the SCAN or SCAN address, not the VIP name or address. If an application uses a SCAN to connect to the cluster database, then the network configuration files on the client computer do not have to be modified when nodes are added to or removed from the cluster. The SCAN and its associated IP addresses provide a stable name for clients to use for connections, independent of the nodes that form the cluster. Clients can connect to the cluster database using the easy connect naming method and the SCAN. The fully qualified SCAN for the cluster defaults to cluster_name-scan.GNS_subdomain_name, for example docrac-scan.example.com. The short SCAN for the cluster is docrac-scan. You can use any name for the SCAN, provided it is unique within your network and conforms to the RFC 952 standard.
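For example, you can verify that the SCAN resolves to its allocated addresses, and then connect through it with the easy connect naming method. This is a sketch: docrac-scan.example.com is the sample SCAN from above, and the service name myservice is hypothetical:
$ nslookup docrac-scan.example.com
$ sqlplus system@//docrac-scan.example.com:1521/myservice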
To determine if the operating system requirements for Oracle Linux have been met:
1. To determine which distribution and version of Linux is installed, run the following command at the operating system prompt as the root user:
# cat /proc/version
2. To determine which chip architecture each server is using and which version of the software you should install, run the following command at the operating system prompt as the root user:
# uname -m
This command displays the processor type. For a 64-bit architecture, the output would be "x86_64".
Determine your cluster name. The cluster name should satisfy the following conditions:
■ The cluster name is globally unique throughout your host domain.
■ The cluster name is at least one character long and less than 15 characters long.
■ The cluster name must consist of the same character set used for host names: single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).
■ If you use third-party vendor clusterware, then Oracle recommends that you use the vendor cluster name.
ASMLib: Another option for configuring shared disks is to use the ASMLib utility to mark the shared disks as candidate disks. If you configure a shared disk to be mounted automatically when the server restarts, then, unless you have configured special files for device persistence, a disk that appeared as /dev/sdg before the system shutdown can appear as /dev/sdh after the system is restarted.
If you use ASMLib to configure the shared disks, then when you restart the node:
■ The disk device names do not change
■ The ownership and group membership for these disk devices remains the same
■ You can copy the disk configuration implemented by Oracle ASM to other nodes in the cluster by running a simple command.
Installing ASMLib: To install the ASMLib software packages:
1. Download the ASMLib packages to each node in your cluster.
2. Change to the directory where the package files were downloaded.
3. As the root user, use the rpm command to install the packages. For example:
# rpm -Uvh oracleasm-support-2.1.3-1.el4.x86_64.rpm
# rpm -Uvh oracleasmlib-2.0.4-1.el4.x86_64.rpm
# rpm -Uvh oracleasm-2.6.9-55.0.12.ELsmp-2.0.3-1.x86_64.rpm
After you have completed these commands, ASMLib is installed on the system.
4. Repeat steps 2 and 3 on each node in your cluster.
Configuring ASMLib: Now that the ASMLib software is installed, the system administrator must take a few steps to make the Oracle ASM driver available. The Oracle ASM driver must be loaded, and the driver file system must be mounted. This is taken care of by the initialization script, /usr/sbin/oracleasm.
To configure the ASMLib software after installation:
1. As the root user, run the following command:
# /usr/sbin/oracleasm configure
The script prompts you for the default user and group to own the Oracle ASM driver access point. Specify the Oracle Database software owner (oracle) and the OSDBA group (dba).
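The prompts look similar to the following illustrative session (the exact wording varies by ASMLib version):
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y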
2. Repeat step 1 on each node in your cluster.
Using ASMLib to Create Oracle ASM Disks: Every disk that is used in an Oracle ASM disk group must be accessible on each node. After you make the physical disk available to each node, you can then mark the disk device as an Oracle ASM disk. The /usr/sbin/oracleasm script is used for this task.
To create Oracle ASM disks using ASMLib:
1. As the root user, use oracleasm to create Oracle ASM disks using the following syntax:
# /usr/sbin/oracleasm createdisk disk_name device_partition_name
In this command, disk_name is the name you choose for the Oracle ASM disk, and device_partition_name is the name of the disk partition to mark as an Oracle ASM disk. The name you choose must contain only ASCII capital letters, numbers, or underscores, and must start with a letter, for example, DISK1, VOL1, or RAC_FILE1. For example:
# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
If you must unmark a disk that was used in a createdisk command, then you can use the following syntax:
# /usr/sbin/oracleasm deletedisk disk_name
2. Repeat step 1 for each disk that is used by Oracle ASM.
3. After you have created all the Oracle ASM disks for your cluster, use the listdisks command to verify their availability:
# /usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
4. On all the other nodes in the cluster, use the scandisks command to view the newly created Oracle ASM disks. You do not have to create the Oracle ASM disks on each node, only on one node in the cluster.
# /usr/sbin/oracleasm scandisks
Scanning system for ASM disks
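After the disks are marked and visible on all nodes, they can be used to create a disk group from the Oracle ASM instance. The following is a minimal sketch, assuming the DISK1 through DISK3 labels created above; ASMLib disks are addressed with the ORCL: prefix:
$ sqlplus / as sysasm
SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
  2  DISK 'ORCL:DISK1', 'ORCL:DISK2', 'ORCL:DISK3';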