RAC Filesystem Options

RAC Review

Let’s begin by reviewing the structure of a Real Application Clusters (RAC) database. Physically, a RAC consists of several nodes (servers), connected to each other by a private interconnect. The database files are kept on a shared storage subsystem, where they’re accessible to all nodes. And each node has a public network connection.
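
For example, once a RAC database is running, you can see all of the participating instances and their host nodes from any single node by querying the GV$ views, which aggregate V$ data across the cluster (the instance and host names you get back are, of course, site-specific):

    -- Run from SQL*Plus on any node of the cluster
    SELECT inst_id, instance_name, host_name, status
    FROM   gv$instance
    ORDER  BY inst_id;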

In terms of software and configuration, the RAC has three basic components: cluster software and/or Cluster Ready Services, database software, and a method of managing the shared storage subsystem.

  • The cluster software can be vendor-supplied or Oracle-supplied, depending on platform. Oracle’s clusterware is Cluster Ready Services, or CRS. Where vendor clusterware is used, CRS interacts with the vendor clusterware to coordinate cluster membership information; without vendor clusterware, CRS, which is also known as Oracle OSD Clusterware, provides complete cluster management.
  • The database software with the RAC option, of course.
  • Finally, the shared storage subsystem can be managed by one of the following options: raw devices; Automatic Storage Management (ASM); a vendor-supplied cluster file system (CFS), Oracle Cluster File System (OCFS), or vendor-supplied logical volume manager (LVM); or Network File System (NFS) on a certified Network Attached Storage (NAS) device.
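
Once these components are in place, a quick sanity check from any node might look something like the following (10g commands; the database name orcl is hypothetical, and exact syntax varies slightly by release):

    # List the nodes known to the clusterware
    olsnodes -n

    # Verify that the CRS daemons are healthy
    crsctl check crs

    # Check that all database instances are up
    srvctl status database -d orcl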

Storage Options

Let me clarify the foregoing alphabet soup with a table:

Table 1. Storage options for the shared storage subsystem.
Abbrev. | Storage Option
Raw     | Raw devices, no filesystem
ASM     | Automatic Storage Management
CFS     | Cluster File System
OCFS    | Oracle Cluster File System
LVM     | Logical Volume Manager
NFS     | Network File System (must be on a certified NAS device)

Before I delve into each of these storage options, a word about file types. A regular single-instance database has three basic types of files: the database software and dump files; the datafiles, spfile, control files and log files, often referred to collectively as “database files”; and, if RMAN is used, recovery files. A RAC database has an additional type of file, referred to as “CRS files”: the Oracle Cluster Registry (OCR) and the voting disk.
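
On an existing 10g cluster you can ask the CRS utilities where these files live; a quick check (run as a suitably privileged OS user, 10g R2 syntax shown) looks like this:

    # Report the location, size and integrity of the Oracle Cluster Registry
    ocrcheck

    # List the configured voting disk location(s)
    crsctl query css votedisk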

Not all of these files have to be on the shared storage subsystem. The database files and CRS files must be accessible to all instances, so must be on the shared storage subsystem. The database software can be on the shared subsystem and shared between nodes; or each node can have its own ORACLE_HOME. The flash recovery area must be shared by all instances, if used.
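
If you want to verify which files fall into the “database files” category on a running database, a simple dictionary query lists them; the query is standard SQL, and the paths it returns will reflect whichever storage option you chose:

    -- Every path returned here must be visible to all nodes in the cluster
    SELECT name   FROM v$datafile
    UNION ALL
    SELECT name   FROM v$controlfile
    UNION ALL
    SELECT member FROM v$logfile;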

Some storage options can’t handle all of these file types. To take an obvious example, the database software and dump files can’t be stored on raw devices. This isn’t important for the dump files, but it does mean that choosing raw devices precludes having a shared ORACLE_HOME on the shared storage device.

And to further complicate the picture, no OS platform is certified for all of the shared storage options. For example, only Linux and SPARC Solaris are supported with NFS, and the NFS must be on a certified NAS device. The following table spells out which platforms and file types can use each storage option.

Table 2. Platforms and file types able to use each storage option
Storage option       | Platforms                           | File types supported    | File types not supported
Raw                  | All platforms                       | Database, CRS           | Software/dump files, recovery
ASM                  | All platforms                       | Database, recovery      | CRS, software/dump files
Certified vendor CFS | AIX, HP Tru64 UNIX, SPARC Solaris   | All                     | None
LVM                  | HP-UX, HP Tru64 UNIX, SPARC Solaris | All                     | None
OCFS                 | Windows, Linux                      | Database, CRS, recovery | Software/dump files
NFS                  | Linux, SPARC Solaris                | All                     | None

Now that we have an idea of where we can use these storage options, let’s examine each option in a little more detail. We’ll tackle them in order of Oracle’s recommendation, starting with Oracle’s least preferred, raw devices, and finishing up with Oracle’s top recommendation, ASM.

Raw devices

Raw devices need little explanation. As with single-instance Oracle on raw devices, each datafile requires its own raw partition. You will also need to store your software and dump files elsewhere.

Pros: You won’t need to install any vendor- or Oracle-supplied cluster file system software or additional drivers beyond the required clusterware.
Cons: You won’t be able to have a shared Oracle home, and if you want to configure a flash recovery area, you’ll need to choose another option for it. Manageability is an issue. Further, raw devices are a terrible choice if you expect to resize or add tablespaces frequently, as this involves resizing or adding partitions. Adding and resizing datafiles is not trivial: on some operating systems, volume groups need to be deactivated before logical volumes can be manipulated or added.
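
To make the manageability point concrete, here’s a rough sketch of adding space on raw devices; the raw device names are hypothetical and platform-specific, and each partition must already exist (sized a little larger than the datafile) before Oracle can use it:

    -- Each datafile maps to one pre-created raw partition
    CREATE TABLESPACE app_data
      DATAFILE '/dev/raw/raw7' SIZE 490M;

    -- Growing the tablespace later means binding yet another raw partition
    -- at the OS level first, then:
    ALTER TABLESPACE app_data
      ADD DATAFILE '/dev/raw/raw8' SIZE 490M;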

NFS

NFS also requires little explanation. It must be used with a certified NAS device; Oracle has certified a number of NAS filers with its products, including products from EMC, HP, NetApp and others. NFS on NAS can be a cost-effective alternative to a SAN for Linux and Solaris, especially if no SAN hardware is already installed.

Pros: Ease of use and relatively low cost.
Cons: Not suitable for all deployments. Analysts recommend SANs over NAS for large-scale transaction-intensive applications, although there’s disagreement on how big is too big for NAS.
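
For what it’s worth, NFS mounts for Oracle datafiles are normally configured as hard mounts with attribute caching disabled. The exact options are platform- and filer-specific (check the installation guide and your NAS vendor’s notes), but a Linux /etc/fstab entry along these lines is typical; the filer name and paths here are hypothetical:

    # Shared NAS volume for database files, mounted identically on every node
    nas01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0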

Vendor CFS and LVMs

If you’re considering a vendor CFS or LVM, you’ll need to check the Real Application Clusters Installation Guide for your platform and the Certify pages on Oracle Support. A discussion of all the certified cluster file systems is beyond the scope of this article. Pros and cons depend on the specific solution, but some general observations can be made:

Pros: You can store all types of files associated with the instance on the CFS / logical volumes.
Cons: These depend on the specific CFS / LVM. And whatever you choose, you won’t be enjoying the manageability advantages of ASM.

OCFS

OCFS is the Oracle-supplied CFS for Linux and Windows, and the only CFS that can be used on these platforms. The current version of OCFS was designed specifically to store RAC files and is not a full-featured, general-purpose CFS. You can store database, CRS and recovery files on it. The basic idea of a CFS is to share filesystems between the nodes of a cluster.

Pros: Provides a CFS option for Linux and Windows, and is easier to manage than raw devices.
Cons: Not as manageable as NFS or ASM, and, like any CFS, not available on all platforms. The CFS is configured on shared storage, so each node must have access to the storage on which the CFS is mounted. The currently supported CFS solutions are:
* OCFS on Linux and Windows (Oracle)
* DBE/AC CFS (Veritas)
* GPFS (AIX)
* Tru64 CFS (HP Tru64 UNIX)
* QFS (Solaris)
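
As a rough sketch, using an OCFS volume on Linux looks like an ordinary mount once the OCFS packages are installed and the shared device has been formatted with Oracle’s OCFS tools; the device and mount point below are hypothetical:

    # Every node mounts the same shared device at the same mount point
    mount -t ocfs /dev/sdb1 /u02/oradata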

ASM

Oracle recommends ASM for 10g RAC deployments, although CRS files cannot be stored on ASM. In fact, RAC installations using Oracle Database Standard Edition must use ASM.

ASM is a little bit like a logical volume manager and provides many of the benefits of LVMs. But it also provides benefits LVMs don’t: file-level striping/mirroring, and ease of manageability. Instead of running LVM software, you run an ASM instance, a new type of “instance” that largely consists of processes and memory and stores its information in the ASM disks it’s managing. ASM:
* Stripes files rather than logical volumes.
* Enables online disk reconfiguration and dynamic rebalancing.
* Provides adjustable rebalancing speed.
* Provides redundancy on a file basis.
* Supports only Oracle files.
* Is cluster aware.
* Is automatically installed as part of the base code set.
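
To give a flavor of those points, here is a minimal sketch of creating and growing a disk group from the ASM instance; the disk paths and disk group name are hypothetical:

    -- Run while connected to the ASM instance (INSTANCE_TYPE=ASM), not the database
    CREATE DISKGROUP data NORMAL REDUNDANCY
      DISK '/dev/raw/raw1', '/dev/raw/raw2';

    -- Add a disk later; ASM redistributes (rebalances) the data online,
    -- at an adjustable speed (POWER 0-11 in 10g)
    ALTER DISKGROUP data ADD DISK '/dev/raw/raw3' REBALANCE POWER 5;

    -- The database then refers to ASM storage by disk group name, e.g.:
    -- CREATE TABLESPACE app_data DATAFILE '+DATA' SIZE 500M;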

Pros: File-level striping and mirroring; ease of manageability through Oracle syntax and OEM.
Cons: ASM files can only be managed through an Oracle application such as RMAN. This can be a weakness if you prefer third-party backup software or simple backup scripts. Cannot store CRS files or database software.