Until Oracle finally releases its much-awaited Universal FileSystem, the only way to install grid infrastructure on shared storage is still OCFS2, which you may find useful as a regular cluster filesystem too.

Download the RPMs for Red Hat from
http://oss.oracle.com/projects/ocfs2/

For a 64-bit platform, you will need these:

(Run uname -r to check your kernel version and platform; the kernel-module RPM has to match your running kernel exactly. See the sample output after the list.)

ocfs2-2.6.18-128.el5-1.4.2-1.el5.x86_64.rpm
ocfs2-tools-1.4.2-1.el5.x86_64.rpm
ocfs2console-1.4.2-1.el5.x86_64.rpm
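
For instance, on a matching x86_64 box, uname should report exactly the kernel version that is embedded in the kernel-module RPM name (illustrative output):

# uname -r
2.6.18-128.el5
# uname -m
x86_64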

# rpm -Uvh ocfs2-tools-1.4.2-1.el5.x86_64.rpm ocfs2-2.6.18-128.el5-1.4.2-1.el5.x86_64.rpm ocfs2console-1.4.2-1.el5.x86_64.rpm
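
To double-check that all three packages installed cleanly, query the RPM database (the package names follow from the files above):

# rpm -qa | grep -i ocfs2
ocfs2-2.6.18-128.el5-1.4.2-1.el5
ocfs2-tools-1.4.2-1.el5
ocfs2console-1.4.2-1.el5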

You might have to install pygtk2 and vte first:
# yum install vte.x86_64
# yum install pygtk2.x86_64

Contrary to what the install doc states, you will first have to create and edit /etc/ocfs2/cluster.conf by hand before being able to do anything. Note that the file format is strict: stanza headers (cluster:, node:) start in the first column and end with a colon, and each parameter sits on its own indented line.

cluster:
        node_count = 1
        name = ocfs2

node:
        ip_port = 7777
        ip_address = my_cluster_node_1_interconnect_ip_address
        number = 1
        name = my_cluster_node_1_hostname
        cluster = ocfs2
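
On a fresh install, the /etc/ocfs2 directory may not exist yet, so create it before writing the file. A minimal sketch, using vi as an arbitrary editor:

# mkdir -p /etc/ocfs2
# vi /etc/ocfs2/cluster.conf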


Once you've edited the file on the first node, you're still not done. Do a:

# service o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Starting O2CB cluster ocfs2: OK
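
You can check the state of the stack at any point with the init script's status action. The output below is a typical example from an ocfs2-tools 1.4 system; your exact lines may differ:

# service o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online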

Only then may you start the graphical ocfs2 console:

# ocfs2console

In the GUI, go to Edit -> Add node, add your second node with its interconnect IP address, and validate.

Go to Edit -> Propagate Configuration.
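
Propagation essentially pushes /etc/ocfs2/cluster.conf to the other nodes (the console uses ssh under the hood), so if the GUI step fails you can copy the file yourself. Make sure /etc/ocfs2 exists on the target first; the hostname below is the placeholder used in the config:

# scp /etc/ocfs2/cluster.conf my_cluster_node_2_hostname:/etc/ocfs2/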

By now, you should see the following configuration on both of your nodes:

node:
        ip_port = 7777
        ip_address = my_cluster_node_1_interconnect_ip_address
        number = 1
        name = my_cluster_node_1_hostname
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = my_cluster_node_2_interconnect_ip_address
        number = 2
        name = my_cluster_node_2_hostname
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2



Do a:

# service o2cb configure

on the second node as well.

Check that the service is finally up and running on both nodes:

# ps -ef | grep o2
root     24816   153  0 17:27 ?        00:00:00 [o2net]
root     24891 18206  0 17:27 pts/0    00:00:00 grep o2

Then, you may go on to format the volume you've prepared on your shared storage.

Here, the volume is configured under Linux with Device-Mapper multipath, and is seen under /dev/mapper as VOL1.

# mkfs.ocfs2 -b 4K -C 4K -L "ocfs2volume1" /dev/mapper/VOL1

Here -b is the filesystem block size, -C the cluster size, and -L the volume label.
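
To confirm the volume was formatted with the label you set, ocfs2-tools ships the mounted.ocfs2 detection utility. The output below is illustrative; the UUID will differ on your system:

# mounted.ocfs2 -d
Device              FS     UUID                Label
/dev/mapper/VOL1    ocfs2  <your-uuid-here>    ocfs2volume1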


Then, you may just create a mount point on both nodes on which to mount the volume, /u01/app/ocfs2mounts/grid for example, if you're planning on installing Oracle grid infrastructure.
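
For example, on both nodes:

# mkdir -p /u01/app/ocfs2mounts/grid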

Mount the filesystem on both nodes:

# mount /dev/mapper/VOL1 /u01/app/ocfs2mounts/grid
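
If you want the volume mounted automatically at boot, add a line like this to /etc/fstab on both nodes. The _netdev option delays the mount until the network, and hence the cluster stack, is up; a minimal sketch assuming the device and mount point above:

/dev/mapper/VOL1  /u01/app/ocfs2mounts/grid  ocfs2  _netdev,defaults  0 0

Then make sure the o2cb and ocfs2 init services are enabled at boot, so the cluster stack starts and the fstab entry gets mounted:

# chkconfig o2cb on
# chkconfig ocfs2 on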

Drop me a line, or have a look at the links, if this post has been useful to you.

Happy computing

Nixman.