UNIX tricks and treats


Wednesday, August 25, 2010

A handy command to monitor Linux multipath

Works on: Red Hat 5.3 with QLogic fibre channel cards

Monitoring failing paths on a fibre channel card connected to a SAN on Linux isn't very straightforward.

A handy command to check it in real time would be this one:

watch -n 1 "echo show paths | multipathd -k"

The output would look something like this:

multipathd> hcil    dev  dev_t  pri dm_st   chk_st   next_check

1:0:3:3 sdam 66:96  50  [failed][faulty] XX........ 4/20
1:0:3:4 sdan 66:112 50  [failed][faulty] XX........ 4/20
0:0:0:0 sda  8:0    50  [active][ready]  XXXXXXXX.. 17/20
0:0:0:1 sdb  8:16   10  [active][ready]  XXXXXXXX.. 17/20
0:0:0:2 sdc  8:32   50  [active][ready]  XXXXXXXX.. 17/20


Here, controller 1 is failing, resulting in 4 failed paths out of 8.

"4/20" and "17/20" being the number of secons left till the next check

Leave me a note if this post has been useful to you

Happy computing


Monday, November 16, 2009

Number of physical processors on a Red Hat Linux server

Tools like top display the number of cores, or the number of threads, not the number of physical processors in a server.

To get the number of physical processors, run the following command:

$ cat /proc/cpuinfo | grep "physical id"
physical id     : 0
physical id     : 2
physical id     : 0
physical id     : 2

In this case, we have two physical processors: 0 and 2.
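To turn that into a direct count, the duplicate ids can be filtered out:

```shell
# Count distinct "physical id" values = number of physical processors
grep "physical id" /proc/cpuinfo | sort -u | wc -l
```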

Monday, September 21, 2009

Installing ocfs2 filesystem on RHEL 5.3

Until Oracle finally releases its much-awaited Universal FileSystem, the only way to install grid infrastructure on shared storage is still ocfs2, which you may find useful as a regular cluster filesystem, too.

Download the RPMs for Red Hat from Oracle's OCFS2 project page.

For a 64-bit platform, you will need the ones below (do a uname -r first: the ocfs2 kernel module RPM has to match your kernel release):


# rpm -Uvh ocfs2-tools-1.4.2-1.el5.x86_64.rpm ocfs2-2.6.18-128.el5-1.4.2-1.el5.x86_64.rpm ocfs2console-1.4.2-1.el5.x86_64.rpm

You might have to install pygtk and vte first:
# yum install vte.x86_64
# yum install pygtk2.x86_64

Contrary to what the install doc states, you will first have to edit /etc/ocfs2/cluster.conf by hand before being able to do anything.

cluster:
        node_count = 1
        name = ocfs2

node:
        ip_port = 7777
        ip_address = my_cluster_node_1_interconnect_ip_address
        number = 1
        name = my_cluster_node_1_hostname
        cluster = ocfs2

Once you've edited the file on one of the nodes, you're not done yet. Do a:

# service o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Starting O2CB cluster ocfs2: OK

Only then may you start the graphical ocfs2 console:

# ocfs2console

In the GUI, go to Edit-> Add node, and add your second node, with its interconnect ip address. Validate.

Go to Edit -> Propagate Configuration.

By now, you should see the following configuration on your two nodes:

node:
        ip_port = 7777
        ip_address = my_cluster_node_1_interconnect_ip_address
        number = 1
        name = my_cluster_node_1_hostname
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = my_cluster_node_2_interconnect_ip_address
        number = 2
        name = my_cluster_node_2_hostname
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2

Do a:
# service o2cb configure
on the second node

Check that the service is finally up and running on both nodes:

# ps -ef | grep o2
root     24816   153  0 17:27 ?        00:00:00 [o2net]
root     24891 18206  0 17:27 pts/0    00:00:00 grep o2

Then, you may go on to format the volume you've prepared on your shared storage.

Here, the volume is configured under Linux with Device-Mapper multipath, and is seen under /dev/mapper as VOL1.

# mkfs.ocfs2 -b 4K -C 4K -L "ocfs2volume1" /dev/mapper/VOL1

Then, you may just create a mount point on which to mount the volume on both nodes, /u01/app/ocfs2mounts/grid for example, if you're planning on installing Oracle grid infrastructure.

Mount the filesystem on both nodes

# mount /dev/mapper/VOL1 /u01/app/ocfs2mounts/grid
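If you want the volume mounted automatically at boot, an /etc/fstab entry along these lines works (mount point as in the example above; _netdev defers the mount until the network and the o2cb stack are up):

```
/dev/mapper/VOL1  /u01/app/ocfs2mounts/grid  ocfs2  _netdev,defaults  0 0
```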

Drop me a line, or have a look at the links, if this post has been useful to you.

Happy computing


Friday, September 18, 2009

Discrepancies and gotchas in Oracle 11gR2 grid infrastructure install guide for Linux


- Oracle instructs you to create VIPs on both nodes as a preinstall task.
However, if you do so, the Oracle grid infrastructure installer will tell you the VIP addresses are already in use.

- Even though you have set up passwordless ssh connectivity between two RAC nodes, the installer keeps telling you this is not the case. I guess it has something to do with Oracle using rsa1. I gave up and gave both my oracle users the same password, and clicked on "setup", and let the installer do it for me. Everything went fine afterwards.

- /usr/sbin/oracleasm enable and disable have been deprecated on RHEL 5.3.
You have to use chkconfig oracleasm on.
If you fail to do so, asmlib is not loaded upon reboot, and your voting disks and OCR end up corrupted.

- If you use ACFS, you have to use different ORACLE_BASE directories for the Oracle grid infrastructure user (e.g. grid: /u01/app/grid/) and the Oracle database user (e.g. oracle: /u01/app/oracle/).
The install doc is not clear on this point: it only states that the ORACLE_HOME directories (e.g. /u01/app/11.2.0/grid/ for grid and /u01/app/oracle/acfsmounts/orahome1/ for oracle) have to be different, while the ORACLE_BASE seems to be shared.

- Even though you can set up a shared ORACLE_HOME through ACFS for the database binaries, you still have to rely on ocfs2 if you want to have the Oracle grid infrastructure binaries on a shared filesystem.

- You absolutely have to be patient and wait for the root.sh script to finish on your first node (it can last half an hour) before you execute it on your other nodes. Otherwise, your installation will fail miserably.

A complete RAC installation guide for Oracle 11gR2 on RHEL 5.3 with multipath will follow soon.

Wednesday, September 16, 2009

Mapping Oracle ASM volumes to disks and devices on the Linux side:

/etc/init.d/oracleasm listdisks




sqlplus / as sysdba

SQL> show parameter asm_diskstring

NAME            TYPE    VALUE
--------------- ------- ----------------------
asm_diskstring  string  /dev/oracleasm/disks/*

SQL> select path from v$asm_disk;


ls -lsa /dev/oracleasm/disks
total 0
0 drwxr-xr-x 1 root   root 0     avr 18 12:51 .
0 drwxr-xr-x 1 root   root 0     avr 18 12:51 ..
0 brw-rw---- 1 oracle dba  8, 81 avr 18 12:51 VOL1
0 brw-rw---- 1 oracle dba  8, 97 avr 18 12:51 VOL2

Look at the major/minor numbers (e.g. 8,81) and find them in /dev:

ls -lsa /dev | grep " 8,"

0 brw-rw---- 1 root floppy 8, 80 Jun 24 2004 sdf
0 brw-rw---- 1 root disk   8, 81 Jun 24 2004 sdf1 --> VOL1 is backed by /dev/sdf1
0 brw-rw---- 1 root disk   8, 90 Jun 24 2004 sdf10

Or cat /proc/partitions to see the major/minor numbers.
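The whole lookup can be scripted. A sketch, assuming the ASMLib default disk directory /dev/oracleasm/disks:

```shell
# For each ASMLib disk, resolve its major/minor numbers and look up the
# matching partition name in /proc/partitions
for d in /dev/oracleasm/disks/*; do
    [ -b "$d" ] || continue                   # skip if no ASMLib disks exist
    majmin=$(stat -c '%t:%T' "$d")            # hex major:minor of the device
    maj=$((0x${majmin%%:*})); min=$((0x${majmin##*:}))
    part=$(awk -v M="$maj" -v m="$min" '$1 == M && $2 == m {print $4}' /proc/partitions)
    echo "$d -> $maj,$min -> /dev/$part"
done
```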

Wednesday, May 28, 2008

Using Pen load balancer as a port-forwarding proxy

Suppose you're a Paris-based firm that has several databases spread across different locations.

For example:
a) several Oracle databases listening on port 1521 on your own network,
b) one Oracle database listening on port 1521 in Turku, Finland, reached through a VPN tunnel,
c) one Oracle database listening on port 1521 in Toulouse, reached through a leased line,
d) one Oracle database listening on port 21521 on a public internet WAN port in Tanger, with just source-address filtering as security,
e) plus about a dozen other databases in different parts of the world at your clients' sites.

You have a partner providing an extra service to your clients, and he has to connect in real time to all of your databases. He doesn't want to spend money on a network connection to each and every one of your clients. He proposes to pay for a leased line to your Paris site, and you will do the dispatching.
Of course, you don't want him to know too much about your network, so you will restrict his access to a single address, which will be a firewall at your end of the line between your two sites.

The firewall has an outside interface address, and an inside address (the one on your network).

Providing connectivity to the (a) databases on your local network is pretty easy: you just have to add a port-forwarding rule and an access list to your router.

For example:
port 2001 on the outside interface ---> port 1521 on server1,
port 2002 on the outside interface ---> port 1521 on server2,
port 2003 on the outside interface ---> port 1521 on server3,
and so on...
Then, you just tell your partner to configure his tnsnames.ora to point at your firewall's outside address: port 2001 for server1, port 2002 for server2, and so on...

However, forwarding the ports to the external (b), (c) and (d) databases is another affair.

Luckily, Pen is there for you. It was designed as a load-balancing piece of software for server farms, but its features allow it to be used as a port-forwarding proxy, which is what we need in this case. It is available prepackaged for rpm- as well as deb-based Linux distros, or as GPL'ed source code. You may learn more about its numerous features on its website: http://siag.nu/pen/

All you need is a standard PC on your network with a Linux distro, let's say Debian, installed on it, as well as one (yes, only one) NIC.

Do an apt-get install pen (or an rpm -Uvh pen on an rpm-based distro).

Let's suppose you've given this computer an address on your network.
It has to know the routes to reach the Turku-, Toulouse- and Tanger-based servers, and of course the route to reach your partner who wants to connect to them. It also has to be allowed to reach them on the ports on which they are listening (i.e. 1521, 1521 and 21521 respectively).

All you need now is to write a little snippet of shell code in a file that you would call for example port-fwrd.sh:


#!/bin/sh
# The addresses below are placeholders -- substitute your servers' real addresses

# This is for the Turku-based database
pen 2011 TURKU_DB_ADDRESS:1521

# This is for the Toulouse-based database
pen 2012 TOULOUSE_DB_ADDRESS:1521

# This is for the Tanger-based database
pen 2013 TANGER_DB_ADDRESS:21521


Make port-fwrd.sh executable with chmod, and launch it: ./port-fwrd.sh
Have it started from your init scripts at runlevel 3, so that it gets executed when your machine reboots.

All you have to do now on your firewall is to forward:
port 2011 on the outside interface ---> port 2011 on the pen machine,
port 2012 on the outside interface ---> port 2012 on the pen machine,
port 2013 on the outside interface ---> port 2013 on the pen machine,

and tell your partner to configure his tnsnames.ora to reach:
Turku on your firewall's outside address, port 2011,
Toulouse on your firewall's outside address, port 2012,
Tanger on your firewall's outside address, port 2013.
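On the partner's side, the tnsnames.ora entries would then look something like this (the host value and service name are placeholders):

```
TURKU =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = firewall_outside_address)(PORT = 2011))
    (CONNECT_DATA = (SERVICE_NAME = turkudb))
  )
```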

Beautiful and simple, ain't it?

Note: of course, if the server at one of the locations doesn't support shared sockets (as is the case with, for example, a Windows 2000 Server failsafe cluster), you won't be able to use port forwarding, since the answering port on the target server will be a dynamic one, and thus unpredictable.

Happy computing.

Drop me a comment if this post has been useful to you, or if you see any reason for add-on or modification.