UNIX tricks and treats


Wednesday, April 7, 2010

Multiplexing the Oracle control file under ASM


1) Check the name of the current control file:

SQL> show parameter control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------------------------------------
control_files                        string      +FRA/MYDB/controlfile/current.256.715620719


SQL> shutdown immediate

SQL> startup nomount


2) Copy the control file with RMAN:

$ rman target /

RMAN> restore controlfile to '+DATA' from '+FRA/MYDB/controlfile/current.256.715620719';

Starting restore at 07-APR-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=272 device type=DISK

channel ORA_DISK_1: copied control file copy
Finished restore at 07-APR-10


3) Find the name of the new control file with asmcmd:

ASMCMD> ls -lsa +DATA/MYDB/controlfile

Type         Redund  Striped  Time             Sys  Block_Size  Blocks    Bytes     Space  Name
CONTROLFILE  UNPROT  FINE     APR 07 14:00:00  Y         16384     595  9748480  16777216  none => current.283.715704921


4) Update the init parameters to include the new control file:

SQL> alter system set control_files='+DATA/MYDB/controlfile/current.283.715704921','+FRA/MYDB/controlfile/current.256.715620719' scope=spfile;


5) Restart the database and verify:

SQL> shutdown immediate

SQL> startup

SQL> show parameter control_files;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      +DATA/MYDB/controlfile/curren
                                                 t.283.715704921, +FRA/MYDB/co
                                                 ntrolfile/current.256.7156207
                                                 19
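
As a final cross-check from the database side (a minimal sketch; V$CONTROLFILE is the standard view listing the control files the instance has open), both copies should now appear:

SQL> select name from v$controlfile;

NAME
--------------------------------------------------
+DATA/MYDB/controlfile/current.283.715704921
+FRA/MYDB/controlfile/current.256.715620719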



Drop me a note if this post has been useful to you.

Have a good day.


Nixman

Monday, March 22, 2010

OCR backup purge not working on Oracle 11gR2 clusters

In an Oracle 11gR2 cluster, the purge of the automatic OCR backups does not work (since version 10.2.0.4) if the *.ocr files located under $ORA_CRS_HOME/cdata/<cluster_name> (e.g. /u01/app/11.2.0/grid/cdata/inbdor0809-rac/) have the wrong permissions. As a result, the *.ocr files with random names are never renamed to backupXX.ocr and are never purged, eventually filling up the partition.

ocrconfig -showbackup does show the correct backup date, but the backupXX.ocr files carry a much older date.

A large number of XXXXXXXXX.ocr files piles up under $ORA_CRS_HOME/cdata/<cluster_name>.


# ls -rtl $ORA_CRS_HOME/cdata/<cluster_name>

-rw------- 1 grid oinstall 7081984 jan 18 22:24 day.ocr
-rw------- 1 grid oinstall 7081984 jan 19 02:24 day_.ocr
-rw------- 1 grid oinstall 7081984 jan 19 06:24 backup02.ocr
-rw------- 1 grid oinstall 7081984 jan 19 10:24 backup01.ocr
-rw------- 1 grid oinstall 7081984 jan 19 14:24 backup00.ocr

...

-rw------- 1 root root 7413760 mar 19 22:06 40778839.ocr
-rw------- 1 root root 7413760 mar 20 02:06 26378630.ocr
-rw------- 1 root root 7413760 mar 20 06:06 11332652.ocr
-rw------- 1 root root 7413760 mar 20 10:06 35215677.ocr
-rw------- 1 root root 7413760 mar 20 14:06 41977816.ocr
-rw------- 1 root root 7413760 mar 20 18:06 18335174.ocr
-rw------- 1 root root 7413760 mar 20 22:06 90743999.ocr
-rw------- 1 root root 7413760 mar 21 02:06 20182690.ocr
-rw------- 1 root root 7413760 mar 21 06:06 28125568.ocr
-rw------- 1 root root 7413760 mar 21 10:06 20121708.ocr
-rw------- 1 root root 7413760 mar 21 14:06 34916120.ocr
-rw------- 1 root root 7413760 mar 21 18:06 24068304.ocr



Solution:

chown root:root $ORA_CRS_HOME/cdata/<cluster_name>/*.ocr

The XXXXXXXX.ocr files can then be deleted.
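
Before deleting them, you can list only the randomly named leftovers (a small sketch; the regex simply matches the purely numeric names shown above):

# ls $ORA_CRS_HOME/cdata/<cluster_name> | grep -E '^[0-9]+\.ocr$'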

Reference: Metalink Note 741271.1

Tuesday, November 17, 2009

STREAMS alter table move bug


A very annoying STREAMS bug, which should be corrected in the 11gR2 release, is the failure of STREAMS to keep up with an alter table ... move tablespace command.

If your table contains a LOB and you want to do some reorganization through a move, you will very likely hit the bug and receive the following error message in your alert.log:

ORA-26744: STREAMS capture process "STREAMS_CAPTURE" does not support "OWNER"."TABLE_NAME" because of the following reason:
ORA-26773: Invalid data type for column "malformed redo"


No workaround exists, except excluding your table from your STREAMS propagation.

Reimporting the table with a new flashback SCN won't work. You have to reimplement the whole STREAMS process to get back on your feet.

Metalink reference: Bug 5623403.

Monday, November 16, 2009

Number of physical processors on a Red Hat Linux server

Tools like top display the number of cores, or the number of threads, but not the number of physical processors in a server.


To get the number of physical processors, run the following command:

$ cat /proc/cpuinfo | grep "physical id"
physical id     : 0
physical id     : 2
physical id     : 0
physical id     : 2



In this case, we have two physical processors: 0 and 2.
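
To get the count directly, deduplicate the ids (a one-liner using standard coreutils):

$ grep "physical id" /proc/cpuinfo | sort -u | wc -l
2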

Friday, October 9, 2009

Special STREAMS configuration for RAC

On a RAC, if you have configured propagation through a dblink, you must designate one of the nodes as the default owner of the STREAMS capture and apply queues. Otherwise, you may randomly run into an ORA-25315 error.

The best option, on 10.2 or later configurations, is to set the queue_to_queue parameter to TRUE when setting up propagation with DBMS_PROPAGATION_ADM.CREATE_PROPAGATION; propagation then no longer goes through the dblink. Note that this parameter cannot be changed afterwards.

Ref: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_prop_a.htm
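
For reference, here is a minimal sketch of such a setup; the owner, queue names, and dblink below are placeholders, not values from the original post:

BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'MY_PROPAGATION',
    source_queue       => 'STRMADMIN.MY_CAPTURE_QUEUE',
    destination_queue  => 'STRMADMIN.MY_APPLY_QUEUE',
    destination_dblink => 'DEST_DB',
    -- queue-to-queue propagation; cannot be changed afterwards
    queue_to_queue     => TRUE);
END;
/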

Finding the names of the STREAMS queues

set lines 150

SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, t.OWNER_INSTANCE
FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
WHERE t.OBJECT_TYPE = 'SYS.ANYDATA'
AND q.QUEUE_TABLE = t.QUEUE_TABLE
AND q.OWNER = t.OWNER;

Setting a default owner and a secondary owner

DBMS_AQADM.ALTER_QUEUE_TABLE (
    queue_table        IN VARCHAR2,
    comment            IN VARCHAR2 DEFAULT NULL,
    primary_instance   IN BINARY_INTEGER DEFAULT NULL,
    secondary_instance IN BINARY_INTEGER DEFAULT NULL);

Example:

BEGIN
  DBMS_AQADM.ALTER_QUEUE_TABLE(
    queue_table        => 'MON_QUEUE_TABLE_APPLY',
    primary_instance   => 1,
    secondary_instance => 2);
END;
/

BEGIN
  DBMS_AQADM.ALTER_QUEUE_TABLE(
    queue_table        => 'MON_QUEUE_TABLE_CAPTURE',
    primary_instance   => 1,
    secondary_instance => 2);
END;
/

BEGIN
  DBMS_AQADM.ALTER_QUEUE_TABLE(
    queue_table        => 'SCHEDULER$_JOBQTAB',
    primary_instance   => 1,
    secondary_instance => 2);
END;
/
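
To verify the result, you can query the data dictionary (a quick check; DBA_QUEUE_TABLES exposes the instance columns set above):

SELECT queue_table, primary_instance, secondary_instance, owner_instance
FROM dba_queue_tables
WHERE queue_table LIKE 'MON_QUEUE_TABLE%';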

Monday, September 21, 2009

Installing ocfs2 filesystem on RHEL 5.3


Until Oracle finally releases its much-awaited Universal FileSystem, the only way to install grid infrastructure on shared storage is still ocfs2, which you may also find useful as a regular cluster filesystem.

Download the rpms for Red Hat from
http://oss.oracle.com/projects/ocfs2/

For a 64-bit platform, you will need these ones:

(Do a uname -r to check which one matches your platform.)

ocfs2-2.6.18-128.el5-1.4.2-1.el5.x86_64.rpm
ocfs2-tools-1.4.2-1.el5.x86_64.rpm
ocfs2console-1.4.2-1.el5.x86_64.rpm

# rpm -Uvh ocfs2-tools-1.4.2-1.el5.x86_64.rpm ocfs2-2.6.18-128.el5-1.4.2-1.el5.x86_64.rpm ocfs2console-1.4.2-1.el5.x86_64.rpm

You might have to install pygtk and vte first:
# yum install vte.x86_64
# yum install pygtk2.x86_64

Contrary to what the install doc states, you will first have to edit /etc/ocfs2/cluster.conf by hand before you can do anything.

cluster:
       node_count = 1
       name = ocfs2
node:
        ip_port = 7777
        ip_address = my_cluster_node_1_interconnect_ip_address
        number = 1
        name = my_cluster_node_1_hostname
        cluster = ocfs2


Once you've edited the file on one of the nodes, you're not done yet. Do a:

# service o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Starting O2CB cluster ocfs2: OK

Only then may you start the graphical ocfs2 console:

# ocfs2console

In the GUI, go to Edit -> Add node, add your second node with its interconnect IP address, and validate.

Go to Edit -> Propagate Configuration.

By now, you should see the following configuration on your two nodes.

node:
        ip_port = 7777
ip_address = my_cluster_node_1_interconnect_ip_address
        number = 1
        name = my_cluster_node_1_hostname
        cluster = ocfs2

node:
        ip_port = 7777
ip_address = my_cluster_node_2_interconnect_ip_address
        number = 2
        name = my_cluster_node_2_hostname
        cluster = ocfs2

cluster:
       node_count = 2
       name = ocfs2



Do a:
# service o2cb configure
on the second node

Check that the service is finally up and running on both nodes:

# ps -ef | grep o2
root     24816   153  0 17:27 ?        00:00:00 [o2net]
root     24891 18206  0 17:27 pts/0    00:00:00 grep o2

Then, you may go on formatting the volume you've prepared on your shared storage.

Here, the volume is configured under Linux with Device-Mapper multipath, and is seen under /dev/mapper as VOL1.

# mkfs.ocfs2 -b 4K -C 4K -L "ocfs2volume1" /dev/mapper/VOL1


Then, you may just create a mount point on which to mount the volume on both nodes, /u01/app/ocfs2mounts/grid for example, if you're planning on installing Oracle grid infrastructure.

Mount the filesystem on both nodes:

# mount /dev/mapper/VOL1 /u01/app/ocfs2mounts/grid
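
To make the mount survive a reboot, you can add the volume to /etc/fstab and enable the init scripts shipped with the rpms; a sketch, assuming the stock o2cb and ocfs2 services (the _netdev option delays the mount until networking is up):

# cat >> /etc/fstab <<EOF
/dev/mapper/VOL1  /u01/app/ocfs2mounts/grid  ocfs2  _netdev,defaults  0 0
EOF
# chkconfig o2cb on
# chkconfig ocfs2 on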

Drop me a line, or have a look at the links, if this post has been useful to you.

Happy computing

Nixman.






Saturday, September 19, 2009

Oracle 11gr2 RAC on Red Hat Linux 5.3 install guide part1


Oracle 11gr2 RAC on Red Hat Linux 5.3 install guide part1: Installing the grid infrastructure:

Oracle 11gR2 has been released for Linux, and the installation has changed somewhat from previous versions, including 11gR1. In this step-by-step guide, we will lead you through a real-life Oracle Real Application Clusters (RAC) installation, its novelties, its incompatibilities, and the caveats of the Oracle install documentation.

The installation process is divided into two parts: the grid infrastructure installation, which now includes not only the clusterware but also ASM (moved there from the regular database installation), and the database installation itself. This stems from the fact that ASM now supports voting disks and OCR files, and you are no longer required (actually, it's now discouraged) to place the voting disks and OCR files on raw devices.

Grid infrastructure also installs the ACFS cluster file system for you, which allows you to share the ORACLE_HOME of your database installation between all the nodes of your RAC cluster. However, it doesn't allow you to share the grid infrastructure ORACLE_HOME between the nodes. For that, you would need to install the grid infrastructure binaries on an ocfs2 filesystem, but that is neither supported by Oracle nor working. Last year, Oracle had promised an Oracle Universal File System (UFS), and it is a bit disappointing to see that ACFS is not yet what we expected.

Download the necessary files:

You will need the grid infrastructure disk, as well as the two database disks, from Oracle's site. Download the deinstall disk as well, as Oracle Universal Installer no longer supports the deinstallation of the binaries; everything has moved to this 300 MB-plus disk.

You will also need the three asmlib files from OTN, which are downloadable here.

Do a uname -rm on your platform in order to find out which ones are the right ones for you.

The oracle-validated rpm will also be useful to ensure you have all the necessary rpms installed on your server.

Setting the system parameters and prerequisites:

Nothing much new here. Simply follow the install guide's instructions. The installer will check anyway whether you have set the parameters right, and will even generate a fixup script for most of the prerequisites.

# cat >> /etc/sysctl.conf <<EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmax = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF

# sysctl -p

An extremely intriguing requirement is the necessity to run ntpd with the -x option, which prevents any brutal adjustment of the system clock for offsets under 600 s (instead of the default 128 ms). This is a workaround for a RAC bug that was supposed to have been corrected in release 10.2.0.3... Well, actually, it may not be a bug, but a feature for any cluster that needs synchronization. The downside of the -x option is that if your hardware is down for a month and the system clock goes off by half an hour, it will take days for it to slowly adjust back to network time.

Be sure to do a chkconfig oracleasm on after setting up oracleasm on RHEL 5.3. Otherwise, you will corrupt your voting disks and OCR upon the first reboot. The install guide has simply forgotten to mention that oracleasm enable/disable have been deprecated on this platform.
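
For instance (a minimal sketch; chkconfig --list simply confirms the runlevels in which asmlib will load):

# chkconfig oracleasm on
# chkconfig --list oracleasm
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off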

Don't bother setting up VIPs or passwordless ssh connectivity, contrary to what the install guide instructs you to do: the installer won't appreciate your initiative, and you will have to set them up the way Oracle wants. Simply give the same password to your grid and oracle users on both nodes.

Creating the UNIX groups, users, and directories:

Create two separate users (grid and oracle for example), one for grid infrastructure installation, and one for the database installation, with separate ORACLE_BASE and ORACLE_HOME directories.

A new set of three groups has been introduced to manage ASM. The grid user should be a member of them.

A change to OFA is that the grid user's ORACLE_HOME cannot be under its ORACLE_BASE directory; it must be on a separate path.

Here, we will point the oracle user's home to a shared ACFS mount. We'll mount that filesystem later, after grid infrastructure's installation, once ACFS is available. Indeed, ACFS is built on top of ASM, which in turn is installed as part of grid infrastructure; hence the separation of the grid infrastructure and database installations.

(As a footnote: you may change u01 to get27, for example, and still be OFA-compliant)

# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/groupadd oper
# /usr/sbin/groupadd asmadmin
# /usr/sbin/groupadd asmdba
# /usr/sbin/groupadd asmoper
# /usr/sbin/groupadd orauser

# /usr/sbin/usermod -g oinstall -G dba,asmdba oracle
# /usr/sbin/usermod -g oinstall -G dba,asmdba,oper,oinstall,asmadmin grid

# mkdir /u01/app/11.2.0/grid
# mkdir /u01/app/grid
# mkdir /u01/app/acfsmounts/oracle
# chown -R grid:oinstall /u01/app
# chmod -R 775 /u01/app
# chown -R oracle:oinstall /u01/app/acfsmounts
# chmod -R 775 /u01/app/acfsmounts

# cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

To be continued ... stay tuned.

Friday, September 18, 2009

Discrepancies and gotchas in the Oracle 11gR2 grid infrastructure install guide for Linux


Discrepancies in Oracle 11gR2 grid infrastructure install guide for Linux:

- Oracle instructs you to create VIPs on both nodes as a preinstall task.
However, if you do so, the Oracle grid infrastructure installer will tell you the VIP addresses are already in use.

- Even though you have set up passwordless ssh connectivity between two RAC nodes, the installer keeps telling you this is not the case. I guess it has something to do with Oracle using rsa1. I gave up, gave both my oracle users the same password, clicked on "setup", and let the installer do it for me. Everything went fine afterwards.

- /usr/sbin/oracleasm enable and disable have been deprecated on RHEL 5.3.
You have to use chkconfig oracleasm on.
If you fail to do so, asmlib is not loaded upon reboot, and your voting disks and OCR get corrupted.

- If you use ACFS, you have to use different ORACLE_BASE directories for the Oracle grid infrastructure user (e.g. grid: /u01/app/grid/) and the Oracle database user (e.g. oracle: /u01/app/oracle/).
The install doc is not so clear on this, as only the ORACLE_HOME directories (e.g. /u01/app/11.2.0/grid/ for grid and /u01/app/oracle/acfsmounts/orahome1/ for oracle) have to be different, the ORACLE_BASE seeming to be a single one.

- Even though you can set up a shared ORACLE_HOME through ACFS for the database binaries, you still have to rely on ocfs2 if you want to have the Oracle grid infrastructure binaries on a shared filesystem.

- You absolutely have to be patient and wait for the root.sh script to finish on your first node (it can last half an hour) before you execute it on your other nodes. Otherwise, your installation will miserably fail.

A complete RAC installation guide for Oracle 11gR2 on RHEL 5.3 with multipath will follow soon.

Wednesday, September 16, 2009

Enabling server-side failover, TAF and load-balancing on Oracle 10gR2 RAC


Sometimes, you don't have the possibility to enable Transparent Application Failover (TAF) on the client side (in the tnsnames.ora file, for example).

That's where this new feature of Oracle 10gR2 RAC comes in handy:

You can enable both failover and load-balancing on the server side, by executing a simple dbms_service procedure.

EXECUTE DBMS_SERVICE.MODIFY_SERVICE (service_name => 'MY_SERVICE_NAME'
, aq_ha_notifications => TRUE
, failover_method => DBMS_SERVICE.FAILOVER_METHOD_BASIC
, failover_type => DBMS_SERVICE.FAILOVER_TYPE_SELECT
, failover_retries => 60
, failover_delay => 10
, clb_goal => DBMS_SERVICE.CLB_GOAL_LONG);
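
You can then check the resulting service configuration from the data dictionary (a quick query; these columns are exposed by the 10gR2 DBA_SERVICES view):

SQL> SELECT name, failover_method, failover_type, failover_retries, failover_delay, clb_goal
     FROM dba_services
     WHERE name = 'MY_SERVICE_NAME';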


Disabling the feature is just as simple:

begin
  dbms_service.modify_service(
    service_name    => 'MY_SERVICE_NAME',
    failover_type   => DBMS_SERVICE.FAILOVER_TYPE_NONE,
    failover_method => DBMS_SERVICE.FAILOVER_METHOD_NONE);
end;
/


For complete documentation, you can check:

http://download.oracle.com/docs/cd/B19306_01/rac.102/b14197/hafeats.htm#BABIAICG



Happy computing,

Nixman

Purging completed or orphaned Data Pump jobs

Ref: Metalink Doc ID 336014.1

SQL> SET lines 200
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state

SQL> SELECT owner_name, job_name, operation, job_mode, state, attached_sessions FROM dba_datapump_jobs WHERE job_name NOT LIKE 'BIN$%' order by 1,2;

EXPIMP SYS_EXPORT_TABLE_02 EXPORT TABLE NOT RUNNING 0
EXPIMP SYS_EXPORT_FULL_01 EXPORT FULL NOT RUNNING 0

Only consider the jobs in "NOT RUNNING" state. Find out where their master tables are:

SQL> SELECT o.status, o.object_id, o.object_type,
o.owner||'.'||object_name "OWNER.OBJECT"
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner=j.owner_name AND o.object_name=j.job_name
AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;

VALID 85215 TABLE EXPIMP.SYS_EXPORT_TABLE_02
VALID 85162 TABLE EXPIMP.SYS_EXPORT_FULL_01

Drop the master tables in question:

SQL> DROP TABLE EXPIMP.sys_export_table_02;

SQL> DROP TABLE EXPIMP.sys_export_full_01;

Cleanly killing a non-interactive Data Pump job

Ref: Metalink Doc ID 336014.1

SQL> SET lines 200
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state

SQL> SELECT owner_name, job_name, operation, job_mode, state, attached_sessions FROM dba_datapump_jobs WHERE job_name NOT LIKE 'BIN$%';

EXPIMP SYS_EXPORT_FULL_05 EXPORT FULL RUNNING 0

SQL> connect expimp/expimp

SQL> DECLARE
  h1 number;
BEGIN
  h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_FULL_05','EXPIMP');
  DBMS_DATAPUMP.STOP_JOB(h1);
END;
/

The session goes into "STOP PENDING" for a while, then into "NOT RUNNING".

Mapping Oracle ASM volumes to disks and devices on the Linux side:

/etc/init.d/oracleasm listdisks

VOL1
VOL2

export ORACLE_SID=+ASM

sqlplus / as sysdba

SQL> show parameter asm_diskstring

NAME            TYPE    VALUE
--------------- ------- ----------------------
asm_diskstring  string  /dev/oracleasm/disks/*

SQL> select path from v$asm_disk;

/dev/oracleasm/disks/VOL1

ls -lsa /dev/oracleasm/disks

total 0
0 drwxr-xr-x 1 root   root 0     avr 18 12:51 .
0 drwxr-xr-x 1 root   root 0     avr 18 12:51 ..
0 brw-rw---- 1 oracle dba  8, 81 avr 18 12:51 VOL1
0 brw-rw---- 1 oracle dba  8, 97 avr 18 12:51 VOL2

Note the major/minor numbers (e.g. 8, 81) and find them in /dev:

ls -lsa /dev | grep " 8,"

0 brw-rw---- 1 root floppy 8, 80 Jun 24 2004 sdf
0 brw-rw---- 1 root disk   8, 81 Jun 24 2004 sdf1   --> VOL1 sits on /dev/sdf1
0 brw-rw---- 1 root disk   8, 90 Jun 24 2004 sdf10

Alternatively, cat /proc/partitions also shows the major/minor numbers.
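
Depending on your oracleasm-support version, querydisk can also resolve the mapping directly; a sketch (the exact output format varies between versions):

# /etc/init.d/oracleasm querydisk VOL1
Disk "VOL1" is a valid ASM disk on device [8, 81]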

Tuesday, February 24, 2009

Verbose boot for Solaris 10

Note: The English translation of this post can be found here.

By default, Solaris 10 boots quietly. In other words, you no longer see the boot log on the console.


This behaviour, which stems from the use of svcadm, can easily be changed with the svccfg command.

Enabling verbose boot on Solaris 10:

  # /usr/sbin/svccfg -s system/svc/restarter:default
  svc:/system/svc/restarter:default> addpg options application
  svc:/system/svc/restarter:default> setprop options/logging = astring: verbose
  svc:/system/svc/restarter:default> listprop
...
options/logging            astring  verbose
...

  svc:/system/svc/restarter:default> exit

Returning to quiet boot on Solaris 10:

  # svccfg -s system/svc/restarter:default
  svc:/system/svc/restarter:default> delpg options
  svc:/system/svc/restarter:default> listprop
  svc:/system/svc/restarter:default> exit


Drop me a note if this post has been useful to you.

Have a good day.


Nixman

Friday, January 30, 2009

OS and Browser Statistics

DECEMBER 2008:


Operating systems:

Windows         745   93%
Linux            42   5.2%
Macintosh        12   1.5%
SunOS             2   0.2%


Browsers:

Explorer        302   40%
Firefox         263   34.8%
Explorer x.x    137   18.1%
Opera            21   2.8%
Safari           15   2%
Explorer 5.x      8   1.1%
Other Mozilla     6   0.8%
Explorer 4.x      2   0.3%
Konqueror         1   0.1%

Monday, January 12, 2009

Turning on verbose boot logging for Solaris 10

Note: the French translation of this article can be found here.

By default, the Solaris 10 boot is "quiet" on the console.

This behaviour, which stems from the usage of svcadm, may annoy experienced Solaris sysadmins used to previous versions of the OS.

This behaviour can be changed with svccfg.

Turn on verbose boot logging for Solaris 10

  # /usr/sbin/svccfg -s system/svc/restarter:default
  svc:/system/svc/restarter:default> addpg options application
  svc:/system/svc/restarter:default> setprop options/logging = astring: verbose
  svc:/system/svc/restarter:default> listprop
...
options/logging            astring  verbose
...

  svc:/system/svc/restarter:default> exit


Remove verbose Solaris boot logging

  # svccfg -s system/svc/restarter:default
  svc:/system/svc/restarter:default> delpg options
  svc:/system/svc/restarter:default> listprop
  svc:/system/svc/restarter:default> exit

Happy computing.

Drop me a comment if this post has been useful to you, or if you see any reason for add-on or modification.

Alternatively, you could also visit a few links to keep me in business ;-)

Nixman

Wednesday, September 10, 2008

Purging a sendmail mailqueue on AIX

Tested on: IBM AIX 5.2

Sendmail processes may run wild, due to huge process loads or badly configured applications sending automated mails.

When sendmail processes are overloaded, they may clog up the mailqueue and spawn multiple sendmail processes to process it, ultimately consuming most of your server's swap area, degrading performance, or even preventing other applications from running.

Here are the steps needed to stop rogue sendmail processes and cleanly purge the sendmail mailqueue on IBM AIX 5.2. The process is similar on other UNIXes, except for the sendmail stop and start commands, which vary depending on your OS. On Solaris, for example, you would use your own stop and start scripts in /etc/rcX.d/ or /etc/init.d/.

First, find and kill the multiple sendmail processes if they have run amok.

# ps -ef | grep sendmail
 
# kill -9 SENDMAIL_PIDS

Then, stop sendmail cleanly (the commands depend on your OS; this one works only on IBM AIX).

# stopsrc -s sendmail  

You may check the number of messages that are in the queue, which will give you an idea of the time it will take to process the queue:

# sendmail -bp 
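
On most systems, mailq is a synonym for sendmail -bp, so this is equivalent:

# mailq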

Check that there are no longer any sendmail processes running:

# ps -ef | grep sendmail
 
# kill -9 SENDMAIL_PIDS

Rename the current mailqueue to another directory:

# mv /var/spool/mqueue /var/spool/omqueue 

Restart sendmail

# startsrc -s sendmail
0513-059 The sendmail Subsystem has been started. Subsystem PID is 62118
 

Now process the old queue (this may take time, depending on the number of messages to process):

# /usr/sbin/sendmail -oQ/var/spool/omqueue -q -v

Running /var/spool/omqueue/m7HKkOM60666 (sequence XXXX of XXXXX)
Running /var/spool/omqueue/m7HKkOM60666 (sequence XXXX+1 of XXXXX)...
etc... 

Now, you may safely delete all messages in the old queue:

# rm -rf /var/spool/omqueue

Create a new mailqueue directory.

# mkdir /var/spool/mqueue

Stop and start sendmail:

# stopsrc -s sendmail

# startsrc -s sendmail

You're done!

Happy computing.

Drop me a comment if this post has been useful to you, or if you see any reason for add-on or modification.

Nixman

Thursday, August 14, 2008

HR ACCESS user logins and passwords


HR ACCESS stores the user passwords without encryption in the UC10 table. As an Oracle DBA, if you have access to the database instance, all you have to do is issue the following command through SQL*Plus:

 SQL> select cdutil, cdpass from UC10;

CDUTIL   CDPASS
-------- ---------
USER1    PASSWORD1
USER2    PASSWORD2
USER3    PASSWORD3
USER4    PASSWORD4
USER5    PASSWORD5

Happy computing.

Drop me a comment if this post has been useful to you, or if you see any reason for add-on or modification.

Nixman

Wednesday, August 13, 2008

Enabling ALOM Net Management on SunFire servers


On SunFire servers equipped with an ALOM NetMGMT interface (SunFire V210, V240, T1000, T2000...), the LOM can be reached over the network through a dedicated RJ45 port, exactly as through the serial port. You just have to enable network support with the following LOM commands:

setsc if_network true
setsc netsc_ipaddr ADRESSE_IP
setsc netsc_ipnetmask NETMASK
setsc netsc_ipgateway ADRESSE_PASSERELLE
resetsc

Alternatively, you can use the interactive setupsc command.

Note: It seems impossible to set a netmask other than 255.255.255.0.

Note 2: For security reasons, it is of course preferable to access the ALOM through a dedicated subnet.

Then, simply telnet to the ADRESSE_IP address configured above.

To switch between terminal mode and LOM mode, use the following commands:

#.               --> switches to LOM mode

console -f   --> switches to terminal mode in read-write, disconnecting the other sessions.
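
A typical session then looks like this (a sketch; sc> is the ALOM prompt):

$ telnet ADRESSE_IP
sc> console -f
(... you are now on the system console; type #. to drop back to the sc> prompt)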


Drop me a comment if this post has been useful to you.

Have a good day.

Nixman

Adding a non-standard transcoding table to CFT


CFT is a multi-platform secure file transfer tool with mainframe roots. It is used mainly in France for the secure transfer of banking files. It was created by AXWAY and is maintained by SOPRA.

During a transfer between UNIX and MVS, for example, a transcoding from ASCII (UNIX) to EBCDIC (MVS) is necessary.

However, CFT only maintains internally the four transcoding tables from ISO-646 (US7ASCII) to EBCDIC and back.

When transferring a file containing characters outside the 128-character ASCII set (ISO-646, US7ASCII), external transcoding tables must be used between the partners.

Here, we will add a non-standard transcoding table to a CFT sender on UNIX connecting to an MVS receiver.


1)    Before anything else, make sure you have a backup of the existing configuration file!

su - cft
cd $CFT_HOME/config
cp config.txt config.txt-YYYY-MM-DD


2)    Get the new transcoding file and copy it to the server

E.g.: AtoemV2.dat

Copy it under $CFT_HOME/config/.


3)    Stop CFT

cftstop


4)    Update the CFT configuration file:

a)    Add an entry for the new transcoding file:

/*------------------------------------------------------------------------------*/
/* Non-standard transcoding table for MVS partner                              */
/*------------------------------------------------------------------------------*/
cftxlate        id      = ATOEV2,
                direct  = SEND,                                             
                fcode   = ASCII,                                            
                ncode   = EBCDIC,                                           
                fname   = $CFT_HOME/config/AtoemV2.dat

b)    Specify which flow must use this table instead of the DEFAULT tables

Add an xlate = $CFTXLATE_ID line (here: ATOEV2) to the flows that must use it when sending (as a general rule, only the sender is in charge of the transcoding).

E.g.:

/*------------------------------------------------------------------------------*/
/* Example flow for sending to an MVS partner                                  */
/*------------------------------------------------------------------------------*/
cftsend id      = UNIX2MVS,
        ftype   = T,
        frecfm  = V,
        flrecl  = 21000,
        fcode   = ascii,
        ncode   = ebcdic,
        xlate   = ATOEV2,
        parm    = 'ABCD1234',
        faction = none,
        mode    = replace,
        fname   = $CFT_HOME/emet/testunix2mvs.txt


5)    Reinitialize and restart CFT

cd $CFT_HOME/config

cftinit config.txt

Check that there are no rejects...

cftstart


Drop me a comment if this post has been useful to you.

Have a good day.

Nixman

Tuesday, August 12, 2008

Installing and configuring ProFTPD on AIX 5.2


Works on: IBM AIX 5.2

Since the historical IBM AIX ftp server is rather limited in its configuration capabilities, notably regarding chrooted environments, it is sometimes useful to install an alternative ftpd daemon.

1 ) Download coreutils and proftpd from the IBM site:

http://www-03.ibm.com/systems/p/os/aix/linux/toolbox/download.html


2 ) Install coreutils-5.2.1-2.aix5.1.ppc.rpm and proftpd-1.2.8-1.aix5.1.ppc.rpm:

rpm -Uvh coreutils-5.2.1-2.aix5.1.ppc.rpm
rpm -Uvh proftpd-1.2.8-1.aix5.1.ppc.rpm


3 ) Modify /etc/proftpd.conf

####################################################
# This is a basic ProFTPD configuration file (rename it to
# 'proftpd.conf' for actual use.  It establishes a single server
# and a single anonymous login.  It assumes that you have a user/group
# "nobody" and "ftp" for normal operation and anon.

# Nixman's change: we don't want the server to display the proftpd version,
# so we replace ServerName with ServerIdent
ServerIdent     on      "Serveur FTP NIXBLOG.ORG"
# ServerName                    "Serveur FTP NIXBLOG.ORG"
# Nixman's change: set ServerType to inetd instead of standalone
ServerType                      inetd
DefaultServer                   on

# Port 21 is the standard FTP port.
Port                            21

# Umask 022 is a good standard umask to prevent new dirs and files
# from being group and world writable.
Umask                           022

# To prevent DoS attacks, set the maximum number of child processes
# to 30.  If you need to allow more than 30 concurrent connections
# at once, simply increase this value.  Note that this ONLY works
# in standalone mode, in inetd mode you should use an inetd server
# that allows you to limit maximum number of processes per service
# (such as xinetd).
MaxInstances                    30

# Set the user and group under which the server will run.
# Nixman's change: set Group to nobody, as the default group does not exist
# on AIX
User                            nobody
Group                           nobody

# To cause every FTP user to be "jailed" (chrooted) into their home
# directory, uncomment this line.
# Nixman's change: all users in the ftpjail group are chrooted
DefaultRoot ~ ftpjail

# Normally, we want files to be overwriteable.
<Directory />
  AllowOverwrite                on
</Directory>

# Nixman's addition: we want a log of the transfers
Transferlog     /var/adm/xferlog.proftpd

## A basic anonymous configuration, no upload directories.  If you do not
## want anonymous users, simply delete this entire <Anonymous> section.
## Nixman's change: we don't want an anonymous ftp account, so the whole
## section is commented out
#<Anonymous ~ftp>
#  User                         ftp
#  Group                                ftp
#
#  # We want clients to be able to login with "anonymous" as well as "ftp"
#  UserAlias                    anonymous ftp
#
#  # Limit the maximum number of anonymous logins
#  MaxClients                   10
#
#  # We want 'welcome.msg' displayed at login, and '.message' displayed
#  # in each newly chdired directory.
#  DisplayLogin                 welcome.msg
#  DisplayFirstChdir            .message
#
#  # Limit WRITE everywhere in the anonymous chroot
#  <Limit WRITE>
#    DenyAll
#  </Limit>
#</Anonymous>
####################################################

4) Modify /etc/inetd.conf

#ftp     stream  tcp6    nowait  root    /usr/sbin/ftpd         ftpd
ftp     stream  tcp    nowait  root    /usr/sbin/proftpd         proftpd


5) Restart inetd:

refresh -s inetd


6) Modify ftpusers:

Remove the users who are allowed to connect from /etc/ftpusers, if the file was created at installation time.


7) Create the ftpjail group and add the users to be chrooted:

mkgroup ftpjail
vi /etc/group and add the users that must be chrooted.
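
For example, the resulting line in /etc/group could look like this (a sketch; the GID and user names are hypothetical):

ftpjail:!:210:ftpuser1,ftpuser2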


8) Possible rollback:

Simply restore /etc/inetd.conf to its original state:

ftp     stream  tcp6    nowait  root    /usr/sbin/ftpd         ftpd
#ftp     stream  tcp    nowait  root    /usr/sbin/proftpd         proftpd

Then, refresh -s inetd.

To uninstall coreutils and proftpd:
rpm -e proftpd-1.2.8-1
rpm -e coreutils-5.2.1-2


Drop me a comment if this post has been useful to you.

You can also follow a few links to bring me a little revenue ;-).


Nixman
