UNIX tricks and treats


Tuesday, February 26, 2013

Sailfish SDK is out!

The day has finally arrived!

The much-awaited Sailfish SDK is out in alpha release at https://sailfishos.org/

It is currently available for 32- and 64-bit Linux only, but ports to other popular OSes will follow.

Happy computing

Nixman

Wednesday, December 19, 2012

Trying out Sailfish Mer SDK

For all those too impatient to wait for a full Sailfish OS binary SDK release (due in Q1 2013), the Mer project has released good documentation on how to build your own environment under VirtualBox.

And it's here:

https://wiki.merproject.org/wiki/Platform_SDK_on_VirtualBox

You will need a Linux box and some extra RAM, and you're ready to write some Qt apps for Jolla.

Nixman

Sunday, November 25, 2012

New Sailfish demo videos out from Jolla

A few new Sailfish demos from Jolla are out

First the Sailfish SDK demo from Slush:

http://www.youtube.com/watch?feature=player_embedded&v=ZrwYyN-vNVo

Very fast development indeed with this new Qt SDK. Some whiners will of course complain about the lack of an EFL SDK...

The second one, from Jolla's senior designer, is in Finnish only:

http://www.youtube.com/watch?feature=player_embedded&v=NtEbOGuxuig

There are some interesting comments from Jaakko Roppola about the philosophy of the Sailfish UI, which I've tried to translate as accurately as possible:

"The UI has been built around simplicity and ease of use"

"No need to into the application and back off. You can just slide the application icon on the side to execute commands"

"What are the limits? - Basically the physical size (of the screen), and in some cases there is no need for interaction. For example the People application doesn't have any interaction"

"What about the multitasking and its limits? - There's a human limit. We don't want to make a second page with icons. When you have a certain number of applications open, the least used one will not be visible. It will still be running, but won't be visible on the home screen" (From what I've heard, it should be 9 application maximum on the home screen currently)

"You can close the application like this (by pressing them a few seconds and tapping the cross)"

"Is there a context menu? - Yes, and we can demonstrate our Ambiance application at the same time. You get the context menu for the application by swiping from top to bottom from anywhere on the screen. When I release it, it executes the function it has been asigned"

"There have been many questions about the Ambiance: is it not just some eye candy? No, it was just one of the targets we wanted to show. You have to think much further than adapting the colours of the phone to some picture you just find in the gallery."

"There is one thing we have been quite clear about when we've been discussing the UI between ourselves: we don't want any buttons everywhere on the screen. Rather, the navigation is by gestures only."

"For example, here , I can go digging very far in the hierarchy, and just swipe my way back to top. But I still see the hierarchy and just by one gesture I can go back to my home screen to check whether I have network, how much battery I've got left..."

"What about notifications? - If you're not using the phone, the notification will be on the home screeen, but if you're using an app, the notification for a call for example will pop up ,but it won't be persistent in the application window. It will be directly on the screen, and you won't have to search for an SMS somewhere in another application. Basically, everything must be very close anywhere, but everything must NOT be everywhere."


Here's a shorter demo in english by Jaakko:

http://www.youtube.com/watch?v=_c_BqnR_vAM

And a second short demo of multitasking in english:

http://www.youtube.com/watch?v=KHn3qp_E3_A&feature=related


And a third one with the quick glance feature:

http://www.youtube.com/watch?v=bLKN7QdGzWU&feature=related

The bottom line is: a cool, user-friendly interface that you can use single-handed, blindfolded. THAT seems to be the idea behind the Sailfish OS UI. And it's truly open, not like the Android ecosystem where Google basically bullies everyone around.

Nixman

Thursday, November 22, 2012

Jolla's Sailfish OS is out!

Jolla's Sailfish OS was announced yesterday at the Nordic Slush startup event.


As the organizers of the event humorously pointed out, Finland in November certainly isn't California, but the startups have never been better. 

Simultaneously, the jolla.com and sailfishos.org websites were finally opened to the public.
Until now, Jolla had been quite secretive about the project, having only a Twitter and a Facebook account, plus some haphazard interviews with executives like Jussi Hurmola.
Seeing the superb result, a truly multitasking OS that can be operated single-handedly (no politically incorrect pun at Marc Dillon intended) with just a few swiping gestures, I can understand why these guys were busier coding and making their product work rather than putting up communication hype.
You could sense the tension, the emotion of the moment, and a bit of a lack of polished PR preparation in the keynote.
That's all right guys, you really rock, far from the usual hype of content-free startups!
Not a single technical glitch during the presentation, except for the sound guys at Slush ;-). The demos just worked fluidly.

Several new partnerships were announced, in addition to the existing Chinese D-Phone deal, amongst which:
DNA: a Finnish mobile operator,
Myriad: for the Android compatibility layer,
ST-Ericsson: which will ensure OS compatibility with Sony Ericsson smartphone hardware like the powerful NovaThor systems-on-chip.

etc...

The video for the keynote is here:
http://www.youtube.com/watch?v=bdLUJZR078k&feature=plcp

A shorter hands-on preview here:
http://www.youtube.com/watch?v=_c_BqnR_vAM&feature=plcp&list=PLQgR2jhO_J0y6zifH8KkevJoEYM9LOtkM

And coverage of the Jolla part of the event, plus an interview, here:
http://bergie.iki.fi/blog/jolla-sailfish/

Basically, my next smartphone will be running Sailfish. Provided Angry Birds Star Wars and Bad Piggies are available ;-).

Nixman

Wednesday, August 25, 2010

A handy command to monitor Linux multipath

Works on: Red Hat 5.3 with QLogic Fibre Channel cards

Monitoring failing paths on a Fibre Channel card connected to a SAN on Linux isn't very straightforward.

A handy command to check it in real time would be this one:

watch -n 1 "echo show paths | multipathd -k "

The output would look something like this:

multipathd> hcil    dev  dev_t  pri dm_st   chk_st   next_check

[...]
1:0:3:3 sdam 66:96  50  [failed][faulty] XX........ 4/20
1:0:3:4 sdan 66:112 50  [failed][faulty] XX........ 4/20
0:0:0:0 sda  8:0    50  [active][ready]  XXXXXXXX.. 17/20
0:0:0:1 sdb  8:16   10  [active][ready]  XXXXXXXX.. 17/20
0:0:0:2 sdc  8:32   50  [active][ready]  XXXXXXXX.. 17/20

[...]

Here, controller 1 is failing, resulting in 4 failed paths out of 8.

"4/20" and "17/20" being the number of secons left till the next check

Leave me a note if this post has been useful to you

Happy computing

Nixman

Monday, November 16, 2009

Number of physical processors on a Red Hat Linux server

Tools like top display the number of cores or the number of threads, not the number of physical processors in a server.


To obtain the number of physical processors, run the following command:

$ cat /proc/cpuinfo | grep "physical id"
physical id     : 0
physical id     : 2
physical id     : 0
physical id     : 2



In this case, we have two physical processors: 0 and 2.
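
To get this count directly, you can tally the unique physical ids in one line:

$ grep "physical id" /proc/cpuinfo | sort -u | wc -l
2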

Monday, September 21, 2009

Installing ocfs2 filesystem on RHEL 5.3


Until Oracle finally releases its much-awaited Universal File System, the only way to install grid infrastructure on shared storage is still ocfs2, which you may find useful as a regular cluster filesystem, too.

Download the rpms for Red Hat from
http://oss.oracle.com/projects/ocfs2/

For a 64-bit platform, you will need these ones:

(Do a uname -r to check which platform you are on.)

ocfs2-2.6.18-128.el5-1.4.2-1.el5.x86_64.rpm
ocfs2-tools-1.4.2-1.el5.x86_64.rpm
ocfs2console-1.4.2-1.el5.x86_64.rpm

# rpm -Uvh ocfs2-tools-1.4.2-1.el5.x86_64.rpm ocfs2-2.6.18-128.el5-1.4.2-1.el5.x86_64.rpm ocfs2console-1.4.2-1.el5.x86_64.rpm

You might have to install pygtk and vte first:
# yum install vte.x86_64
# yum install pygtk2.x86_64

Contrary to what the install doc states, you will first have to edit /etc/ocfs2/cluster.conf by hand before being able to do anything.

cluster:
        node_count = 1
        name = ocfs2

node:
        ip_port = 7777
        ip_address = my_cluster_node_1_interconnect_ip_address
        number = 1
        name = my_cluster_node_1_hostname
        cluster = ocfs2


Once you've edited the file on one of the nodes, you're not done yet. Do a:

# service o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Starting O2CB cluster ocfs2: OK

Only then may you start the graphical ocfs2 console:

# ocfs2console

In the GUI, go to Edit -> Add node, and add your second node with its interconnect IP address. Validate.

Go to Edit -> Propagate Configuration.

By now, you should see the following configuration on your two nodes.

node:
        ip_port = 7777
        ip_address = my_cluster_node_1_interconnect_ip_address
        number = 1
        name = my_cluster_node_1_hostname
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = my_cluster_node_2_interconnect_ip_address
        number = 2
        name = my_cluster_node_2_hostname
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2



Do a:
# service o2cb configure
on the second node

Check that the service is finally up and running on both nodes:

# ps -ef | grep o2
root     24816   153  0 17:27 ?        00:00:00 [o2net]
root     24891 18206  0 17:27 pts/0    00:00:00 grep o2

Then, you may go on to format the volume you've prepared on your shared storage.

Here, the volume is configured under Linux with Device-Mapper multipath, and is seen under /dev/mapper as VOL1.

# mkfs.ocfs2 -b 4K -C 4K -L "ocfs2volume1" /dev/mapper/VOL1


Then, you may just create a mount point on which to mount the volume on both nodes, /u01/app/ocfs2mounts/grid for example, if you're planning on installing Oracle grid infrastructure.

Mount the filesystem on both nodes:

# mount /dev/mapper/VOL1 /u01/app/ocfs2mounts/grid
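
If you want the volume remounted automatically at boot on both nodes, an /etc/fstab entry along these lines should do (just a sketch, assuming the o2cb and ocfs2 services are also enabled at boot; the _netdev option delays the mount until networking is up):

/dev/mapper/VOL1   /u01/app/ocfs2mounts/grid   ocfs2   _netdev,defaults   0 0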

Drop me a line, or have a look at the links, if this post has been useful to you.

Happy computing

Nixman.






Saturday, September 19, 2009

Oracle 11gR2 RAC on Red Hat Linux 5.3 install guide, part 1


Oracle 11gR2 RAC on Red Hat Linux 5.3 install guide, part 1: installing the grid infrastructure

Oracle 11gR2 has been released for Linux, and the installation has changed somewhat from previous versions, including 11gR1. In this step-by-step guide, we will walk you through a real-life Oracle Real Application Clusters (RAC) installation, its novelties, its incompatibilities, and the caveats of the Oracle install documentation.

The installation process is divided into two parts: the grid infrastructure installation, which now includes the clusterware but also the ASM installation (moved there from the regular database installation), and the database installation itself. This stems from the fact that ASM now supports voting disks and OCR files, and you are no longer required (in fact, it's now discouraged) to place the voting disks and OCR files on raw devices.

Grid infrastructure also installs the ACFS cluster filesystem for you, which allows you to share the ORACLE_HOME of your database installation between all the nodes of your RAC cluster. However, it doesn't allow you to share the grid infrastructure ORACLE_HOME between the nodes. For that, you would need to install the grid infrastructure binaries on an ocfs2 filesystem. However, that's not supported by Oracle, nor does it work. Last year, Oracle had promised an Oracle Universal File System (UFS), and it is a bit disappointing to see that ACFS is not yet what we expected.

Download the necessary files:

You will need the grid infrastructure disk, as well as the two database disks, from Oracle's site. Download the deinstall disk as well, as Oracle Universal Installer no longer supports deinstallation of the binaries, and everything has moved to this 300 MB-plus disk.

You will also need the three asmlib files from OTN, which are downloadable here.

Do a uname -rm on your platform to find out which ones are right for you.

The Oracle Validated RPM will also be useful to ensure you have all the necessary RPMs installed on your server.

Setting the system parameters and prerequisites:

Nothing much new here. Simply follow the install guide's instructions. The installer will check anyway whether you have set the parameters correctly, and will even generate a fixup script for most of the prerequisites.

# cat >> /etc/sysctl.conf <<EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmax = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
EOF

# sysctl -p

An extremely intriguing fact is the necessity to set up ntpd with the -x option, which prevents any abrupt stepping of the system clock for offsets under 600 s (instead of the default 128 ms). This is a workaround for a RAC bug that was supposed to have been corrected in release 10.2.0.3... Well, actually, it may not be a bug, but a feature for any cluster that needs synchronization. The downside of the -x option is that if your hardware is down for a month and the system clock drifts off by half an hour, it will take days for it to slew slowly back to network time.
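
On RHEL 5, this boils down to adding -x to the ntpd options and restarting the daemon; a sketch, assuming the stock /etc/sysconfig/ntpd layout:

# grep OPTIONS /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# service ntpd restart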

Be sure to do a chkconfig oracleasm on after setting up oracleasm on RHEL 5.3. Otherwise, you will corrupt your voting disks and OCR upon the first reboot. The install guide simply forgets to mention that oracleasm enable/disable has been deprecated on this platform.

Don't bother setting up VIPs or passwordless ssh connectivity, contrary to what the install guide instructs you to do: the installer won't appreciate your initiative, and you will have to set them up the way Oracle wants. Simply give the same password to your grid and oracle users on both nodes.

Creating the UNIX groups, users, and directories:

Create two separate users (grid and oracle for example), one for grid infrastructure installation, and one for the database installation, with separate ORACLE_BASE and ORACLE_HOME directories.

A new set of three groups has been introduced to manage ASM. The grid user should be a member of them.

A change to OFA is that the grid user's ORACLE_HOME cannot be under its ORACLE_BASE directory; it must be on a separate path.

Here, we will point the oracle user's home to a shared ACFS mount. We'll mount that filesystem later, after the grid infrastructure installation, once ACFS is available. Indeed, ACFS is built on top of ASM, which in turn is installed as part of the grid infrastructure. Hence the separation of the grid infrastructure and database installations.

(As a footnote: you may change u01 to get27, for example, and still be OFA-compliant)

# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/groupadd oper
# /usr/sbin/groupadd asmadmin
# /usr/sbin/groupadd asmdba
# /usr/sbin/groupadd asmoper
# /usr/sbin/groupadd orauser
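
If the grid and oracle accounts don't exist yet, create them with oinstall as their primary group first; a minimal sketch (the usermod commands below then add the supplementary groups):

# /usr/sbin/useradd -g oinstall grid
# /usr/sbin/useradd -g oinstall oracle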

# /usr/sbin/usermod -g oinstall -G dba,asmdba oracle
# /usr/sbin/usermod -g oinstall -G dba,asmdba,oper,oinstall,asmadmin grid

# mkdir /u01/app/11.2.0/grid
# mkdir /u01/app/grid
# mkdir /u01/app/acfsmounts/oracle
# chown -R grid:oinstall /u01/app
# chmod -R 775 /u01/app
# chown -R oracle:oinstall /u01/app/acfsmounts
# chmod -R 775 /u01/app/acfsmounts

# cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

To be continued ... stay tuned.

Friday, September 18, 2009

Discrepancies and gotchas in the Oracle 11gR2 grid infrastructure install guide for Linux


Discrepancies in Oracle 11gR2 grid infrastructure install guide for Linux:

- Oracle instructs you to create VIPs on both nodes as a preinstall task.
However, if you do so, the Oracle grid infrastructure installer will tell you the VIP addresses are already in use.

- Even though you have set up passwordless ssh connectivity between the two RAC nodes, the installer keeps telling you this is not the case. I guess it has something to do with Oracle using rsa1. I gave up, gave both my oracle users the same password, clicked on "setup", and let the installer do it for me. Everything went fine afterwards.

- /usr/sbin/oracleasm enable and disable have been deprecated on RHEL 5.3.
You have to use chkconfig oracleasm on.
If you fail to do so, asmlib is not loaded upon reboot, and your voting disks and OCR are corrupted.

- If you use ACFS, you have to use different ORACLE_BASE directories for the Oracle grid infrastructure user (e.g. grid: /u01/app/grid/) and the Oracle database user (e.g. oracle: /u01/app/oracle/).
In the install doc this is not so clear, as only the ORACLE_HOME directories (e.g. /u01/app/11.2.0/grid/ for grid and /u01/app/oracle/acfsmounts/orahome1/ for oracle) have to be different, while the ORACLE_BASE appears to be a single one.

- Even though you can set up a shared ORACLE_HOME through ACFS for the database binaries, you still have to rely on ocfs2 if you want to have the Oracle grid infrastructure binaries on a shared filesystem.

- You absolutely have to be patient and wait for the root.sh script to finish on your first node (it can take half an hour) before executing it on your other nodes. Otherwise, your installation will fail miserably.

A complete RAC installation guide for Oracle 11gR2 on RHEL 5.3 with multipath will follow soon.

Wednesday, September 16, 2009

Mapping Oracle ASM volumes to disks and devices on the Linux side:

/etc/init.d/oracleasm listdisks

VOL1

VOL2

export ORACLE_SID=+ASM

sqlplus / as sysdba

SQL> show parameter asm_diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/oracleasm/disks/*

SQL> select path from v$asm_disk;

/dev/oracleasm/disks/VOL1

ls -lsa /dev/oracleasm/disks

total 0
0 drwxr-xr-x 1 root   root 0     avr 18 12:51 .
0 drwxr-xr-x 1 root   root 0     avr 18 12:51 ..
0 brw-rw---- 1 oracle dba  8, 81 avr 18 12:51 VOL1
0 brw-rw---- 1 oracle dba  8, 97 avr 18 12:51 VOL2

Look at the major/minor numbers (e.g. 8, 81) and find them in /dev:

ls -lsa /dev | grep " 8,"

0 brw-rw---- 1 root floppy 8, 80 Jun 24 2004 sdf
0 brw-rw---- 1 root disk   8, 81 Jun 24 2004 sdf1   --> VOL1 is therefore on /dev/sdf1
0 brw-rw---- 1 root disk   8, 90 Jun 24 2004 sdf10

Or use cat /proc/partitions to see the major/minor numbers.
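
Depending on your asmlib version, oracleasm can also print the underlying device directly; a sketch (the exact output wording varies between releases):

# /etc/init.d/oracleasm querydisk VOL1
Disk "VOL1" is a valid ASM disk on device [8, 81]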

Wednesday, May 28, 2008

Using Pen load balancer as a port-forwarding proxy


Suppose you're a Paris-based firm that has several databases spread across different locations.

For example:
a) several Oracle databases listening on port 1521 on your own network, on addresses 10.75.75.1 - 10,
b) one Oracle database listening on port 1521 in Turku, Finland, linked by a VPN tunnel, on address 172.16.2.2,
c) one Oracle database listening on port 1521 in Toulouse, linked by a leased line, on address 172.31.31.31,
d) one Oracle database listening on port 21521 on a public internet WAN port in Tanger, with just source-address filtering as security, on address 212.66.66.66,
e) plus about a dozen other databases in different parts of the world at your clients' sites.

You have a partner providing an extra service to your clients, and he has to connect in real time to all of your databases. He doesn't want to spend money on a network connection to each and every one of your clients. He proposes to pay for a leased line to your Paris site, and you will do the dispatching.
Of course, you don't want him to know too much about your network, so you will restrict his access to only one address, which will be a firewall at your end of the line between your two sites.

Let's suppose the outside interface address of your firewall is 192.168.15.100, and the inside address (the one on your network) 10.75.75.254.

Providing connectivity to the (a) databases on your local network is pretty easy: you just have to add a port-forwarding rule and an access list on your router.

For example:
port 2001 on the outside interface ---> port 1521 on server1 at  10.75.75.1,
port 2002 on the outside interface ---> port 1521 on server2 at 10.75.75.2,
port 2003 on the outside interface ---> port 1521 on server3 at 10.75.75.3,
and so on...
Then, you just tell your partner to configure his tnsnames.ora to point at address 192.168.15.100 and port 2001 for server1, address 192.168.15.100 and port 2002 for server2, and so on...
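
If the firewall doing this dispatching happens to be a Linux box, one such rule could look like this with iptables (just a sketch; the eth0 interface name is an assumption):

# Forward port 2001 arriving on the outside interface to server1's listener
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2001 -j DNAT --to-destination 10.75.75.1:1521
iptables -A FORWARD -p tcp -d 10.75.75.1 --dport 1521 -j ACCEPT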

However, forwarding the ports to the external (b), (c) and (d) databases is another affair.

Luckily, Pen is there for you. It was designed as a load-balancing piece of software for server farms, but its features allow it to be used as a port-forwarding proxy, which is what we need in this case. It is available prepackaged for rpm- as well as deb-based Linux distros, or as GPL'ed source code. You may learn more about its numerous features on its website: http://siag.nu/pen/

All you need is a standard PC on your network with a Linux distro, let's say Debian, installed on it, as well as one (yes, only one) NIC.

Do an apt-get install pen (or an rpm -Uvh pen on an rpm-based distro).

Let's suppose you've given address 10.75.75.75 to this computer.
It has to know the routes to reach the Turku-, Toulouse-, and Tanger-based servers, and of course the route to reach your partner who wants to connect to them. It also has to be allowed to reach them on the ports they are listening on (i.e. 1521, 1521, and 21521 respectively).

All you need now is to write a little snippet of shell code in a file that you could call, for example, port-fwrd.sh:

#!/bin/bash
###############

# This is for the Turku-based database
pen 10.75.75.75:2011 172.16.2.2:1521

# This is for the Toulouse-based database
pen 10.75.75.75:2012 172.31.31.31:1521

# This is for the Tanger-based database
pen 10.75.75.75:2013 212.66.66.66:21521

exit
###############

Make port-fwrd.sh executable with chmod +x, and launch it: ./port-fwrd.sh
Have it start from your init scripts at runlevel 3, so that it gets executed when your machine reboots.
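
On Debian, a quick way to do that is to call the script from /etc/rc.local (a sketch, assuming you've dropped the script in /usr/local/bin and that rc.local runs at the end of the boot sequence):

# /etc/rc.local (excerpt)
/usr/local/bin/port-fwrd.sh
exit 0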

All you have to do now on your firewall is to forward:
port 2011 on the outside interface ---> port 2011 on 10.75.75.75
port 2012 on the outside interface ---> port 2012 on 10.75.75.75
port 2013 on the outside interface ---> port 2013 on 10.75.75.75

and tell your partner to configure his tnsnames.ora to reach:
Turku on address 192.168.15.100 and port 2011,
Toulouse on address 192.168.15.100 and port 2012,
Tanger on address 192.168.15.100 and port 2013.
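
For reference, the corresponding tnsnames.ora entry on your partner's side for, say, the Turku database could look like this (the alias and SERVICE_NAME are hypothetical placeholders):

TURKU_DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.15.100)(PORT = 2011))
    (CONNECT_DATA = (SERVICE_NAME = turku_service))
  )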

Beautiful and simple, ain't it?

Note: of course, if the server at one of the locations doesn't support shared sockets (as is the case with, for example, a Windows 2000 Server failsafe cluster), you won't be able to use port forwarding, since the answering port on the target server will be a dynamic one, and thus unpredictable.

Happy computing.

Drop me a comment if this post has been useful to you, or if you see any reason for additions or modifications.

Nixman