
Thursday, April 12, 2012

Building a Beowulf cluster with Ubuntu 11.04 Server

This lesson is going to document how to build a small Beowulf cluster, something I have wanted to do for quite some time. A Beowulf cluster is a cluster of identical (or nearly identical) computers linked together in such a way as to permit parallel processing.

When I say nearly identical, I mean each computer should be roughly comparable in terms of speed and memory capacity. This will help ensure consistent performance across the cluster. The one exception is the master computer. If you have one computer which is better than the others, it should be your master node, as this is where you will compile and run programs from. It will also be the only one to need a keyboard and monitor permanently attached once everything is up and running.

I have a 1.3 GHz AMD with a 20 GB hard drive and 256 MB RAM with shared video memory as my slave node, and a 1.6 GHz P4 with an 80 GB hard drive, 256 MB RAM, and a 32 MB dedicated video card as the master.

Since this is Advanced Linux, I am assuming that you know something of Ubuntu and/or Linux in general and are comfortable navigating it, installing software, compiling from source code, and editing files. Also, always back up system files, namely those under the /etc directory that we will be working with, before making changes.
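A simple habit for this is to keep a .bak copy next to the original before you touch it. A minimal sketch, demonstrated on a scratch file rather than a real system file:

```shell
# Keep a .bak copy of a file before editing it.
# Demonstrated on a scratch file; for real system files you would
# need sudo, e.g.: sudo cp /etc/hosts /etc/hosts.bak
backup() { cp "$1" "$1.bak"; }

echo "127.0.0.1 localhost" > demo.conf   # scratch stand-in for a config file
backup demo.conf                          # creates demo.conf.bak
```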

I am using Ubuntu Server 11.04 and the MPICH2 clustering software. Right now, I only have two PCs but keep in mind the more machines you have, the more powerful your cluster will be. I will assume two machines designated as master_node and slave_node with a common user mpi. Replace with your own system names as appropriate.

Now, the actual setup:

The first step was to do a fresh install of Ubuntu Server 11.04 on two blank machines. As part of the install process, you are always prompted to create at least one user for the system, and this user will have administrative access via the sudo command. The root account is locked out by default, so everything will be done with this default user account. Keep in mind that Ubuntu Server is a command line only system, so all this will be done from a text prompt. You also need to be sure you have identical user accounts on all machines for this to work.

I am using NFS to provide a common directory to all machines, which also means I only need to configure and install the MPICH2 software in one location. I am also using a router with DHCP to configure the network interfaces. If you use a switch or crossover cables, you will probably need to configure the network interfaces manually.

There are some things that need to be set up on each machine.

Open the /etc/hosts file of each machine and add the IP address and hostname of every machine.
Also take out the line which associates 127.0.1.1 (on some systems 127.0.0.1) with the machine's hostname. Leave the one which says 127.0.0.1 localhost. This ensures that the loopback address only refers to the local machine.

Ex.:

127.0.0.1 localhost
192.168.1.105 master_node
192.168.1.103 slave_node

There may be other lines in this file, but these must be there for the cluster to function. Replace these IP addresses with your own.
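If you prefer to script the hosts-file edit, something like the following works. It is shown on a scratch copy, and the hostname mybox is just a placeholder:

```shell
# Comment out the 127.0.1.1 hostname line rather than deleting it,
# so the original is easy to restore. Shown on a scratch copy;
# for the real thing, operate on /etc/hosts with sudo.
printf '127.0.0.1 localhost\n127.0.1.1 mybox\n' > hosts.demo
sed -i 's/^127\.0\.1\.1/# &/' hosts.demo
cat hosts.demo
```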

Also modify the /etc/hosts.allow file and add the line ALL : ALL. This permits connections from any host, which is fine on a trusted private network but should not be done on a machine exposed to the internet.

With those steps out of the way, we can move on.

To configure the machines that will act as slave nodes:

sudo apt-get install nfs-common
sudo apt-get install openssh-server


If these were already installed, Ubuntu will simply tell you they are already the newest version.
Running the install commands just to be sure doesn't hurt anything.

Once nfs and the ssh server are installed, configuration of the slave nodes can proceed.

The first thing to do is set up an nfs mount in the /etc/fstab file. I am using the home directory of the default user on my main node as an nfs share, which is one reason why user accounts need to be identical across all machines. This way, every machine has the exact same home directory.

Change to the /etc directory and modify the fstab file.

You want to append a line like this to the end of the file for the nfs directory to be mounted:

master_node:/home/mpi /home/mpi nfs user,exec


That sets up the slave nodes to receive the directory exported by the master node. Once the master's NFS export is configured (covered below), you can mount it right away with sudo mount /home/mpi instead of rebooting.

Change back to your home directory.

Now set up a public key for passwordless ssh logins. The reason for passwordless ssh is to be able to use all the nodes without having to log in to each one every time you run MPICH2. The running of the cluster should be automatic across all nodes.

ssh-keygen -t dsa will generate a private/public key pair. (Note that recent OpenSSH releases have disabled DSA keys; on those systems ssh-keygen -t rsa works the same way.)

When prompted for a passphrase, hit enter to leave it blank.

Change to the .ssh directory and do the following:

cat id_dsa.pub >> authorized_keys

Once everything is configured, you can verify the setup with ssh slave_node from the master; it should log you in without asking for a password.

Now configure openssh by modifying the /etc/ssh/sshd_config file.

Change to the /etc/ssh directory and open the sshd_config file
Make the following changes to these lines:

RSAAuthentication yes
PubkeyAuthentication yes

Uncomment this line:
AuthorizedKeysFile %h/.ssh/authorized_keys

Uncomment this line and change yes to no, so it reads:
PasswordAuthentication no

Set the UsePAM line to no
Also set StrictModes to no
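Taken together, the relevant lines of /etc/ssh/sshd_config should end up looking like this:

```
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys
PasswordAuthentication no
UsePAM no
StrictModes no
```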

Issue sudo /etc/init.d/ssh restart to restart the ssh server. (On newer Ubuntu releases, sudo service ssh restart does the same thing.)

If you do this on each machine you intend to use as a slave node, they should be all set.


Configuring the master node.

Install the NFS server on the master node and configure the export:

sudo apt-get install nfs-kernel-server


Add this line to the /etc/exports file:

/home/mpi *(rw,insecure,sync)

Run sudo exportfs -r to export the directory to all slave nodes.

Installing MPICH2

Make sure you install the package build-essential on the main node, otherwise you will have no build tools or compilers.

Download MPICH2

http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads

I used stable version 1.4.1p1

Unpack the tar file:

tar -xzf mpich2-1.4.1p1.tar.gz

There are actually two different ways to build MPICH2. The typical ./configure, make, sudo make install works just fine, but the MPICH2 docs recommend this:

./configure 2>&1 | tee c.txt
make 2>&1 | tee m.txt
make install 2>&1 | tee mi.txt

It makes no difference to the software, but it will matter if you have trouble building MPICH2 and seek support from the main site. Running the commands this way captures the full build output in the files c.txt, m.txt, and mi.txt, which the developers will expect you to have in order to determine the problem.

Make a new directory in your home directory for the MPICH2 install. I used mpich2.

Change into the mpich2-1.4.1p1 directory and run ./configure with the options you need.

For my system the command was

./configure --prefix=/home/mpi/mpich2 --disable-f77 --disable-fc

The --disable-fc and --disable-f77 options were used since I don't have Fortran compilers installed. The prefix tells the installer where to place the program files. Other options can be found by viewing the mpich2 README file.


Once the configuration is done, simply do make followed by make install, or use the alternate above if you wish. (Since the prefix is inside your own home directory, sudo is not actually needed for the install step; running it with sudo can leave root-owned files on the NFS share.)

Everything should now be in place.

The last steps to setting everything up are to put the mpich2 bin directory on the path so that its programs can be found by the system, and to point LD_LIBRARY_PATH at its libraries:

export PATH=/home/mpi/mpich2/bin:$PATH
export LD_LIBRARY_PATH=/home/mpi/mpich2/lib:$LD_LIBRARY_PATH

These settings last only for the current session. To make them permanent, append the same two lines to ~/.bashrc; since the home directory is the NFS share, that covers every node at once. (Beware that sudo echo /home/mpi/mpich2/bin >> /etc/environment does not do what it looks like: the >> redirection is performed by your unprivileged shell, not by sudo, so it fails with permission denied, and /etc/environment does not expand variables such as $PATH anyway.)

Everything should now be installed and ready to go.

To test this, use the following commands; each should print a path under /home/mpi/mpich2/bin:

which mpirun
which mpiexec

The very last thing to do is set up a hosts file for the cluster in the home directory.

The file should be named hosts and should be set up as follows:

One line for each machine in the network, listed by hostname.

Ex:

master_node
slave_node
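Picking -n by hand gets tedious as the cluster grows. A small sketch that derives the process count from the hosts file; the file contents below just recreate the example names from above, so point this at your real file instead:

```shell
# Count the non-blank lines in the cluster hosts file and use that
# as the process count. The hosts file here is recreated from the
# example above purely for demonstration.
printf 'master_node\nslave_node\n' > hosts
NODES=$(grep -cv '^[[:space:]]*$' hosts)
echo "$NODES"
# mpiexec -f hosts -n "$NODES" ./mpich2-1.4.1p1/examples/cpi
```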

To test the cluster, run the example program cpi

mpiexec -f hosts -n 2 ./mpich2-1.4.1p1/examples/cpi

-f hosts tells mpich2 which host file and thus which machines to use. -n is the number of processes to run. This is usually equal to the number of machines available, but doesn't have to be.

If all has gone well, there should be a listing of each process and where it ran followed by the program output.

If not, let me know and I can help you work it out.

These steps should work as is on any recent version of Ubuntu and probably most other Debian-based distributions. Others will differ in some details, but I can provide advice for many of them.

Comments:

jai said...

Hi, thanks for posting this useful material.
I am trying to build the same kind of cluster with Ubuntu 12.04 Desktop version but something went wrong. I am unable to run "cpi" on all the connected nodes; it is running on the master node only. :(:(:(

Unknown said...

Dear sir,

Thanks for this post.
I am having a problem with regards to authentication, since when I run mpiexec from the master I receive Permission denied (publickey).

One more question, the ssh-keygen -t dsa should be run on the master/slave or both and also the /etc/ssh/sshd_config of the slave or of the master is changed?

Thanks a lot,
Paul.

Magno Tamagusko said...

Thanks a lot, I'll try it soon. Merry Christmas.

Amardeep said...

I tried the same procedure as is mentioned in your blog with the same version of mpich2, but I got the following output in the end.

bif@cluster:~$ mpiexec -f hosts -n 2 ./mpich2-1.4.1p1/examples/cpi
The authenticity of host 'node1 (172.16.13.231)' can't be established.
ECDSA key fingerprint is 92:10:7f:bc:71:93:49:05:85:34:84:7f:63:a5:c7:25.
Are you sure you want to continue connecting (yes/no)? yes

bif@cluster:~$ mpiexec -f hosts -n 2 ./mpich2-1.4.1p1/examples/cpi
Permission denied (publickey).

Amardeep said...

Please help me resolve this issue, I think my master node is unable to connect with slave node.

mercury group said...

I currently use Windows 7. Is it possible to install Linux on my computer inside Windows 7?

Xen0n said...

I seem to be getting a error

ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory.
Host key verification failed.

What did I do wrong?

