Cluster Installation

XDS can be run in cluster mode using any command-line job scheduling software such as Grid Engine, Condor, Torque/PBS, LSF or SLURM. This page describes a setup based on Grid Engine, a distributed resource management system that monitors the CPU and memory usage of the available computing resources and schedules each job to the least loaded machine. Grid Engine was chosen for its scalability, cost effectiveness, ease of maintenance and high throughput. It was developed by Sun Microsystems (Sun Grid Engine, SGE), later acquired by Oracle and subsequently by Univa. The latest versions are closed source, but older versions are open source and shipped with many Linux distributions including Red Hat/CentOS 6.x. Open-source forks also exist: Open Grid Scheduler [http://gridscheduler.sourceforge.net/] and Son of Gridengine [https://arc.liv.ac.uk/trac/SGE].
The following does ''not'' refer to the  [http://xds.mpimf-heidelberg.mpg.de/html_doc/xds_parameters.html#CLUSTER_NODES= CLUSTER_NODES=] setup. The latter does ''not'' require a queueing system!
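For comparison, a minimal sketch of the CLUSTER_NODES= mechanism in XDS.INP (hypothetical host names): it distributes the COLSPOT/INTEGRATE jobs to the listed nodes via passwordless ssh rather than through a queueing system.
<pre>
MAXIMUM_NUMBER_OF_JOBS=3
CLUSTER_NODES=node1 node2 node3
</pre>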


== XDS Cluster setup ==


In order to set up XDS in cluster mode, the ''forkcolspot'' and ''forkintegrate'' scripts need to be changed to access the Grid Engine environment and send jobs to different machines (these two scripts belong to older XDS distributions; newer XDS versions use a single ''forkxds'' script, covered in the section further below). Example scripts are given below; they need to be adapted to the local environment. Note the ''qsub'' command, which submits forkcolspot_job or forkintegrate_job to Grid Engine.


<pre>
#forkcolspot
#!/bin/bash
ntask=$1  #total number of jobs
maxcpu=$2 #maximum number of processors used by each job
          #maxcpu=1: use 'mcolspot' (single processor)
          #maxcpu>1: use 'mcolspot_par' (openmp version)

pids=""                    #list of background process ID's
itask=1
echo "MAX CPU $maxcpu"

#Sudhir check for gridengine submit host
submitnodes=`qconf -sh 2> /dev/null`
thishost=`hostname`
isgrid=0
for node in $submitnodes ; do
  if [ "$node" == "$thishost" ]
  then
    isgrid=1
    echo "Grid Engine environment detected"
  fi
done

while test $itask -le $ntask
do
  if [ $maxcpu -gt 1 ]
  then
    if [ $isgrid -eq 1 ]
    then
      qsub -sync y -V -l h_rt=0:20:00 -cwd \
        forkcolspot_job \
        $itask &
    else echo "$itask" | mcolspot_par &
    fi
  else echo "$itask" | mcolspot    &
  fi
  pids="$pids $!"  #append id of the background process just started

  itask=`expr $itask + 1`
done
trap "kill -15 $pids" 2 15  # 2:Control-C; 15:kill
wait  #wait for all background processes issued by this shell
rm -f mcolspot.tmp  #this temporary file was generated by xds
rm -rf fork*job*
</pre>
 
----
 
<pre>
#forkcolspot_job
 
#!/bin/csh
 
echo $1
set itask=$1
 
echo $itask | mcolspot_par
</pre>
 
----
 
 
<pre>
#forkintegrate
#!/bin/bash
 
fframe=$1 #id number of the first image
ni=$2    #number of images in the data set
ntask=$3  #total number of jobs
niba0=$4  #minimum number of images in a batch
maxcpu=$5 #maximum number of processors used by each job
          #maxcpu=1: use 'mintegrate' (single processor)
          #maxcpu>1: use 'mintegrate_par' (openmp version)
 
minitask=$(($ni / $ntask)) #minimum number of images in a job
mtask=$(($ni % $ntask))    #number of jobs with minitask+1 images
pids=""                    #list of background process ID's
nba=0
litask=0
itask=1
 
#Sudhir check for gridengine submit host
submitnodes=`qconf -sh 2> /dev/null`
thishost=`hostname`
isgrid=0
for node in $submitnodes ; do
if [ "$node" == "$thishost" ]
then
isgrid=1
echo "Grid Engine environment detected"
fi
done
 
while test $itask -le $ntask
do
  if [ $itask -gt $mtask ]
      then nitask=$minitask
      else nitask=$(($minitask + 1))
  fi
  fitask=`expr $litask + 1`
  litask=`expr $litask + $nitask`
  if [ $nitask -lt $niba0 ]
      then n=$nitask
      else n=$niba0
  fi
  if [ $n -lt 1 ]
      then n=1
  fi
  nbatask=$(($nitask / $n))
  nba=`expr $nba + $nbatask`
  image1=$(($fframe + $fitask - 1)) #id number of the first image
 
  if [ $maxcpu -gt 1 ]
  then
    if [ $isgrid -eq 1 ]
    then
      qsub -sync y -V -l h_rt=0:20:00 -cwd \
        forkintegrate_job \
        $image1 $nitask $itask $nbatask &
    else echo "$image1 $nitask $itask $nbatask" | mintegrate_par  &
    fi
  else echo "$image1 $nitask $itask $nbatask" | mintegrate  &
  fi
  pids="$pids $!"  #append id of the background process just started
 
   itask=`expr $itask + 1`
done
trap "kill -15 $pids" 2 15  # 2:Control-C; 15:kill
wait  #wait for all background processes issued by this shell
rm -f mintegrate.tmp  #this temporary file was generated by mintegrate
rm -rf fork*job*
</pre>
<pre>
#forkintegrate_job
#!/bin/csh
set image1=$1
set nitask=$2
set itask=$3
set nbatask=$4
set host=`uname -a | awk '{print $2}'`
echo $image1 $nitask $itask $nbatask $host >> jobs.log
echo $image1 $nitask $itask $nbatask | mintegrate_par
</pre>


== Grid Engine Installation ==


Grid Engine consists of a master daemon, ''sgemaster'', which schedules jobs to execution nodes. On each execution node a daemon named ''sge_execd'' runs the job and reports completion back to sgemaster. Jobs are submitted to sgemaster with commands such as qsub, or via the DRMAA C, Java or IDL bindings from any application that wants to run XDS.


[[File:Gridengine arch1.png]]
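A quick way to check that submission works once the daemons are running (standard Grid Engine commands; the test command is arbitrary):
<pre>
qsub -cwd -b y /bin/hostname   # -b y: submit a binary instead of a job script
qstat                          # list pending and running jobs
qacct -j JOBID                 # accounting record once the job has finished
</pre>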
if [[ -n "$FORKXDS_PROJECT" ]] ; then
    qsub_opt="$qsub_command -P $FORKXDS_PROJECT"
fi


if [[ -n "$FORKXDS_QUEUE" ]] ; then
    qsub_opt="$qsub_command -q $FORKXDS_QUEUE"
fi


The Red Hat/CentOS Linux distribution comes with rpms for installing Grid Engine; administrative privileges are required. Install the gridengine rpms on all nodes using the following command. The default shell for Grid Engine is /bin/csh. '''It is assumed that all the workstations involved can access the storage (via NFS or another cluster file system) where the data reside, and that authentication is handled through a protocol like LDAP.'''
<pre>
root@ws1:/home 1> yum install gridengine gridengine-qmaster gridengine-execd  gridengine-qmon


root@ws1:/home 2> rpm -qa | grep gridengine
gridengine-qmaster-6.2u5-10.el6.4.x86_64
gridengine-qmon-6.2u5-10.el6.4.x86_64
gridengine-execd-6.2u5-10.el6.4.x86_64
gridengine-6.2u5-10.el6.4.x86_64
</pre>


By default the gridengine installation directory is /usr/share/gridengine; its contents are shown below.


<pre>
root@ws1:/home 3> cd /usr/share/gridengine


root@ws1:/home 4> ls
bin   default  hadoop    install_execd    lib  my_configuration.conf  qmon  utilbin
ckpt  doc      inst_sge  install_qmaster  mpi  pvm                    util
</pre>
Let's say ''ws1'' is the ''sgemaster'' node; it is installed using install_qmaster.


==== Installing sgemaster ====


<pre>
root@ws1:/usr/share/gridengine 5>./install_qmaster
</pre>


Most of the answers are yes/no or simply pressing Enter. The following things need to be decided before installation:
* The admin user is root.
* The following important environment variables are written to /usr/share/gridengine/default/common/settings.csh (and settings.sh), which should be sourced by every shell from which Grid Engine commands are run, since it also adds the Grid Engine binaries to the $PATH (see the example after this list):
** $SGE_ROOT=/usr/share/gridengine
** $SGE_QMASTER_PORT=6444
** $SGE_EXECD_PORT=6445
** $SGE_CELL=default
* The JMX MBean server is not used.
* The spooling method used is ''classic''.
* There is an option to give an administrative email address, which is very useful: whenever there is a problem, Grid Engine will send error messages to that address.
* Have ready a file containing the admin and submit hosts, or enter all the hosts manually separated by spaces; use fully qualified DNS names.
* In this installation a shadow host is not used.
* After the shadow host step make sure the allhosts group and all.q are created, otherwise the sge_execd installation will have problems.
* Scheduler tuning is selected as 'Max'. This has the disadvantage that Grid Engine schedules immediately without taking the load into account, so successive job submissions go to the same host until all of its slots are filled. Selecting 'Normal' takes the load into account, but adds an overhead of a few seconds per job.
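A minimal sketch of sourcing the settings file (paths as chosen above; bash users use the settings.sh counterpart):
<pre>
# csh/tcsh:
source /usr/share/gridengine/default/common/settings.csh
# bash:
. /usr/share/gridengine/default/common/settings.sh
echo $SGE_ROOT $SGE_CELL $SGE_QMASTER_PORT $SGE_EXECD_PORT
</pre>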
 
 
After finishing the installation, the configuration files are automatically written to the directory /usr/share/gridengine/default, since the cell name selected is 'default'. This directory can be shared over NFS; otherwise copy it to every host used in the cluster.
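If the cell directory is not shared over NFS, it can be replicated to each execution host, for example (hypothetical host name ws2):
<pre>
rsync -a /usr/share/gridengine/default ws2:/usr/share/gridengine/
</pre>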
 
==== Installing sge_execd ====
 
On each execution node, install the execution daemon using the following command:
<pre>
root@ws2:/usr/share/gridengine 5>./install_execd
</pre>
 
The input consists almost entirely of pressing Return if you have already copied the 'default' directory to this node.
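Once sge_execd is running, the new host should show up on the qmaster; a quick check with standard Grid Engine commands:
<pre>
qconf -sel        # list the execution hosts registered with the qmaster
qhost             # show architecture, load and memory of each execution host
qconf -sq all.q   # inspect the default cluster queue and its host list
</pre>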
 
== Restarting Grid Engine ==
 
When Grid Engine is installed for the first time, the /etc/init.d/sgemaster and /etc/init.d/sge_execd services are installed automatically.
If you want to restart sgemaster, make sure all the sge_execd daemons are stopped first. You can do this with the following commands:
<pre>
service sge_execd stop
service sgemaster stop
</pre>
For starting:
<pre>
service sge_execd start
service sgemaster start
</pre>
Whenever workstations need to be restarted, make sure the sgemaster workstation is started first. To have the services start automatically at boot, make sure chkconfig is on:
<pre>
chkconfig sgemaster on
chkconfig sge_execd on
</pre>
== Son of Gridengine ==
rpms are available from this link:


http://arc.liv.ac.uk/downloads/SGE/releases/8.1.8/


By default these rpms install into the single directory /opt/sge instead of scattering files across /usr/bin, /usr/share/gridengine and /usr/spool/gridengine.


The default shell for Son of Gridengine is /bin/sh, which on these systems is /bin/bash.
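A minimal sketch of installing the downloaded rpms (hypothetical file names; adjust to the actual release). After running the qmaster/execd installation steps as above with cell 'default', the environment is loaded from /opt/sge:
<pre>
yum localinstall gridengine-*.rpm
. /opt/sge/default/common/settings.sh
</pre>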


== Setup of XDS for a batch queue system ==

Newer XDS versions ship a single ''forkxds'' script instead of the separate ''forkcolspot'' and ''forkintegrate'' scripts. In order to set up XDS for a queueing system, this ''forkxds'' script needs to be changed to use qsub instead of ssh. Example scripts used with Univa Grid Engine (UGE) at Diamond (from https://github.com/DiamondLightSource/fast_dp/tree/master/etc/uge_array - thanks to Graeme Winter!) are given below; they may need to be adapted to the specific environment and queueing system.

<pre>
# forkxds
#!/bin/bash
#                    forkxds          Version DLS-2017/08
#
# enables  multi-tasking by splitting the COLSPOT and INTEGRATE
# steps of xds into independent jobs. Each job is carried out by 
# a Fortran main program (mcolspot, mcolspot_par, mintegrate, or
# mintegrate_par). The jobs are distributed among the processor 
# nodes of the NFS cluster network.
#
# 'forkxds' is called by xds or xds_par by the Fortran instruction
# CALL SYSTEM('forkxds ntask maxcpu main rhosts'),
#    ntask  ::total number of independent jobs (tasks)
#   maxcpu  ::maximum number of processors used by each job
#    main   ::name of the main program to be executed; could be
#             mcolspot | mcolspot_par | mintegrate | mintegrate_par
#   rhosts  ::names of CPU cluster nodes in the NFS network 
#
# DLS UGE port of script to operate nicely with cluster 
# scheduling system - will work with any XDS usage but is 
# aimed for fast_dp see fast_dp#3. Options passed through environment:
#
# FORKXDS_PRIORITY - priority within queue, e.g. 1024
# FORKXDS_PROJECT - UGE project to assign for this
# FORKXDS_QUEUE - queue to submit to

ntask=$1  #total number of jobs
maxcpu=$2 #maximum number of processors used by each job
main=$3   #name of the main program to be executed

rm -f forkxds.params
itask=1
while test $itask -le $ntask
do
   echo $main >> forkxds.params
   itask=`expr $itask + 1`
done

# save environment
echo "PATH=$PATH" > forkxds.env
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH" >> forkxds.env

# check environment for queue; project; priority information
qsub_opt=""
if [[ -n "$FORKXDS_PRIORITY" ]] ; then
    qsub_opt="$qsub_opt -p $FORKXDS_PRIORITY"
fi

if [[ -n "$FORKXDS_PROJECT" ]] ; then
    qsub_opt="$qsub_opt -P $FORKXDS_PROJECT"
fi

if [[ -n "$FORKXDS_QUEUE" ]] ; then
    qsub_opt="$qsub_opt -q $FORKXDS_QUEUE"
fi

qsub $qsub_opt -sync y -V -cwd -pe smp $maxcpu -t 1-$ntask `which forkxds_job`
</pre>
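The three FORKXDS_* variables are read from the environment of the process that runs xds_par, so a user can, for example, select a queue and project before starting XDS (hypothetical queue and project names):
<pre>
export FORKXDS_QUEUE=medium.q      # hypothetical queue name
export FORKXDS_PROJECT=mx12345     # hypothetical UGE project
xds_par
</pre>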


<pre>
# forkxds_job

#!/bin/bash

params=$(awk "NR==$SGE_TASK_ID" forkxds.params)
JOB=`echo $params | awk '{print $1}'`

# load environment
. forkxds.env

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH
export PATH=$PATH
echo $SGE_TASK_ID | $JOB
</pre>
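Since SLURM is mentioned above as another supported scheduler, the array submission might translate roughly as follows (untested sketch; forkxds_job would then read $SLURM_ARRAY_TASK_ID instead of $SGE_TASK_ID, and flag names vary between SLURM versions):
<pre>
sbatch --wait --export=ALL --chdir="$PWD" \
       --cpus-per-task=$maxcpu --array=1-$ntask forkxds_job
</pre>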

== Performance ==

Cluster nodes may have different numbers of processors. Please note that the output line

<pre>
number of OpenMP threads used  NN
</pre>

in COLSPOT.LP and INTEGRATE.LP may be incorrect if MAXIMUM_NUMBER_OF_JOBS > 1, and the submitting node (the node that runs xds_par) has a different number of processors than the processing node(s) (the nodes that run mcolspot_par and mintegrate_par). The actual numbers of threads on the processing nodes may be obtained by

<pre>
grep PARALLEL COLSPOT.LP
grep USING INTEGRATE.LP | uniq
</pre>

The algorithm that determines the number of threads used on a processing node is:

<pre>
NB = DELPHI / OSCILLATION_RANGE   # this may be slightly adjusted by XDS if DATA_RANGE / NB is not integer
NCORE = number of processors in the processing node, obtained by OMP_GET_NUM_PROCS()
if MAXIMUM_NUMBER_OF_PROCESSORS is not specified in XDS.INP then MAXIMUM_NUMBER_OF_PROCESSORS = NCORE
number_of_threads = MIN( NB, NCORE, MAXIMUM_NUMBER_OF_PROCESSORS, 99 )
</pre>
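For example, with DELPHI=5 and OSCILLATION_RANGE=0.1, NB = 50; on a 72-core processing node with MAXIMUM_NUMBER_OF_PROCESSORS unset, number_of_threads = MIN(50, 72, 72, 99) = 50.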

This is implemented from BUILT=20191015 onwards.