Cluster Installation

The following does ''not'' refer to the [http://xds.mpimf-heidelberg.mpg.de/html_doc/xds_parameters.html#CLUSTER_NODES= CLUSTER_NODES=] setup. The latter does ''not'' require a queueing system!

XDS can be run on a cluster using any batch job scheduling software such as Grid Engine, Condor, Torque/PBS, LSF or SLURM. These are distributed resource management systems that monitor the CPU and memory usage of the available computing resources and schedule jobs to the least-used computers.

== Setup of XDS for a batch queue system ==

In order to set up XDS for a queueing system, the ''forkxds'' script needs to be changed to use qsub instead of ssh: rather than starting each task on a remote host, it submits ''forkxds_job'' to Grid Engine as an array job. Example scripts used for Univa Grid Engine (UGE) at Diamond (from https://github.com/DiamondLightSource/fast_dp/tree/master/etc/uge_array - thanks to Graeme Winter!) follow this pattern; they may need to be changed for the specific environment and queueing system. Sketches of the two scripts are given below.

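The first script replaces ''forkxds''. It receives from xds_par the number of tasks, the maximum number of processors per task, and the name of the main program to run, and submits everything as one array job. The following is a minimal sketch of this idea rather than the verbatim Diamond script (see the URL above for that); the file name ''forkxds.job'' and the qsub options (e.g. the ''smp'' parallel environment) are assumptions that must match the local Grid Engine configuration.

<pre>
#!/bin/bash
# forkxds ntask maxcpu main -- called by xds/xds_par during COLSPOT and INTEGRATE
ntask=$1          # total number of independent jobs (tasks)
maxcpu=$2         # maximum number of processors used by each job
main=$3           # mcolspot|mcolspot_par|mintegrate|mintegrate_par

# record the program name for the array tasks; forkxds_job reads it back
# (the file name forkxds.job is an assumption of this sketch)
echo "$main" > forkxds.job

# submit all tasks as a single array job instead of ssh-ing to each node;
# -sync y makes qsub block until every task has finished, which is what
# xds_par expects of forkxds
qsub -sync y -V -cwd -pe smp $maxcpu -t 1-$ntask forkxds_job
</pre>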


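Each task of the array job then runs ''forkxds_job''. The version below is a minimal sketch built around the final line of the Diamond script (echo $SGE_TASK_ID | $JOB): the XDS main programs read the number of the task to work on from stdin, and Grid Engine supplies it as $SGE_TASK_ID. How $JOB is set (here: read from the hypothetical forkxds.job written above) is an assumption and must match the local ''forkxds''.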
<pre>
#!/bin/bash
# forkxds_job: executed once for each task of the array job submitted
# by forkxds. ASSUMPTION: forkxds wrote the program name to forkxds.job.
JOB=$(cat forkxds.job)
# each XDS main program reads the number of the task it should work on
# from stdin; Grid Engine provides it as $SGE_TASK_ID
echo $SGE_TASK_ID | $JOB
</pre>
== Performance ==
For the qsub command at the end of the script, it is not necessary to stick to the value of maxcpu that forkxds receives as $2 (which is supplied by xds_par). If the cluster has nodes with more cores than the node where xds_par was started, it may be more efficient to override maxcpu: by grepping MAXIMUM_NUMBER_OF_PROCESSORS= from XDS.INP, by supplying the value through an environment variable, or by hard-coding it in the script. If the cluster is homogeneous, this is of course no concern.
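
A minimal sketch of the first variant, to be placed inside ''forkxds''; it assumes the keyword appears on its own line in XDS.INP and ignores the possibility of commented-out lines:

<pre>
# override maxcpu ($2) with MAXIMUM_NUMBER_OF_PROCESSORS= from XDS.INP, if present
maxcpu=$2
mnp=$(awk -F= '/MAXIMUM_NUMBER_OF_PROCESSORS=/{print $2+0; exit}' XDS.INP)
if [ -n "$mnp" ] && [ "$mnp" -gt 0 ]; then
   maxcpu=$mnp
fi
</pre>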