The following does ''not'' refer to the [http://xds.mpimf-heidelberg.mpg.de/html_doc/xds_parameters.html#CLUSTER_NODES= CLUSTER_NODES=] setup. The latter does ''not'' require a queueing system!
XDS can be run on a cluster using batch job scheduling software such as Grid Engine, Condor, Torque/PBS, LSF or SLURM. These are distributed resource management systems which monitor the CPU and memory usage of the available computing resources and schedule jobs to the least-loaded computers.
== Setup of XDS for a batch queue system ==
In order to set up XDS for a queuing system, the ''forkxds'' script needs to be changed to use qsub instead of ssh. Example scripts used for Univa Grid Engine (UGE) at Diamond (from https://github.com/DiamondLightSource/fast_dp/tree/master/etc/uge_array - thanks to Graeme Winter!) are below; they may need to be adapted for the specific environment and queueing system.
<pre>
#!/bin/bash
# forkxds Version DLS-2017/08
...
fi
qsub $qsub_opt -sync y -V -cwd -pe smp $maxcpu -t 1-$ntask `which forkxds_job`
</pre>
<pre>
...
echo $SGE_TASK_ID | $JOB
</pre>
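For sites running SLURM instead of Grid Engine, a replacement along the same lines might look like the sketch below. This is a hedged illustration, not a tested site script: the argument convention (number of tasks, processors per task, program name) is assumed to match the forkxds scripts above, and the sbatch options will need adjusting per site. The SUBMIT variable is only there to allow a dry run with echo.

```shell
#!/bin/bash
# Hypothetical forkxds replacement for SLURM -- a sketch, not a tested script.
# Assumed calling convention, matching the UGE forkxds above:
#   forkxds ntask maxcpu main_prog ...
forkxds_slurm() {
    local ntask=$1    # number of array tasks (MAXIMUM_NUMBER_OF_JOBS)
    local maxcpu=$2   # processors per task (MAXIMUM_NUMBER_OF_PROCESSORS)
    export JOB=$3     # mcolspot_par or mintegrate_par

    # --wait blocks until the whole array has finished, mirroring qsub -sync y;
    # each array task pipes its task id into $JOB, as forkxds_job does with $SGE_TASK_ID.
    ${SUBMIT:-sbatch} --wait --export=ALL --array=1-$ntask \
        --cpus-per-task=$maxcpu \
        --wrap='echo $SLURM_ARRAY_TASK_ID | $JOB'
}

# Dry run (prints the sbatch command line instead of submitting):
SUBMIT=echo forkxds_slurm 4 16 mintegrate_par
```

The dry run prints the constructed sbatch options, which makes it easy to check the array and CPU settings before pointing SUBMIT at the real sbatch.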
== Performance == | |||
Cluster nodes may have different numbers of processors. | |||
Please note that the output line
<pre>
number of OpenMP threads used NN
</pre>
in COLSPOT.LP and INTEGRATE.LP may be incorrect if MAXIMUM_NUMBER_OF_JOBS > 1 and the submitting node (the node that runs xds_par) has a different number of processors than the processing nodes (the nodes that run mcolspot_par and mintegrate_par). The actual number of threads on the processing nodes may be obtained with
<pre>
grep PARALLEL COLSPOT.LP
grep USING INTEGRATE.LP | uniq
</pre>
The algorithm that determines the number of threads used on a processing node is:
<pre>
NB = DELPHI / OSCILLATION_RANGE   # may be slightly adjusted by XDS if DATA_RANGE / NB is not an integer
NCORE = number of processors on the processing node, obtained by OMP_GET_NUM_PROCS()
if MAXIMUM_NUMBER_OF_PROCESSORS is not specified in XDS.INP then MAXIMUM_NUMBER_OF_PROCESSORS = NCORE
number_of_threads = MIN( NB, NCORE, MAXIMUM_NUMBER_OF_PROCESSORS, 99 )
</pre>
This is implemented in XDS from BUILT=20191015 onwards.
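The rule above can be spot-checked with a small shell sketch. This is an illustration only, not XDS code; the input values are made-up examples rather than values from a real XDS.INP:

```shell
#!/bin/bash
# Illustration of the thread-count rule above -- not XDS code.
# Example inputs; DELPHI= and OSCILLATION_RANGE= normally come from XDS.INP.
DELPHI=5                # degrees per batch
OSCILLATION_RANGE=0.25  # degrees per frame
MAX_PROC=32             # MAXIMUM_NUMBER_OF_PROCESSORS (defaults to NCORE)
NCORE=64                # what OMP_GET_NUM_PROCS() would return on the node

# NB = DELPHI / OSCILLATION_RANGE
NB=$(awk -v d=$DELPHI -v o=$OSCILLATION_RANGE 'BEGIN { printf "%.0f", d/o }')

# number_of_threads = MIN( NB, NCORE, MAXIMUM_NUMBER_OF_PROCESSORS, 99 )
THREADS=$NB
for v in $NCORE $MAX_PROC 99; do
    if [ "$v" -lt "$THREADS" ]; then THREADS=$v; fi
done
echo "number of OpenMP threads used $THREADS"
```

With these example values NB = 20, so each processing node would run with 20 threads even though 64 processors are present and MAXIMUM_NUMBER_OF_PROCESSORS=32 would allow more.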