2VB1
XDS processing
- use generate_XDS.INP to obtain a good starting point
- edit XDS.INP and change the following:
 ORGX=3130 ORGY=3040    ! for ADSC, header values are subject to interpretation; better inspect the table in IDXREF.LP!
 TRUSTED_REGION=0 1.5   ! we want the whole detector area
 ROTATION_AXIS=-1 0 0   ! at this beamline the spindle goes backwards!
- for faster processing on a machine with many cores, use the following (e.g. for 16 cores; the product of the two values gives the total number of cores used; a complete setup is sketched below):
 MAXIMUM_NUMBER_OF_PROCESSORS=2
 MAXIMUM_NUMBER_OF_JOBS=8
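As a concrete illustration, the whole setup could be scripted roughly as follows. This is a minimal sketch: the directory and name template of the frames (e/e_1_?????.img) are hypothetical, and it relies on XDS taking the last occurrence of a repeated keyword (if in doubt, edit the generated XDS.INP directly).

 # generate a starting XDS.INP (frame name template is hypothetical)
 generate_XDS.INP "e/e_1_?????.img"
 # append the corrections from above; XDS uses the last occurrence
 # of a keyword, so these override the generated values
 cat >> XDS.INP <<EOF
 ORGX=3130 ORGY=3040
 TRUSTED_REGION=0 1.5
 ROTATION_AXIS=-1 0 0
 MAXIMUM_NUMBER_OF_PROCESSORS=2
 MAXIMUM_NUMBER_OF_JOBS=8
 EOF
 xds_par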
For all the sweeps, processing stopped with the usual error message after the IDXREF step.
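As an aside, since INTEGRATE timings are reported below, the runs were evidently continued past this stop; a common way to do that, once the indexing solution in IDXREF.LP has been judged acceptable, is to restart with only the remaining steps:

 # continue after a (harmless) IDXREF stop; the appended JOB= line
 # overrides the earlier one in XDS.INP
 echo "JOB= DEFPIX INTEGRATE CORRECT" >> XDS.INP
 xds_par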
timings for processing sweep "e" as a function of MAXIMUM_NUMBER_OF_PROCESSORS and MAXIMUM_NUMBER_OF_JOBS
The following is going to be rather technical! If you are only interested in crystallography, skip this.
Using
 MAXIMUM_NUMBER_OF_PROCESSORS=2
 MAXIMUM_NUMBER_OF_JOBS=8
we observe for the INTEGRATE step:
 total cpu time used             2063.6 sec
 total elapsed wall-clock time    296.1 sec
Using
 MAXIMUM_NUMBER_OF_PROCESSORS=1
 MAXIMUM_NUMBER_OF_JOBS=16
the times are
 total cpu time used             2077.1 sec
 total elapsed wall-clock time    408.2 sec
Using
 MAXIMUM_NUMBER_OF_PROCESSORS=4
 MAXIMUM_NUMBER_OF_JOBS=4
the times are
 total cpu time used             2102.8 sec
 total elapsed wall-clock time    315.6 sec
Using
 MAXIMUM_NUMBER_OF_PROCESSORS=16 ! the default for xds_par on a 16-core machine
 MAXIMUM_NUMBER_OF_JOBS=1        ! the default
the times are
 total cpu time used             2833.4 sec
 total elapsed wall-clock time    566.5 sec
Using
 MAXIMUM_NUMBER_OF_PROCESSORS=4
 MAXIMUM_NUMBER_OF_JOBS=8
(thus overcommitting the available cores by a factor of 2) the times are
 total cpu time used             2263.5 sec
 total elapsed wall-clock time    320.8 sec
Using
 MAXIMUM_NUMBER_OF_PROCESSORS=4
 MAXIMUM_NUMBER_OF_JOBS=6
(thus overcommitting the available cores, but less severely) the times are
 total cpu time used             2367.6 sec
 total elapsed wall-clock time    267.2 sec
Thus,
 MAXIMUM_NUMBER_OF_PROCESSORS=4
 MAXIMUM_NUMBER_OF_JOBS=6
performs best for a dual-Xeon X5570 machine with 24 GB of memory and a RAID1 of two 1 TB SATA disks. It should be noted that the dataset comprises 27 GB, so reading it in 296 seconds amounts to about 92 MB/s of sustained reading. The processing time is thus limited by disk access, not by the CPU. And no, the data are not simply read from RAM (verified by emptying the page cache with "echo 3 > /proc/sys/vm/drop_caches" before the XDS run).
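For reference, a timing series like the one above lends itself to scripting. The sketch below assumes bash, root privileges for the cache drop, a pristine copy of the input file saved as XDS.INP.template (a hypothetical name), and xds_par on the PATH; only the PROCESSORS/JOBS pairs and the INTEGRATE.LP totals are taken from this page.

 #!/bin/bash
 # time INTEGRATE for the PROCESSORS/JOBS combinations tried above
 for pair in "2 8" "1 16" "4 4" "16 1" "4 8" "4 6"; do
   set -- $pair; procs=$1; jobs=$2
   cp XDS.INP.template XDS.INP          # hypothetical pristine template
   cat >> XDS.INP <<EOF
 MAXIMUM_NUMBER_OF_PROCESSORS=$procs
 MAXIMUM_NUMBER_OF_JOBS=$jobs
 EOF
   sync && echo 3 > /proc/sys/vm/drop_caches   # empty the page cache (root)
   xds_par > /dev/null
   echo "PROCESSORS=$procs JOBS=$jobs"
   grep "total.*time" INTEGRATE.LP      # cpu and wall-clock totals
 done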