Performance
Considerations
In order of decreasing effect:
- XDS scales well (i.e. the wallclock time for data processing goes down when the number of available cores is increased) in the COLSPOT, IDXREF, INTEGRATE and CORRECT steps when using the MAXIMUM_NUMBER_OF_PROCESSORS keyword. This triggers program-level parallelization, using OpenMP threads.
- the program scales very well in the COLSPOT and INTEGRATE steps when using the MAXIMUM_NUMBER_OF_JOBS keyword. This triggers shell-level parallelization. There is a slight penalty associated with high values of MAXIMUM_NUMBER_OF_JOBS=: a) in INTEGRATE, geometry refinement results are not transferred between JOBs (see Pathologies); b) in COLSPOT, the phi values at the borders between JOBs are less accurate (in particular if the mosaicity is high), and the same reflection may be listed twice in SPOT.XDS if it extends over the border between JOBs. The latter effect may be mitigated by specifying as many SPOT_RANGEs as JOBs and leaving gaps between the SPOT_RANGEs (see the second sketch after this list).
- combining both keywords gives the highest performance in my experience (see 2VB1#XDS_processing for an example, and the first sketch after this list). As a rough guide, I'd choose them to be approximately equal; an even number for MAXIMUM_NUMBER_OF_PROCESSORS is preferable because it fits typical hardware better. If in doubt, use a lower number for MAXIMUM_NUMBER_OF_JOBS than for MAXIMUM_NUMBER_OF_PROCESSORS.
- some overcommitting of resources (i.e. MAXIMUM_NUMBER_OF_PROCESSORS * MAXIMUM_NUMBER_OF_JOBS > number of cores) is beneficial; you'll have to play with these two parameters.
- the next thing to consider is DELPHI together with OSCILLATION_RANGE: if DELPHI (the rotation range of a batch of frames) is an integer multiple of MAXIMUM_NUMBER_OF_PROCESSORS * OSCILLATION_RANGE, the work is nicely balanced across the threads. For this purpose you may want to change (if possible, raise) the value of DELPHI (default: 5 degrees). With fine-sliced data, thread imbalance is not an issue - but for those users who want to collect 1° frames (which I think is not the best way nowadays ...) it should be a consideration. A further consideration: the total number of frames should be an integer multiple of the intended number of frames in a batch. Example: 360 frames of 0.5° can be processed optimally on an 8-core machine by specifying DELPHI=4, since then there are 8 frames in a batch and the complete dataset has 45 batches. For weak data one should consider raising DELPHI to 12, which gives 15 batches. A trick: if you want to use DELPHI=8 in this situation, just specify DATA_RANGE=1 368 (pretending 23 batches of 8°) instead of DATA_RANGE=1 360. XDS will complain about the missing 8 frames, but this has no adverse effect except that no FRAME.cbf is produced. None of this matters much for a single dataset, but for mass processing of datasets it does make a difference (see the last sketch after this list).
- performance-wise, I/O also plays a role: as soon as you run 24 or so processes, a single Gigabit Ethernet connection may become limiting. On the other hand, shell-level parallelization smooths the load.
- REFINE(INTEGRATE)= ! (empty list) makes INTEGRATE go much faster through the frames, since frames are read less often when processing a batch (this is included in the first sketch after this list).
- XDS with the MAXIMUM_NUMBER_OF_JOBS keyword can use several machines. This requires some setup as explained at the bottom of [1].
- Hyperthreading (SMT), if available on Intel CPUs, is beneficial. A "virtual" core has only about 20% of the performance of a "physical" core, but it comes at no cost - you just have to switch it on in the BIOS of the machine.
- The 64-bit binaries are generally a bit faster than the 32-bit binaries (but that is not specific to XDS).
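To make the combination of program-level and shell-level parallelization concrete, here is a minimal XDS.INP sketch; the numbers assume a hypothetical 32-core machine and are a starting point to play with, not a recommendation:

 ! hypothetical 32-core machine; adjust the values to your hardware
 MAXIMUM_NUMBER_OF_PROCESSORS=8  ! OpenMP threads per job (program-level parallelization)
 MAXIMUM_NUMBER_OF_JOBS=6        ! independent jobs (shell-level parallelization); 8*6=48 moderately overcommits 32 cores
 REFINE(INTEGRATE)=              ! empty list: no geometry refinement in INTEGRATE, so frames are read less often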
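The SPOT_RANGE mitigation for COLSPOT could look like the sketch below; it assumes a hypothetical 3600-frame dataset and that, with as many SPOT_RANGEs as JOBs, each JOB processes one SPOT_RANGE so that no reflection extends across a JOB border (the frame numbers are purely illustrative):

 DATA_RANGE=1 3600
 MAXIMUM_NUMBER_OF_JOBS=4
 ! four SPOT_RANGEs, one per JOB, with gaps at the JOB borders
 SPOT_RANGE=1 890
 SPOT_RANGE=911 1790
 SPOT_RANGE=1811 2690
 SPOT_RANGE=2711 3590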
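Finally, the DELPHI arithmetic from the example above (360 frames of 0.5° on an 8-core machine) written out as an XDS.INP sketch; the second variant is the DATA_RANGE trick and is shown commented out:

 OSCILLATION_RANGE=0.5
 MAXIMUM_NUMBER_OF_PROCESSORS=8
 DATA_RANGE=1 360
 DELPHI=4            ! 4 / 0.5 = 8 frames per batch, one per core; 360 / 8 = 45 batches
 ! trick for DELPHI=8 (16 frames per batch, two per core):
 ! DATA_RANGE=1 368  ! pretend 23 batches of 8 degrees; XDS complains about the 8
 ! DELPHI=8          ! missing frames, but only FRAME.cbf is not produced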