Performance (revision of 16 August 2022)
#*in COLSPOT, the phi values at the borders between JOBs are less accurate (in particular if the mosaicity is high), and the same reflection may be listed twice in SPOT.XDS if it extends over the border between JOBs. The latter effect may be mitigated by having as many SPOT_RANGEs as JOBs, and leaving gaps between the SPOT_RANGEs; see [[Problems#IDXREF_produces_too_long_axes]].
# combining these two keywords gives the highest performance (see [[2VB1#XDS_processing]] for an example). As a rough guide, I'd choose them to be approximately equal; an even number for MAXIMUM_NUMBER_OF_PROCESSORS is preferable because it fits usual hardware better. If in doubt, use a lower number for MAXIMUM_NUMBER_OF_JOBS than for MAXIMUM_NUMBER_OF_PROCESSORS. Since 2017, XDS has an automatic feature that divides the available cores into JOBs (operating-system processes), each running with multiple cores. You use this automatic feature when you run <code>xds_par</code> and don't specify MAXIMUM_NUMBER_OF_JOBS.
# NUMBER_OF_IMAGES_IN_CACHE avoids repeated (3-fold) reading of data frames in the INTEGRATE task during processing of a batch of frames. This comes at the expense of memory (RAM) and is discussed in [[Eiger]]. The default is DELPHI/OSCILLATION_RANGE+1 and is usually adequate. Only on low-memory systems (e.g. an 8 GB RAM machine processing Eiger 16M data collected with 0.1° oscillation range, at DELPHI=5 and MAXIMUM_NUMBER_OF_JOBS=1) should this be set to 0, to conserve memory and avoid slow processing due to thrashing, or even killed XDS processes. '''Typically, you don't specify NUMBER_OF_IMAGES_IN_CACHE'''.
# XDS with the MAXIMUM_NUMBER_OF_JOBS and CLUSTER_NODES keywords can use [[Performance#Cluster|several machines]]. This requires some setup as explained at the bottom of [http://xds.mpimf-heidelberg.mpg.de/html_doc/downloading.html].
# Hyperthreading (SMT), if available, is often beneficial. A "virtual" core has only about 20% of the performance of a "physical" core, but it comes at no cost: you just have to switch it on in the BIOS of the machine.
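The advice on matching SPOT_RANGEs to JOBs and balancing the two keywords can be sketched as an XDS.INP fragment. The frame numbers and core counts below are purely illustrative, not a recommendation for any particular detector or machine:

```
! illustrative: a 3600-frame dataset on a 16-core machine
MAXIMUM_NUMBER_OF_JOBS=4       ! independent XDS processes
MAXIMUM_NUMBER_OF_PROCESSORS=4 ! cores per process; 4 x 4 = 16 cores
DATA_RANGE=1 3600
! one SPOT_RANGE per JOB, with gaps in between, so that no reflection
! extends over a border between JOBs and gets listed twice in SPOT.XDS
SPOT_RANGE=1 850
SPOT_RANGE=951 1750
SPOT_RANGE=1851 2650
SPOT_RANGE=2751 3550
```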
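For the multi-machine case, a minimal XDS.INP sketch might look like the following; the host names are placeholders, and the passwordless-ssh and shared-filesystem setup described in the linked XDS notes is assumed to be in place:

```
! hypothetical: two hosts reachable by passwordless ssh, sharing the
! processing directory via a common filesystem
CLUSTER_NODES=node1 node2
MAXIMUM_NUMBER_OF_JOBS=4   ! JOBs are distributed over the listed nodes
```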
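A back-of-the-envelope calculation shows why the default NUMBER_OF_IMAGES_IN_CACHE strains a low-memory machine. The bytes-per-frame figure below is an assumption for this sketch (roughly 18 million pixels per Eiger 16M frame at an assumed 4 bytes per decompressed pixel), not a number from the XDS documentation:

```python
# Default cache size: DELPHI/OSCILLATION_RANGE + 1 frames.
delphi = 5.0             # DELPHI in degrees
oscillation_range = 0.1  # degrees per frame (fine-sliced data)
images_in_cache = int(delphi / oscillation_range) + 1  # 51 frames

# Assumed decompressed frame size (hypothetical estimate, see lead-in).
bytes_per_frame = 18_000_000 * 4
cache_gb = images_in_cache * bytes_per_frame / 1e9
print(images_in_cache, round(cache_gb, 1))  # several GB per JOB
```

With MAXIMUM_NUMBER_OF_JOBS greater than 1, each JOB holds its own cache, which is why an 8 GB machine can run out of memory.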