== Scaling many datasets ==
The program has no internal limit for the number of datasets. However, many items are calculated for ''each pair of datasets''. This results in some component of the CPU time being quadratic in the number of datasets. Nevertheless, it is perfectly possible to scale >1000 datasets.
If this is too slow, one can use an ''incremental'' approach:
# Scale the first (say) 100 datasets together, including any low-resolution datasets
# Go back to step 2 as long as you have more datasets
This should reduce the wallclock time: each step simply adds datasets to an ever-growing merged dataset. A nice side effect is that you can monitor how the completeness grows with each step; once good completeness is obtained, CC1/2 should start growing.
Example showing the principle:
... | ... | ||
</nowiki> | </nowiki> | ||
=== Possible problems in scaling many datasets ===
XSCALE may finish with the error message "!!! ERROR !!! INSUFFICIENT NUMBER OF COMMON STRONG REFLECTIONS". This usually indicates that one or more datasets have too few reflections. Please inspect the table
<nowiki> | |||
DATA MEAN REFLECTIONS INPUT FILE NAME | |||
SET# INTENSITY ACCEPTED REJECTED | |||
</nowiki> | |||
and check the column "ACCEPTED REFLECTIONS". Then remove the dataset(s) with the fewest accepted reflections, and re-run the program. Repeat if necessary.
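The weeding step above can be scripted. The following is a minimal sketch that ranks datasets by their accepted-reflection counts from the table shown above; the exact column layout of XSCALE.LP is an assumption here, so adjust the parsing if your XSCALE version prints the table differently.

```python
# Sketch: rank datasets by accepted reflections, as printed in the
# DATA/MEAN/REFLECTIONS table of XSCALE.LP (column layout assumed:
# SET#  MEAN-INTENSITY  ACCEPTED  REJECTED  FILE-NAME).

def fewest_accepted(table_text, n=3):
    """Return up to n (file name, accepted) pairs, fewest accepted first."""
    rows = []
    for line in table_text.splitlines():
        fields = line.split()
        if len(fields) >= 5 and fields[0].isdigit():
            rows.append((fields[4], int(fields[2])))
    rows.sort(key=lambda r: r[1])
    return rows[:n]

# hypothetical table body for illustration
example = """\
    1   1234.5   15678     12  set01.HKL
    2    987.6      42      3  set02.HKL
    3   1111.1    9876      7  set03.HKL
"""
print(fewest_accepted(example, n=1))  # -> [('set02.HKL', 42)]
```

One would then delete the corresponding INPUT_FILE line(s) from XSCALE.INP and re-run.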
XSCALE may also finish with the error message "!!! ERROR !!! INACCURATE SCALING FACTORS". This usually indicates that one or more datasets are linearly dependent on others (this happens if the ''same'' data are included more than once as INPUT_FILE), or are pure noise. I have an experimental version of XSCALE that prints out the numbers of these bad datasets.
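The duplicated-INPUT_FILE cause can be checked mechanically. A minimal sketch, comparing paths as plain strings (an assumption: symlinks or different relative paths pointing at the same file would not be caught):

```python
# Sketch: flag files listed more than once as INPUT_FILE in XSCALE.INP,
# one possible cause of the INACCURATE SCALING FACTORS error.

from collections import Counter

def duplicate_input_files(inp_text):
    names = [line.split("=", 1)[1].strip()
             for line in inp_text.splitlines()
             if line.strip().upper().startswith("INPUT_FILE")]
    return [name for name, count in Counter(names).items() if count > 1]

# hypothetical XSCALE.INP fragment for illustration
example = """\
OUTPUT_FILE=merged.ahkl
INPUT_FILE=set01/XDS_ASCII.HKL
INPUT_FILE=set02/XDS_ASCII.HKL
INPUT_FILE=set01/XDS_ASCII.HKL
"""
print(duplicate_input_files(example))  # -> ['set01/XDS_ASCII.HKL']
```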
== A hint for long-time XSCALE users ==