== INIT ==
=== The background range is defined as the first 5 degrees by default. Is the assumption that the background (used in INTEGRATE?) remains constant over the entire dataset valid? Are the spots over the entire dataset still background-corrected based on the first 5 degrees? ===


No. The INIT step uses the first 5 degrees (by default) for a number of purposes (check out the files written by INIT!). BKGINIT.cbf is used for scaling purposes and beamstop masking (in DEFPIX); the ''real'' background calculation uses only those frames that contain the reflections being integrated.
 
=== I can see many diffraction spots in BKGINIT.cbf when BACKGROUND_RANGE is set to the default (the first 5 degrees of images). When this value is set large (e.g. 50 degrees), no diffraction spots can be seen. It seems that only the large value produces a "real" background image, yet both scenarios produce nearly the same results (data statistics). Do I need to always change BACKGROUND_RANGE so that there are no diffraction spots in BKGINIT.cbf? ===
 
No. Again, BKGINIT.cbf is used only for scaling purposes and for masking the beamstop (in DEFPIX); the ''real'' background calculation uses only those frames that contain the reflections being integrated. Normally you don't need to worry about BACKGROUND_RANGE; just use the default! You only need to increase BACKGROUND_RANGE if the background is so low that a) even 5° of data produces many zero pixels in BKGINIT.cbf (highly unlikely), or b) shadowed parts of the detector are not visible in BKGINIT.cbf when viewed with xds-viewer.
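
If you nevertheless want a larger background range, it is a single keyword in XDS.INP; as a sketch (the frame numbers are hypothetical and depend on your oscillation range):

 BACKGROUND_RANGE= 1 50  ! use the first 50 frames for BKGINIT.cbf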


== COLSPOT ==
=== How does one define the beam stop shadow? Is it possible to develop a simple method, e.g. the ignore circle and ignore rectangle of HKL2000? ===


[http://xds.mpimf-heidelberg.mpg.de/html_doc/xds_parameters.html#VALUE_RANGE_FOR_TRUSTED_DETECTOR_PIXELS= VALUE_RANGE_FOR_TRUSTED_DETECTOR_PIXELS=] masks shaded portions, and variation of its first parameter (values between 6000 and 9000) may be used to obtain the desired result (check with "XDS-Viewer BKGPIX.cbf"). [http://xds.mpimf-heidelberg.mpg.de/html_doc/xds_parameters.html#UNTRUSTED_RECTANGLE= UNTRUSTED_RECTANGLE=] is probably the same as is available in HKL2000. [http://xds.mpimf-heidelberg.mpg.de/html_doc/xds_parameters.html#UNTRUSTED_ELLIPSE= UNTRUSTED_ELLIPSE=] is also available, and there's also [http://xds.mpimf-heidelberg.mpg.de/html_doc/xds_parameters.html#UNTRUSTED_QUADRILATERAL= UNTRUSTED_QUADRILATERAL=]. Furthermore, you could set [http://xds.mpimf-heidelberg.mpg.de/html_doc/xds_parameters.html#MINIMUM_VALID_PIXEL_VALUE= MINIMUM_VALID_PIXEL_VALUE] such that all shaded pixels are below that number, and all background values above it.
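
For illustration, such masking keywords could look like this in XDS.INP (all numbers are hypothetical and must be adapted to your detector; check the result with "XDS-Viewer BKGPIX.cbf" after each change):

 VALUE_RANGE_FOR_TRUSTED_DETECTOR_PIXELS= 7000. 30000.  ! raise the first value until the shadow is masked
 UNTRUSTED_RECTANGLE= 1000 1100  500  600               ! x1 x2 y1 y2, in pixels
 UNTRUSTED_ELLIPSE=    950 1050  950 1050               ! bounding box of the ellipse, in pixels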


== INTEGRATE ==
=== Is it possible to exclude images inside the frame range from integration? ===


Use EXCLUDE_DATA_RANGE. It is also possible to simply remove the images from the directory or rename them, or to create a directory with symlinks to only those files you want XDS to use.
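
For example (the frame numbers are hypothetical), the keyword may be repeated in XDS.INP:

 EXCLUDE_DATA_RANGE= 51 60    ! skip frames 51 to 60
 EXCLUDE_DATA_RANGE= 101 105  ! the keyword may be given several times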


== CORRECT ==
 
=== reducing WFAC1 below its default of 1 improves my data, right? ===
 
Actually, most likely not. Outlier rejection is a tricky business. Reducing WFAC1 ''will'' improve the numbers in the tables, but the merged (averaged) data will most likely be ''worse'' instead of better. In other words, the ''precision'' of the data (this is what the tables report!) may appear better, but the ''accuracy'' (this is what you want!) will degrade.
 
To expand on this: The default of WFAC1=1 usually rejects about 1% of all observations. If even more should be rejected, you should be able to give very good reasons for this: for example, ice rings or the presence of multiple lattices.
 
Personally, I have never reduced WFAC1 below 1, but [[2QVO.xds|sometimes]] find that increasing it gives me better data.
 
=== Can one apply the corrections in CORRECT without deleting outliers (i.e. preparing for a later program e.g. SCALA to do outlier rejection)? ===


You could set WFAC1 to a higher value, like 2 (the default is 1).
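
In XDS.INP this is a one-line change (the value 2.0 is just an example):

 WFAC1= 2.0  ! default is 1.0; larger values reject fewer outliers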
 xscale_par
XSCALE.LP now has the table for the I > 3sigma observations (even if the header of the table says that it's for the SIGNAL/NOISE >= -3.0 data)
== XSCALE ==
=== Just wondering if the XDS_ASCII.HKL file needs to be scaled using XSCALE, or is that not required? ===
It is not required. XDS_ASCII.HKL is already scaled in the CORRECT step. If you want to scale two (or more, even >1000!) XDS_ASCII.HKL files together, or if you want to use zero-dose extrapolation, or to determine resolution shells yourself, then you need [[XSCALE]].
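
A minimal XSCALE.INP for scaling two datasets together could look like this (the file names are hypothetical):

 OUTPUT_FILE= scaled.ahkl
 INPUT_FILE= ../set1/XDS_ASCII.HKL
 INPUT_FILE= ../set2/XDS_ASCII.HKL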


== other questions ==
 
=== any comments on compatibility with the Pilatus detector? ===


=== the URL for detector templates on slide 6 denies access ===


http://xds.mpimf-heidelberg.mpg.de/html_doc/xds_prepare.html


=== for an anomalous dataset, if there is radiation damage, how do I find out where best to cut off the frames for processing? ===
=== Could you give recommendations for MAD data processing, to get the best anomalous signal? How should the 3 datasets be processed with the same orientation matrix? Should we use a reference dataset? ===


First question: Scale them in a single XSCALE run, using one OUTPUT_FILE for each wavelength. See also this wiki: [[Optimisation]] and [[Tips and Tricks]]. Second question: yes, you may use a REFERENCE_DATA_SET - this simplifies getting the right setting in space groups like P3, P4 and so on.
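
A sketch of such an XSCALE.INP (directory and file names are hypothetical): one OUTPUT_FILE per wavelength keeps the wavelengths in separate files while scaling them together.

 OUTPUT_FILE= peak.ahkl
 FRIEDEL'S_LAW= FALSE         ! keep anomalous pairs separate
 INPUT_FILE= ../peak/XDS_ASCII.HKL
 OUTPUT_FILE= inflection.ahkl
 FRIEDEL'S_LAW= FALSE
 INPUT_FILE= ../inflection/XDS_ASCII.HKL
 OUTPUT_FILE= remote.ahkl
 FRIEDEL'S_LAW= FALSE
 INPUT_FILE= ../remote/XDS_ASCII.HKL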


== See also ==


[[Problems]]