FAQ

Revision as of 18:29, 3 October 2015 by Kay (talk | contribs)

These are answers to questions that were asked concerning XDS, on the occasion of the webinar on Feb 25, 2010 (available at http://www.rigaku.com/node/1379 or http://www.rigaku.com/downloads/webinars/kay-diederichs/), or in emails. If possible, they are grouped here with the appropriate processing step.

INIT

The background range is defined as the first 5 degrees by default. Is the assumption that the background (used in INTEGRATE?) remains constant over the entire dataset valid? Are the spots over the entire dataset background-corrected based only on the first 5 degrees?

No. The INIT step uses the first 5 degrees (by default) for a number of purposes (check out the files written by INIT!). BKGINIT.cbf is essentially used only for scaling purposes; the real background calculation just requires those frames which contain the reflections that are integrated.

I can see many diffraction spots in BKGINIT.cbf when BACKGROUND_RANGE is at its default (the first 5 degrees of images). When this value is set large (e.g. 50 degrees), no diffraction spots can be seen. It seems that only the large value produces a "real" background image. But both scenarios produce nearly the same results (data statistics). Do I always need to change BACKGROUND_RANGE so that there are no diffraction spots in BKGINIT.cbf?

No. Again, BKGINIT.cbf is used only for scaling purposes and for masking the beamstop (in DEFPIX); the real background calculation just requires those frames which contain the reflections that are integrated. Normally you don't worry about BACKGROUND_RANGE; just use the default! You only need to increase BACKGROUND_RANGE if the background is so low that a) even 5° of data produce many zero pixels in BKGINIT.cbf (which is highly unlikely), or b) shadowed parts of the detector are not visible in BKGINIT.cbf when viewed with XDS-Viewer.
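If you do need to extend it, the XDS.INP line could look like this (the frame numbers are just an example; adjust to your oscillation width):

```
BACKGROUND_RANGE=1 100   ! e.g. the first 50 degrees at 0.5 deg/frame
```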

COLSPOT

All my crystals are split, and HKL2000 has a hard time indexing the unit cell using sequential frames. However, iMOSFLM does a really good job indexing using orthogonal frames. Can XDS index the unit cell using orthogonal frames?

Yes. You can define several SPOT_RANGE= keyword/parameter pairs, e.g. SPOT_RANGE=1 1 SPOT_RANGE=90 90. Most likely it will be even better with SPOT_RANGE=1 90. Also see Indexing.
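For two orthogonal wedges, the relevant XDS.INP lines could look like this (the frame numbers are only an example):

```
SPOT_RANGE=1 3       ! a few frames at the start of the sweep
SPOT_RANGE=90 92     ! a few frames roughly 90 degrees away
```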

IDXREF

What is LATTICE-CHARACTER in IDXREF.LP and CORRECT.LP ?

The definitive answer is in section 9.2.5, "Lattice characters", of International Tables Vol. A.

How accurate do ORGX, ORGY need to be? Can XDS optimize the beam position?

Ideally, the error should be less than half the minimum spot separation. If the error is larger than that, you'll have to inspect the table in IDXREF.LP which investigates alternative origin positions. XDS does optimize the beam position (or rather the beam direction, but that determines the beam position).

Can I use xdisp (part of denzo) to find the beamstop position, and use it for XDS?

Yes. Unfortunately, the conventions of the different data reduction programs are not the same, but there is always a simple transformation between them, like x' = y; y' = x (which often works!) or some such. The transformation may be different for different detectors. For a given detector, you can easily work out the transformation yourself, by noting the values from xdisp and comparing them with those from adxv or XDS-Viewer. After that, you can always find the beamstop position with xdisp, and put the transformed values into XDS.INP.
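As a trivial sketch (the coordinate values, and the swap x' = y, y' = x, are only an example; your detector may need a different transformation):

```shell
# Swap xdisp beam-centre coordinates (x' = y, y' = x) for use as ORGX/ORGY
echo "1535.2 1528.7" | awk '{ printf "ORGX=%s ORGY=%s\n", $2, $1 }'
```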

DEFPIX

How does one define the beam stop shadow? Is it possible to develop a simple method, e.g. the ignore circle and ignore rectangle of HKL2000?

VALUE_RANGE_FOR_TRUSTED_DETECTOR_PIXELS= masks shaded portions, and variation of its first parameter (values between 6000 and 9000) may be used to obtain the desired result (check with "XDS-Viewer BKGPIX.cbf"). UNTRUSTED_RECTANGLE= is probably the same as what is available in HKL2000. UNTRUSTED_ELLIPSE= is also available, and so is UNTRUSTED_QUADRILATERAL=. Furthermore, you could set MINIMUM_VALID_PIXEL_VALUE= such that all shaded pixels are below that number, and all background values above it.
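For example (all pixel coordinates below are made up; adjust them to your own beamstop shadow), the XDS.INP lines could look like:

```
VALUE_RANGE_FOR_TRUSTED_DETECTOR_PIXELS=7000 30000
UNTRUSTED_RECTANGLE=1100 1300    0 1250   ! x1 x2 y1 y2 covering the beamstop arm
UNTRUSTED_ELLIPSE=  1150 1250 1200 1300   ! x1 x2 y1 y2 bounding the beamstop disc
```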

INTEGRATE

What about overlapping reflections in XDS?

If I understand your question correctly: reflections that occur at the same position of the detector, and on the same frame, are not deconvoluted by XDS (there are programs which may be able to do this, but I have not used them so far, and cannot comment on them). Reflections that are either spatially (x,y) or rotationally (in phi) separated (even if only by a small distance) are not a problem; they are treated quite adequately by XDS (read about the pixel-labelling method!).

Is it possible to exclude images inside the frame range from integration?

Just remove them from the directory, or change their names. Or create a directory with symlinks only to those files you want XDS to use.
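A minimal sketch of the symlink approach (the frame names and numbers are hypothetical; here frames 45-50 are excluded from a 90-frame sweep):

```shell
# Symlink frames 1-90, leaving out 45-50, into a separate directory;
# then point NAME_TEMPLATE_OF_DATA_FRAMES= in XDS.INP at frames_subset/
mkdir -p frames_subset
for i in $(seq -f "%03g" 1 90); do
  case $i in
    04[5-9]|050) ;;                                      # skip frames 45-50
    *) ln -s "$PWD/frames/img_${i}.cbf" frames_subset/ ;;
  esac
done
```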

CORRECT

Reducing WFAC1 below its default of 1 improves my data, right?

Actually, most likely not. Outlier rejection is a tricky business. Reducing WFAC1 will improve the numbers in the tables, but the merged (averaged) data will most likely be worse instead of better. In other words, the precision of the data (this is what the tables report!) may appear better, but the accuracy (this is what you want!) will degrade.

To expand on this: The default of WFAC1=1 usually rejects about 1% of all observations. If even more should be rejected, you should be able to give very good reasons for this: for example ice rings or presence of multiple lattices.

Personally, I have never reduced WFAC1 below 1, but sometimes find that increasing it gives me better data.

Can one apply the corrections in CORRECT without deleting outliers (i.e., leaving outlier rejection to a later program, e.g. SCALA)?

You could set WFAC1 to a higher value, like 2 (the default is 1).

Is there a way to automatically set the high resolution limit based on an I/sigma cutoff?

No. Different purposes require different cutoffs: refinement and MR may be done with data down to 1 or 2 sigma; sulfur SAD needs 30 sigma for finding the S sites.

How do you interpret the DECAY.cbf file?

Using XDS-Viewer: along the x direction you find the "integration batches" (first frames to the left, last frames to the right). Along the y direction you find the resolution ranges (bottom: low resolution; top: high resolution). The shades of gray correspond to numbers; move the mouse across them to read the numbers, which are simply the scale factors multiplied by 1000.

It would be nice to have a table with R-factors for each image, to identify crystal damage, i.e. a table of R-factor versus image number in CORRECT.LP.

XDSSTAT produces this table.

How do I tell whether the processed data have a good anomalous signal?

Look at the anomalous correlation: if it is > 90% at low resolution, it is definitely good. Down to 30%, the signal is still useful, according to T. Schneider and G. Sheldrick (2002).

What is the best way to create mtz files? With the Xscale/xdsconv I often obtain intensity values that are marked *** stars - probably overshooting a formatting limit.

Do you mean the *** stars appear in the output of mtzdump? That would be a problem of the mtzdump output only, and irrelevant for the use of the MTZ file. The XDS/XSCALE/XDSCONV route should not produce these stars (the proper Fortran formats are used throughout).

How does one check that ice rings have been excluded correctly? (I.e., is there a visual way of checking?)

Look at the Wilson plot, and the list of outliers at the bottom of CORRECT.LP.

Why do the latest XDS/XSCALE versions only give a single table, with an I/sigma >= -3 cutoff?

This was changed in the May 2010 version. There is no keyword to get the old version of the tables back.

Short explanation: the reason this was changed is that only the signal/noise >= -3 table is meaningful, because it describes the data that are used "downstream" for structure solution and refinement. These are also the numbers that should go into "Table 1" of your paper. The -3*sigma cutoff ensures that strongly negative intensities - which of course should not exist - do not enter downstream steps.

Longer explanation: the -3 sigma cutoff is actually imposed by XDSCONV, not by XDS (which is why XDS_ASCII.HKL may also contain intensities that are below -3*sigma). XDSCONV performs the conversion from intensities to amplitudes (like TRUNCATE, it implements French G.S. and Wilson K.S. Acta. Cryst. (1978), A34, 517-525.). If e.g. only positive intensities were written out by XDS, and used by XDSCONV, then this would lead to wrong average intensities at high resolution, wrong conversion to amplitudes, wrong Wilson B factors, and ultimately to worse models.

Would it matter if the cutoff were not -3, but e.g. -4? Simple statistics tell us what percentage of reflections with intensity < -3*sigma to expect in a resolution shell that contains only noise (i.e. far beyond the useful resolution of your crystal): assuming a Gaussian distribution of intensities around zero, we expect (100-99.73)/2 % = 0.135% of reflections to be less than -3*sigma, and another 0.135% to be larger than 3*sigma. For a useful resolution shell, however, the expected percentage of reflections with intensity less than -3*sigma would be much lower. Thus, changing the cutoff (there is indeed a keyword for that - see the XDSCONV documentation) would not matter much.

By the way, SCALEPACK also uses a -3 sigma cutoff (see "SIGMA CUTOFF" at http://www.hkl-xray.com/hkl_web1/hkl/Scalepack_Keywords.html).

A workaround to get the statistics for just the I > 3*sigma observations is the following:

head -100 XDS_ASCII.HKL | grep \! | grep -v END_OF_DATA > temp.hkl
grep -v \! XDS_ASCII.HKL | awk '{if ($4/sqrt($5*$5)>3) print $0}' >> temp.hkl
echo \!END_OF_DATA >> temp.hkl

Warning - the following overwrites any existing XSCALE.INP and XSCALE.LP:

echo OUTPUT_FILE=temp.ahkl > XSCALE.INP
echo INPUT_FILE=temp.hkl >> XSCALE.INP
echo CORRECTIONS= >> XSCALE.INP
xscale_par

XSCALE.LP now has the table for the I > 3*sigma observations (even though the header of the table says it is for the SIGNAL/NOISE >= -3.0 data).

XSCALE

Just wondering: does the XDS_ASCII.HKL file need to be scaled using XSCALE, or is that not required?

It is not required. XDS_ASCII.HKL is scaled in the CORRECT step. If you want to scale two (or more, even >1000 !) XDS_ASCII.HKL files together, or if you want to use zero-dose extrapolation, or determine resolution shells yourself, then you need XSCALE.
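A minimal XSCALE.INP for scaling two datasets together could look like this (the file and directory names are just placeholders):

```
OUTPUT_FILE=merged.ahkl
INPUT_FILE=native1/XDS_ASCII.HKL
INPUT_FILE=native2/XDS_ASCII.HKL
```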

Other questions

Any comments on compatibility with the Pilatus detector?

Fully compatible, well tested, and in daily use (e.g. at the SLS in Villigen, Switzerland).

I cannot find generate_XDS.INP on the XDSwiki page.

http://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/Generate_XDS.INP

The URL for detector templates on slide 6 denies me access.

http://www.mpimf-heidelberg.mpg.de/~kabsch/xds/html_doc/xds_prepare.html

For an anomalous dataset with radiation damage, how do I find out where best to cut off the frames for processing?

There are no hard rules. One possibility is to look at the anomalous correlation printed in CORRECT.LP for subsets of the data. If those numbers drop after frame X, you should probably not use frames after X.

If one compares data integrated with the different programs (XDS, MOSFLM, HKL2000, ...): is there any difference in the final quality of the dataset?

Yes: because the programs implement different ideas, the results differ. For good datasets the differences are minor, but for bad datasets the differences may be large. Don't rely on anecdotal evidence only - try for yourself!

What about very low resolution data, in the 5 to 10 Å range, with a rather high mosaicity of a few degrees? Any experience processing such data with XDS?

Yes, it should be possible to process these data. You may want to specify

BEAM_DIVERGENCE=  BEAM_DIVERGENCE_E.S.D.= 
REFLECTING_RANGE=  REFLECTING_RANGE_E.S.D.= 

(take the values from INTEGRATE.LP after a first pass) in XDS.INP, and to use REFINE(INTEGRATE)= to refine only those geometrical parameters that actually change.

For challenging datasets, a single pass of data processing is often not enough - you'll have to experiment yourself.

You said that the XDS deals with high mosaicity. How high mosaicity is still manageable?

I don't have exact numbers. Maybe up to ten degrees?

By the way, mosaicity is defined by XDS as the sigma of a Gaussian representing a reflection's rocking curve. The relation between full-width-half-maximum (FWHM, used by MOSFLM for mosaicity definition) and sigma is: FWHM = sigma*2.355 . In other words, if MOSFLM reports a mosaicity of 0.47° and XDS reports 0.2°, then the two programs agree in their opinion about the rocking curve.
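The conversion can be checked with a one-liner (the 0.2° value is just the example from above):

```shell
# MOSFLM-style FWHM from an XDS mosaicity (sigma) of 0.2 degrees
awk 'BEGIN { sigma = 0.2; printf "FWHM = %.2f deg\n", sigma * 2.355 }'
```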

Could you give recommendations for MAD data processing, to get the best anomalous signal? And how does one process the 3 datasets with the same orientation matrix - should we use a reference dataset?

First question: scale them in a single XSCALE run, using one OUTPUT_FILE for each wavelength. See also this wiki: Optimisation and Tips and Tricks. Second question: yes, you may use a REFERENCE_DATA_SET= - this simplifies getting the right setting in space groups like P3, P4 and so on.
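Such an XSCALE.INP could look like this (the directory names are placeholders; note one OUTPUT_FILE per wavelength):

```
OUTPUT_FILE=peak.ahkl
INPUT_FILE=peak/XDS_ASCII.HKL
OUTPUT_FILE=inflection.ahkl
INPUT_FILE=inflection/XDS_ASCII.HKL
OUTPUT_FILE=remote.ahkl
INPUT_FILE=remote/XDS_ASCII.HKL
```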

See also

Problems