2QVO.xds

* STRICT_ABSORPTION_CORRECTION=TRUE - this is useful if the chi^2 values of the three scaling steps in CORRECT.LP are 1.5 or higher, which is not the case here. Consequently, this also had no clear effect.
* increasing MAXIMUM_ERROR_OF_SPOT_POSITION from its default of 3 to 3 * (STANDARD DEVIATION OF SPOT POSITION (PIXELS)), which here would mean increasing it to 5: no clear effect.
* increasing WFAC1: this was suggested by the number of misfits, which is clearly higher than the usual 1 % of observations. WFAC1=1.5 indeed has a very positive effect on SHELXD: for dataset 1, the best CC All/Weak becomes '''44.93 / 22.82''' (dataset 2: '''48.11 / 27.78'''), and the number of successful trials goes from about 60% to 91% (dataset 2: 94%). '''One should note that all internal quality indicators get worse when increasing WFAC1 - but the external ones got significantly better!''' The number of misfits with WFAC1=1.5 dropped to 196 / 436 for datasets 1 and 2, respectively (see the input-file sketch after this list).
* MERGE=FALSE vs MERGE=TRUE in XDSCONV.INP: after finding out about WFAC1, I tried MERGE=FALSE (the default!) and it turned out to be a bit better - best CC All/Weak '''48.66 / 28.05''' for dataset 2. On the other hand, the number of successful trials went down to 77% (from 94%). This result is somewhat difficult to interpret, but I like MERGE=TRUE better.
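As an illustration of where these parameters go, here is a hedged sketch of the relevant input-file fragments; the surrounding file contents and the output file name are assumptions and not copied from the actual runs, and the FRIEDEL'S_LAW= line is simply the usual setting for anomalous data:

 ! XDS.INP fragment (WFAC1 and the absorption correction take effect in the CORRECT step)
 JOB= CORRECT
 WFAC1= 1.5                             ! relaxes the rejection of misfits (outliers)
 !STRICT_ABSORPTION_CORRECTION= TRUE    ! only if the chi^2 values in CORRECT.LP are ~1.5 or higher
 !MAXIMUM_ERROR_OF_SPOT_POSITION= 5.0   ! i.e. 3 * STANDARD DEVIATION OF SPOT POSITION; no clear effect here

and, for the conversion to SHELX format:

 ! XDSCONV.INP fragment (output file name is illustrative)
 INPUT_FILE= XDS_ASCII.HKL
 OUTPUT_FILE= temp.hkl SHELX
 FRIEDEL'S_LAW= FALSE      ! keep the anomalous differences for SHELXC/D/E
 MERGE= TRUE               ! MERGE=FALSE is the default; both were compared above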


We may thus conclude that in this case the rejection of misfits beyond the target value of 1% reduces data quality significantly. In (other) desperate cases, when SHELXD produces no successful trials, it may well be worth trying WFAC1=1.5, provided the number of misfits is high.
 
We also learn that it is usually ''not'' going to help much to deviate from the defaults (MERGE=, MAXIMUM_ERROR_OF_SPOT_POSITION=, STRICT_ABSORPTION_CORRECTION=) unless there is a clear reason (such as a high number of misfits) to do so!


===structure solution===
The resolution limit for SHELXD could be varied. For SHELXE, the solvent content, the number of autobuilding cycles, and probably also the high-resolution cutoff could be varied. Furthermore, it would be advantageous to "re-cycle" the file j.hat to j_fa.res, since the heavy-atom sites from SHELXE are more accurate than those from SHELXD, because the phases derived from the poly-Ala traces are quite good (compare the density columns of the two consecutive heavy-atom lists!).
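A minimal sketch of such a re-cycling run, assuming the SHELXC/D/E project is called j (as suggested by the file names above); the solvent content and cycle numbers are placeholders, not the values actually used here:

 # first SHELXE run (flags as in the SHELX documentation; numeric values are placeholders)
 shelxe j j_fa -s0.45 -h -a3 -m20   # -s solvent fraction, -a autotracing cycles,
                                    # -m density-modification cycles, -h anomalous scatterers are part of the structure
 cp j.hat j_fa.res                  # SHELXE writes its (more accurate) heavy-atom sites to j.hat
 shelxe j j_fa -s0.45 -h -a3 -m20   # repeat with the improved substructure

The SHELXD resolution limit itself would be changed on the SHEL line of j_fa.ins (as written by SHELXC) before re-running shelxd.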


==Limits==
With the optimally-reduced dataset 2, I get from SHELXE:
 Density (in map sigma units) at input heavy atom sites
  Site    x       y       z      occ*Z   density
    1   0.3361  0.9695  0.9827  16.0000    24.15
    2   0.3708  1.1540  1.0380  14.5216    17.48
    3   0.1576  1.2210  1.1222   9.2848    12.60
    4   0.4807  1.1304  1.0314   7.2224     8.95
    5   0.4539  1.1750  1.0368   6.6224     7.26
 
  Site    x       y       z     h(sig)  near old   near new
    1   0.3380  0.9687  0.9828   24.3   1/0.11   6/2.40  2/10.33  4/11.42  4/11.81
    2   0.3732  1.1546  1.0426   18.1   2/0.23   5/4.00  4/5.67   6/9.92   1/10.33
    3   0.1637  1.2180  1.1226   13.5   3/0.36   2/12.06 5/15.47  6/15.97  1/17.12
    4   0.4784  1.1371  1.0333    9.3   4/0.38   5/2.89  2/5.67   1/11.42  1/11.81
    5   0.4439  1.1791  1.0300    9.0   5/0.64   4/2.89  2/4.00   6/12.54  1/12.64
    6   0.3273  0.9734  1.0393   -5.9   1/2.38   1/2.40  2/9.92   4/11.82  4/11.86
 
so the density is better, but not by much. Furthermore, we note in passing that the number of anomalous scatterers (5) matches the 4 Met plus 1 Cys (i.e. 5 sulfur atoms) in the sequence.
 
==Going to the limits==
 
With dataset 2, I tried to use only the first 270 frames. In an earlier attempt (November 2009, with the version of XDS available then) the structure could not be solved this way with the above SHELXC/D/E approach (not even with MAXIMUM_ERROR_OF_SPOT_POSITION=6.0), and 315 frames were needed. With the current XDS, however, the first 270 frames are indeed enough (with WFAC1=1.5): 85 residues in a single chain, with "CC for partial structure against native data =  47.51 %".


With 180 frames, it was possible to get a complete model by (twice) re-cycling the j.hat file to j_fa.res. '''This means that the structure can be automatically solved just from the first 180 frames of dataset 2!'''
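For reference, a minimal sketch of how the frame range would be restricted for such a test, assuming an otherwise unchanged XDS.INP (the keyword values other than the ranges are illustrative, not the actual input used here):

 ! XDS.INP fragment: process only the first 180 frames of dataset 2
 JOB= XYCORR INIT COLSPOT IDXREF DEFPIX INTEGRATE CORRECT
 DATA_RANGE= 1 180
 SPOT_RANGE= 1 180          ! frames used for the spot search / indexing
 BACKGROUND_RANGE= 1 180
 WFAC1= 1.5                 ! as found above

followed by XDSCONV and the same SHELXC/D/E steps (including, for 180 frames, the j.hat re-cycling) as described above.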