=== Finally solving the structure ===

After thinking about the most likely way that James Holton used to produce the simulated data, I hypothesized that within each frame the radiation damage is most likely constant, and that there is a jump in radiation damage from frame 1 to frame 2. Unfortunately for this scenario, the scaling algorithm in CORRECT and XSCALE was changed in the Dec-2010 version such that it produces the best results when the changes are smooth. Therefore, I tried the penultimate (May-2010) version of XSCALE - and indeed that gives significantly better results:

       NOTE:      Friedel pairs are treated as different reflections.

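For orientation, a minimal XSCALE.INP for such a run could look like the following sketch (the file names are placeholders, not the ones actually used for this dataset); the FRIEDEL'S_LAW=FALSE setting is what produces the NOTE above:

 ! minimal XSCALE.INP sketch - file names are placeholders
 OUTPUT_FILE=1g1c_scaled.ahkl
 FRIEDEL'S_LAW=FALSE         ! keep Friedel pairs separate
 INPUT_FILE=XDS_ASCII.HKL    ! reflections from the CORRECT step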
Using these data (stored in [ftp://turn5.biologie.uni-konstanz.de/pub/xds-datared/1g1c/xscale.oldversion]), I was finally able to solve the structure (see screenshot below) - SHELXE traced 160 out of 198 residues. All files produced by SHELXE are in [ftp://turn5.biologie.uni-konstanz.de/pub/xds-datared/1g1c/shelx].

[[File:1g1c-shelxe.png]]
 
It is worth mentioning that James Holton confirmed this hypothesis; he also says that this approach is a good approximation for a multi-pass data collection.
 
However, for real data the smooth scaling (which also applies to the absorption correction and the detector modulation) generally gives better results than the previous method of assigning the same scale factor to all reflections of a frame; in particular, it correctly treats reflections near the border of two frames.
 
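As a purely conceptual illustration (this is not the exact formula used by CORRECT/XSCALE): consider a reflection whose centroid falls right at the border between frames j and j+1, and assume per-frame scale factors of 1.00 and 1.10 for these two frames.

 per-frame scaling:  the reflection is assigned either 1.00 or 1.10, depending on which frame it is attributed to
 smooth scaling:     the scale factor is evaluated at the reflection's actual position, i.e. close to (1.00 + 1.10)/2 = 1.05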
Phenix.refine against these data gives:
 Start R-work = 0.3449, R-free = 0.3560
 Final R-work = 0.2194, R-free = 0.2469
which is only 0.15%/0.10% better in R-work/R-free than the previous best result (see above).
 
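For reference, such a refinement can be started with a plain phenix.refine run along the lines of the sketch below; the file names are placeholders, the scaled data would first have to be converted to a format phenix.refine accepts (e.g. MTZ), and the parameters actually used may well differ:

 # hypothetical command line - file names are placeholders
 phenix.refine 1g1c_model.pdb 1g1c_scaled.mtz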
This example shows that it is important to
* have the best data available if a structure is difficult to solve
* know the options (programs, algorithms)
* know as much as possible about the experiment