SSX

This article deals with how to process serial synchrotron crystallography (SSX) data.


The particular data we are processing are artificial and were prepared by James Holton. The files Illuin_microfocus_minimal_00[1-3].tar.bz2 can be [http://bl831.als.lbl.gov/example_data_sets/tarballs downloaded], and the data and problem are described on his [http://bl831.als.lbl.gov/~jamesh/challenge/microfocus microfocus challenge page] and in a [http://journals.iucr.org/d/issues/2019/02/00/ba5297/index.html paper].
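For convenience, the three archives can be fetched and unpacked from the command line. A minimal sketch, assuming the files sit directly in the tarballs directory linked above:
<pre>
# fetch and unpack the three archives named above
# (the exact directory layout on the server is an assumption)
for i in 001 002 003; do
  wget http://bl831.als.lbl.gov/example_data_sets/tarballs/Illuin_microfocus_minimal_${i}.tar.bz2
  tar -xjf Illuin_microfocus_minimal_${i}.tar.bz2
done
</pre>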


The challenges are
# partial data sets: each of the 100 data sets has only 3 good frames of 1° oscillation; later frames have strong radiation damage
# the crystals decay to about 1/2 within these 3 frames
# the b and c axes are the same length, but the simulated crystals are orthorhombic. This makes it difficult to index them consistently - it is wrong to just merge them in an orthorhombic space group without resolving the indexing ambiguity, because that yields a pseudo-tetragonal twinned merged data set (a minimal illustration of the ambiguity is given after this list).
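To make point 3 concrete: since b and c have the same length, each crystal can be indexed in two ways that are related by exchanging the b and c axes, and merging both modes as if they were one yields the pseudo-tetragonal, twinned data set mentioned above. The sketch below only illustrates what such a reindexing does to a reflection list (the file name is a placeholder); in the real workflow the operator, including any sign changes needed to keep a right-handed setting, is applied with the REIDX= keyword of XDS/XSCALE:
<pre>
# illustration only, not part of the actual processing: exchange the k and l columns of a
# whitespace-separated "h k l ..." reflection list to obtain the alternative indexing mode;
# header/comment lines starting with "!" are passed through unchanged
awk '/^!/ {print; next} {t=$2; $2=$3; $3=t; print}' reflections.hkl > reflections_other_mode.hkl
</pre>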


== Round 1: processing the data, and determining the space group ==
REIDX=0 1 0 0  0 0 1 0  1 0 0 0
</pre>
where the last line takes care of the shuffling of axes into the order k,l,h (after all, the XDS_ASCII.HKL files are in P1 with a,b,c of 38.3, 79.1, 79.1 Å), and obtain
<pre>
  SUBSET OF INTENSITY DATA WITH SIGNAL/NOISE >= -3.0 AS FUNCTION OF RESOLUTION
[[File:1g1c-94.png]]


(If the space group were correct, the result of [[xscale_isocluster]] should look similar to this:


[[File:Lyso-xscale-isocluster.png]]
[[File:Coot.png]]


and thus reveals two well separated clouds, corresponding to the two possible indexing modes of the data in an orthorhombic space group.
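xscale_isocluster writes, among its output, one XSCALE.''n''.INP per cluster; the XSCALE.1.INP used in the next step collects the data sets of the larger cloud. A quick way to check how many data sets it contains (a trivial sketch, assuming the file is in the working directory):
<pre>
# count the INPUT_FILE= lines that xscale_isocluster put into the cluster-1 input file
grep -c "INPUT_FILE" XSCALE.1.INP    # 51 for the larger cloud in this example
</pre>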


Using XSCALE.1.INP with its 51 XDS_ASCII.HKL, and FRIEDEL'S_LAW=TRUE, we get
and find that data sets 1 and 17 are wrongly included in the cloud of 51 data sets. Thus they are removed manually from XSCALE.INP.
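One way to do this removal non-interactively is sketched below; it assumes (which may not match the actual layout) that the INPUT_FILE= lines contain the data set number as a directory component such as 001/ or 017/:
<pre>
# comment out (with "!") the INPUT_FILE= lines of data sets 1 and 17 in XSCALE.INP;
# the /001/ and /017/ path components are assumptions about the directory naming
sed -i -e 's,^ *INPUT_FILE=.*/001/.*,! &,' -e 's,^ *INPUT_FILE=.*/017/.*,! &,' XSCALE.INP
</pre>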


After <code>xscale_isocluster -dim 2 -clu 1</code>,
 coot iso.pdb
now reveals a single cloud:
Thus we now know the space group.


== Round 2: using the REFERENCE_DATA_SET obtained from one cluster ==
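A hedged sketch of how such a reference could be obtained (the exact commands used for this article are not shown here, and file names are placeholders): the consistently indexed data sets of cluster 1 from Round 1 are merged with XSCALE, and the resulting reflection file is pointed to by REFERENCE_DATA_SET= in each re-processed data set's XDS.INP:
<pre>
# hedged sketch, not the exact procedure of this article: merge the cleaned cluster-1
# XSCALE.INP from Round 1 and keep its output as reference
xscale                         # writes the file named on the OUTPUT_FILE= line, e.g. cluster1.ahkl
# each data set's XDS.INP then gets a line like:  REFERENCE_DATA_SET=../cluster1.ahkl
</pre>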


The processing script integrate.rc is changed a bit, to a) use the REFERENCE_DATA_SET, b) prevent adjustment of variances by CORRECT (this should rather be done by XSCALE), c) allow some radiation damage correction in XSCALE:
<pre>
#!/bin/bash -f
The substructure (locating 4 Se with anomalous data to 3 Å) and the structure (198 residues) can now easily be solved with [[ccp4com:SHELX C/D/E|hkl2map]].
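hkl2map is a graphical front end to SHELXC/D/E; a rough command-line equivalent is sketched below. The cell (after the axis shuffle), the orthorhombic space group P2<sub>1</sub>2<sub>1</sub>2<sub>1</sub> and the 4 Se sites follow from what is said above; the file-name prefix, the input reflection file name, the solvent fraction and the cycle numbers are illustrative assumptions:
<pre>
# rough command-line equivalent of a hkl2map run; "ssx" and scaled.ahkl are placeholder names,
# -s (solvent fraction) and -m (density-modification cycles) are guesses
shelxc ssx <<EOF
SAD scaled.ahkl
CELL 79.1 79.1 38.3 90 90 90
SPAG P212121
FIND 4
EOF
shelxd ssx_fa                          # locate the 4 Se sites
shelxe ssx ssx_fa -s0.5 -h -a -m20     # density modification and autotracing; repeat with -i for the inverted substructure
</pre>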


== Result ==
=== SHELXC: anomalous CC<sub>1/2</sub> ===
[[File:Cc12ano.png]]
 
=== SHELXD: CC<sub>all</sub> ''versus'' CC<sub>weak</sub>, and histogram ===
[[File:Ccallcsccweak.png]]


[[File:Histcfom.png]]
 
=== SHELXE: contrast versus cycle, and PDB with structure ===
[[File:Contrastvscycle.png]]

