Xscale

== Simple and advanced usage ==

[http://xds.mpimf-heidelberg.mpg.de/~kabsch/xds/html_doc/xscale_parameters.html XSCALE] is the stand-alone scaling program of the XDS suite. It scales reflection files (typically called XDS_ASCII.HKL) produced by XDS. Since the CORRECT step of XDS ''already scales'' an individual dataset, XSCALE is only ''needed'' if several datasets are to be scaled relative to one another. However, it does not deteriorate (over-fit) a dataset that is "scaled again" in XSCALE, since the supporting points of the scale factors are at the same positions in detector and batch space.


One advantage of using XSCALE for a single dataset is that the user can specify the number and limits of the resolution shells. Another is that zero-dose extrapolation can be done.
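
For example, a minimal sketch of an XSCALE.INP for re-scaling a single dataset with user-defined resolution shells could look as follows (the output file name and the shell limits are placeholders):

 OUTPUT_FILE=temp.ahkl                       ! placeholder name for the scaled output file
 INPUT_FILE= XDS_ASCII.HKL                   ! reflection file written by the CORRECT step of XDS
 RESOLUTION_SHELLS= 10 6 4 3 2.5 2.2 2.0     ! upper limits of the resolution shells (Angstrom)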
   
   
At the XDS website, there is a short and a long commented example of [http://xds.mpimf-heidelberg.mpg.de/html_doc/INPUT_templates/XSCALE.INP XSCALE.INP].


----
 FRIEDEL'S_LAW=FALSE
 STRICT_ABSORPTION_CORRECTION=TRUE        ! see XDSwiki:Tips_and_Tricks
 ! the star in front of the file name indicates that it is the reference wrt falloff
 INPUT_FILE= *../fae-rh/xds_2/XDS_ASCII.HKL
 FRIEDEL'S_LAW=FALSE
 STRICT_ABSORPTION_CORRECTION=TRUE
* DOSE_RATE=                    ! (optional for radiation damage correction f.i.r.)
* 0-DOSE_SIGNIFICANCE_LEVEL=    ! (optional for radiation damage correction f.i.r.)
* SAVE_CORRECTION_IMAGES=        ! Default is TRUE. If FALSE, don't write DECAY*.cbf MODPIX*.cbf ABSORP*.cbf


== Radiation damage correction ==
Another example: by defining STARTING_DOSE=-78.* one would tell the program to calculate, by interpolation, the intensity values that would be obtained near frame 78.
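
For illustration, a minimal sketch of the corresponding part of XSCALE.INP (the path and the numerical values are placeholders, and it is assumed here that these per-dataset keywords follow the INPUT_FILE they refer to, as in the example further up):

 INPUT_FILE= ../set1/XDS_ASCII.HKL     ! placeholder path
 STARTING_DOSE= 0.0                    ! placeholder value
 DOSE_RATE= 1.0                        ! placeholder value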


== Scaling many datasets ==


The program has no internal limit on the number of datasets. However, several quantities are calculated for ''each pair of datasets'', so one component of the CPU time grows quadratically with the number of datasets. Nevertheless, it is perfectly possible to scale >1000 datasets.


If this is too slow, one could use an ''incremental'' way:
# Scale the first (say) 100 datasets together, including any low-resolution datasets
# For the next step, use the OUTPUT_FILE from the previous step as the first INPUT_FILE (with WFAC1=2), and the next (say) 99 datasets as further INPUT_FILEs.
# Go back to step 2 as long as you have more datasets
 
This should reduce the wallclock time, and it simply keeps adding datasets to an ever-growing merged dataset. The nice thing is that you can monitor how the completeness grows with each step; once good completeness is obtained, CC1/2 should start growing.
 
Example showing the principle:
<nowiki>
# first step
cat <<EOF>XSCALE.INP
OUTPUT_FILE=1to100.ahkl
INPUT_FILE=../1/XDS_ASCII.HKL
INPUT_FILE=../2/XDS_ASCII.HKL
! insert lines for INPUT_FILEs 3..100
EOF
xscale_par
mv XSCALE.LP XSCALE_1to100.LP
# second step
cat <<EOF>XSCALE.INP
OUTPUT_FILE=1to200.ahkl
INPUT_FILE=1to100.ahkl
WFAC1=2    ! avoid rejecting more outliers
INPUT_FILE=../101/XDS_ASCII.HKL
INPUT_FILE=../102/XDS_ASCII.HKL
! insert lines for INPUT_FILEs 103..200
EOF
xscale_par
mv XSCALE.LP XSCALE_1to200.LP
# third step
cat <<EOF>XSCALE.INP
OUTPUT_FILE=1to300.ahkl
INPUT_FILE=1to200.ahkl
WFAC1=2  ! avoid rejecting more outliers
INPUT_FILE=../201/XDS_ASCII.HKL
INPUT_FILE=../202/XDS_ASCII.HKL
! insert lines for INPUT_FILEs 203..300
EOF
xscale_par
mv XSCALE.LP XSCALE_1to300.LP
...
</nowiki>
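
The lines for the remaining INPUT_FILEs do not have to be typed by hand; a short shell loop can append them to XSCALE.INP, assuming the numbered directory layout ../1, ../2, ... used in the example above:

<nowiki>
# append the INPUT_FILE lines for datasets 3..100 to XSCALE.INP
for i in $(seq 3 100); do
  echo "INPUT_FILE=../$i/XDS_ASCII.HKL" >> XSCALE.INP
done
</nowiki>

Adjust the range accordingly in the later rounds (103..200, 203..300, and so on).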
 
=== Possible problems in scaling many datasets ===
XSCALE may finish with the error message "!!! ERROR !!! INSUFFICIENT NUMBER OF COMMON STRONG REFLECTIONS". This usually indicates that one or more datasets have too few reflections. Please inspect the table
 <nowiki>
DATA    MEAN      REFLECTIONS        INPUT FILE NAME
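
Independent of that table, a quick way to spot input files with very few reflections is to count the data records in each XDS_ASCII.HKL directly (header lines start with "!"); this sketch assumes the numbered directory layout used above:

<nowiki>
# print the number of reflection records and the file name, smallest counts first
for f in ../*/XDS_ASCII.HKL; do
  echo "$(grep -vc '^!' "$f")  $f"
done | sort -n | head
</nowiki>

Datasets with very few reflections can then be removed from XSCALE.INP.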