Comparison of home source and synchrotron properties

This is James Holton's explanation - too good to be buried in the CCP4BB archives:

Date: Tue, 12 Oct 2010 09:04:05 -0700

From: James Holton <jmholton@LBL.GOV>

Subject: Re: Lousy diffraction at home but fantastic at the synchrotron?

There are a few things that synchrotron beamlines generally do better than "home sources", but the most important are flux, collimation and absorption.

Flux is in photons/s and simply scales down the amount of time it takes to get a given number of photons onto the crystal. Contrary to popular belief, there is nothing "magical" about having more photons/s: it does not somehow make your protein molecules "behave" and line up in a more ordered way. However, it does allow you to do the equivalent of a 24-hour exposure in a few seconds (depending on which beamline and which home source you are comparing), so it can be hard to get your brain around the comparison.
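To put a number on that comparison, here is a quick back-of-the-envelope sketch in Python. The flux values are illustrative order-of-magnitude assumptions (a typical rotating anode vs. a typical beamline), not figures from the message above:

```python
# Exposure time for a fixed photon budget scales inversely with flux.
# Both flux values below are assumed, order-of-magnitude illustrations.
home_flux = 1e8          # photons/s onto the sample from a home source
synchrotron_flux = 1e12  # photons/s at a synchrotron beamline

photons_needed = home_flux * 24 * 3600  # dose of a 24-hour home exposure

t_synchrotron = photons_needed / synchrotron_flux
print(f"24 h at home ~ {t_synchrotron:.1f} s at the synchrotron")  # ~8.6 s
```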

Collimation, in a nutshell, is putting all the incident photons through the crystal, preferably in a straight line. Illuminating anything that isn't the crystal generates background, and background buries weak diffraction spots (also known as high-resolution spots). Now, when I say "crystal" I mean the thing you want to shoot, so this includes the "best part" of a bent, cracked or otherwise inhomogeneous "crystal". The amount of background goes as the square of the beam size, so a 0.5 mm beam can produce up to 25 times more background than a 0.1 mm beam (for a fixed spot intensity).
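The square-law arithmetic is worth making explicit; a minimal sketch, using the 0.5 mm and 0.1 mm figures from the paragraph above:

```python
# Background comes from everything the beam illuminates, so it scales with
# the beam's cross-sectional area, i.e. the square of the beam size. The
# diffraction from a small crystal does not grow with the beam.
def background_ratio(beam1_mm: float, beam2_mm: float) -> float:
    """Background from beam1 relative to beam2, at equal flux density."""
    return (beam1_mm / beam2_mm) ** 2

print(background_ratio(0.5, 0.1))  # 25.0 -- up to 25x more background
```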

Also, if the beam has high "divergence" (the range of incidence angles onto the crystal), then the spots on the detector will be more spread out than if the beam had low divergence, and the more spread out the spots are, the easier it is for them to fade into the background. Now, even at home sources, one can cut down the beam to have very low divergence and a very small size at the sample position, but this comes at the expense of flux.
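A rough small-angle sketch of why divergence spreads the spots; the formula and all numbers here are illustrative assumptions, not figures from the message:

```python
# Small-angle approximation: each spot's footprint on the detector is about
# the illuminated crystal size plus (detector distance x beam divergence).
# All numbers below are assumed for illustration.
def spot_size_mm(crystal_mm: float, distance_mm: float, divergence_mrad: float) -> float:
    return crystal_mm + distance_mm * divergence_mrad * 1e-3

print(f"{spot_size_mm(0.1, 200, 0.1):.2f} mm")  # low divergence:  0.12 mm spot
print(f"{spot_size_mm(0.1, 200, 2.0):.2f} mm")  # high divergence: 0.50 mm spot
```

Spreading the same diffracted photons over a footprint roughly four times wider brings each pixel's signal that much closer to the background level.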

Another tenet of "collimation" (in my book) is the DEPTH of non-crystal stuff in the primary x-ray beam that can be "seen" by the detector. This includes the air space between the "collimator" and the beam stop. One millimeter of air generates about as much background as 1 micron of crystal, water, or plastic. Some home sources have ridiculously large air paths (like putting the backstop on the detector surface), and that can give you a lot of background. As a rule of thumb, you want your air path in mm to be less than or equal to your crystal size in microns. In this situation, the crystal itself is generating at least as much background as the air, and so further reducing the air path has diminishing returns. For example, going from 100 mm air and 100 um crystal to completely eliminating air will only get you about a 40% reduction in background noise (it goes as the square root).
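The 40% figure follows from counting statistics; a minimal sketch using the rule of thumb above (1 mm of air ≈ 1 um of crystal in background terms):

```python
import math

# Rule of thumb from the text: 1 mm of air generates about as much background
# as 1 um of crystal, so both can be added in the same "background units".
# Shot noise in the background goes as the square root of the counts.
def relative_noise(air_mm: float, crystal_um: float) -> float:
    return math.sqrt(air_mm + crystal_um)

before = relative_noise(air_mm=100, crystal_um=100)  # 100 mm air, 100 um crystal
after  = relative_noise(air_mm=0,   crystal_um=100)  # air path eliminated
print(f"noise ratio: {before / after:.2f}")  # sqrt(2) ~ 1.41, the ~40% in the text
```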

Now, this rule of thumb also goes for the "support" material around your crystal: one micron of cryoprotectant generates about as much background as one micron of crystal. So, if you have a 10 micron crystal mounted in a 1 mm thick drop, and manage to hit the crystal with a 10 micron beam, you still have 100 times more background coming from the drop than you do from the crystal. This is why in-situ diffraction is so difficult: it is hard to come by a crystal tray that is the same thickness as the crystals.
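The same bookkeeping, applied to the drop (numbers from the paragraph above):

```python
# Support material obeys the same rule of thumb: background scales with the
# path length of each material the beam traverses.
crystal_um = 10    # 10 um crystal, hit with a matched 10 um beam
drop_um = 1000     # 1 mm thick drop in the beam path
print(f"{drop_um / crystal_um:.0f}x more background from the drop")  # 100x
```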

Absorption differences between home and beamline are generally because beamlines operate at around 1 Å, where a 200 µm thick crystal or a 200 mm air path absorbs only about 4% of the x-rays, and home sources generally operate at Cu Kα, where the same amount of crystal or air absorbs ~20%. The "absorption correction" due to different paths taken through the sample must always be less than the total absorption, so you can imagine the relative difficulty of trying to measure a ~3% anomalous difference.
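A minimal Beer-Lambert sketch of those numbers; the attenuation coefficients are back-calculated from the 4% and 20% figures above rather than taken from tables, and the 0.1 mm path difference is a hypothetical example:

```python
import math

def absorbed_fraction(mu_per_mm: float, t_mm: float) -> float:
    """Beer-Lambert: fraction absorbed over path t is 1 - exp(-mu * t)."""
    return 1 - math.exp(-mu_per_mm * t_mm)

# Back-calculate mu from "200 um absorbs 4% at ~1 A, ~20% at Cu K-alpha":
mu_1A   = -math.log(1 - 0.04) / 0.2   # ~0.20 /mm at ~1 A (12.4 keV)
mu_CuKa = -math.log(1 - 0.20) / 0.2   # ~1.12 /mm at Cu K-alpha (1.54 A)

# A 0.1 mm difference in path through the sample changes the measured intensity:
for name, mu in [("1 A", mu_1A), ("Cu Ka", mu_CuKa)]:
    print(f"{name}: {absorbed_fraction(mu, 0.1) * 100:.1f}% intensity change over 0.1 mm")
# ~2% at 1 A vs ~11% at Cu Ka -- the latter swamps a ~3% anomalous difference
```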

Lower absorption also accentuates the benefits of putting the detector further away. By the way, there IS a good reason why we spend so much money on large-area detectors. Background falls off with the square of distance, but the spots don't (assuming good collimation!).
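An idealized sketch of the distance argument (parallel beam, no air absorption, spot size constant with distance; all assumptions, not measurements):

```python
# Per-pixel background is diffuse, so it dilutes with the square of the
# detector distance; a well-collimated spot keeps its counts in the same
# few pixels regardless of distance.
def signal_to_background(distance_mm: float, ref_mm: float = 100.0) -> float:
    background = (ref_mm / distance_mm) ** 2  # falls as 1/d^2
    signal = 1.0                              # spot counts unchanged
    return signal / background

for d in (100, 200, 400):
    print(f"{d} mm: {signal_to_background(d):.0f}x the signal-to-background at 100 mm")
```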

However, the most common cause of drastically different results at the synchrotron vs. at home is that people make the mistake of thinking that all their crystals are the same, and that they prepared them in the "same" way. This is seldom the case! Probably the largest source of variability is the cooling rate, which depends on the "head space" of cold N2 above the liquid nitrogen you are plunge-cooling in (Warkentin et al. 2006).

-James Holton MAD Scientist