Wednesday, December 30, 2009

Rethinking those first results...

Well, it always pays to check and recheck.  It looks like my original 'weights' on the exposure map file were done incorrectly; the correct method is shown in this thread. Redoing the calculation reduces the total effective area on axis from ~62 cm^2 to more like 46 cm^2, increasing the required flux to get the observed number of counts. 
As a side note, this is a major benefit of keeping a log file (mine are all called 'LOG') that records all the steps necessary to recreate each major stage of an analysis.  This way, when you find a mistake, it's reasonably easy to go back and redo everything without having to re-derive each step.
So, now I'm re-running mkinstmap (fast) and mkexpmap (very, very slow...)
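For the record, here is a minimal sketch (in Python, with placeholder numbers) of the corrected weighting as I understand the thread: each energy bin is weighted by its fraction of the total model photon flux, so the weights sum to 1, and the result goes into a two-column (energy, weight) file for mkinstmap's spectrumfile parameter.  The spectrum below is a stand-in, not my actual HETG model.

import numpy as np

# Stand-in model spectrum: photon flux [ph/cm^2/s/keV] at each energy [keV].
# In practice these values come from the best-fit HETG model.
energies = np.arange(0.5, 9.8, 0.1)
photon_flux = np.exp(-energies / 3.0)        # placeholder spectral shape only

# Weight = fraction of the total photon flux in each bin; weights sum to 1.
weights = photon_flux / photon_flux.sum()

with open("weights.dat", "w") as fp:
    for e, w in zip(energies, weights):
        fp.write("%.3f %.6e\n" % (e, w))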

Tuesday, December 29, 2009

First results...

OK, the first surface brightness fit is ready to go.  There will probably be issues with it down the road, but this was my first shot.  Note that the fit uses the BARE-GR-B model from Zubko, Dwek & Arendt (2004) as the dust model.  There is clearly some problem at small angles - perhaps due to a problem with the PSF, perhaps it's saying something about where the dust is.  This is the point where things get interesting.


Technical: Putting it all together: More on the flux issue

As mentioned previously, my best fit to the HETG data gives Fx(1.5-9.5 keV) = 3.7 ph/cm^2/s.

The HRC response was calculated using weights spanning 0.5-9.8 keV, although the weights drop below 5% for E > 5.5 keV.  The flux below 1.5 keV is negligible.

Of course, this value includes the scattered flux, since the HETG data are in CC mode; in reality, only 80-85% of it comes directly from the source, while the other 15-20% is scattered halo emission.  So a correction is needed to get just the direct source rate.  The 15-20% figure comes from a halo calculation that gives 5% scattering for 1e22 cm^-2 of MRN77-type dust, so for 3-4 x10^22 (exact value unknown) it's 15-20%.  I get slightly lower values for some of the reasonable ZDA04 models.  Taking 15% as 'typical', though, the 3.7 ph/cm^2/s becomes 3.1 ph/cm^2/s of direct flux.

Now, within 2" of GX5-1, there are 500795 cts.  The 'ONTIME' is 4785 s, for a total count rate of 104.6 cts/s, and this should correspond to 90% of the total direct counts, or 116.3 cts/s total.  Note that I have not taken deadtime into account yet.  The effective area here is 62.2 cm^2 (taken from the exposure map), so the flux required to produce the observed count rate is 116.3/62.2 = 1.87 ph/cm^2/s.  This is an OK match with the HETG results.  If the deadtime is 30.5%, it can be treated either as reducing the observation time (which increases the measured count rate) or, if we wish to compare to the 1.87 ph/cm^2/s, as simply decreasing the flux estimated from other instruments.  The latter converts 3.1 ph/cm^2/s to about 2.1 ph/cm^2/s, which is 12% higher than the 1.87 value.  That's within the range I'm expecting between the HETG flux and the HRC flux based on the RXTE ASM data.
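For completeness, the arithmetic above in one place (the numbers are just the ones quoted in the paragraph):

counts        = 500795      # counts within 2" of GX5-1
ontime        = 4785.0      # s
aperture_frac = 0.90        # fraction of the direct counts falling within 2"
eff_area      = 62.2        # cm^2, on-axis, from the exposure map
deadtime      = 0.305       # from the dtf file

rate_2arcsec = counts / ontime                 # ~104.6 cts/s within 2"
rate_total   = rate_2arcsec / aperture_frac    # ~116.3 cts/s of direct counts
flux_hrc     = rate_total / eff_area           # ~1.87 ph/cm^2/s implied by the HRC counts

flux_hetg_direct = 3.7 * 0.85                  # ~3.1 ph/cm^2/s after removing the scattered 15%
flux_hetg_dtc    = flux_hetg_direct * (1.0 - deadtime)   # roughly 2.1-2.2 ph/cm^2/s, to compare with flux_hrc
print(flux_hrc, flux_hetg_dtc)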


So, I will use 1.87 ph/cm^2/s as the 'flux' of my HRC-I source, since what matters here is the value as seen after deadtime losses, not the true incident flux.

Pulling it all together - the flux

Next up is to normalize the surface brightness by the source flux. This has already been done for the 3C273 observation I'm using for the PSF, but remains to be done for the  GX5-1 observation.  The tricky bit is that the 'source flux' depends both on the time the observation was done (as discussed earlier) and on how it's measured.  Chandra can measure the flux from a point source quite easily and accurately, but RXTE is a non-imaging detector and so a flux measured with RXTE will include both the point source and a significant amount of the 'scattered' flux.  Bringing these two measurements into alignment requires a determination of how much flux is scattered - which is, in fact, what our ultimate goal is.

Comparing previous results, Smith et al. (2006) found Fx(1-10 keV) = (5.2±0.1)x10^-8 erg/cm^2/s for their Chandra/ACIS observation of GX5-1 (ObsID 109), while Ueda et al. (2005) found Fx(1-10 keV) = 4.3x10^-8 erg/cm^2/s from their HETG data (ObsID 716).  From the fit I did of the new HETG spectrum I got Fx(1-10 keV) = 3.7x10^-8 erg/cm^2/s; the ACIS value is about 40% higher than this, and the earlier HETG value about 16% higher.  This variation can be compared to the RXTE All-Sky Monitor count rates:

ObsID   Description   1.3-3 keV (cts/s)   5-12.1 keV (cts/s)
109     ACIS-S        13.4 ± 1.0          35.1 ± 1.3
716     1st HETG      14.2 ± 1.1          32.3 ± 1.3
5888    2nd HETG      12.3 ± 1.3          40.8 ± 2.2
7029    HRC-I         11.2 ± 3.4          31.5 ± 6.2

Of course, this emission includes both the direct and scattered flux, so it's not possible to convert these into fluxes directly (using, say, WebPIMMS) and get a sensible comparison.  But the range of count rates here is 25% or less, which for an X-ray binary is practically constant.  It also suggests that while the flux during the 2nd HETG observation might be slightly larger than during the HRC-I observation, the difference is less than 25%.

In fact, plugging the total ASM count rate for the HRC-I observation (63.38 cts/s) into WebPIMMS, along with kT = 10.5 keV and NH = 4e22 cm^-2 (from Smith et al. 2006), gives an HRC-I count rate (including all scattered photons) of 131 cts/s; adding 55 cts/s for the background gives 186 cts/s, essentially right at the telemetry limit.  This is reasonably close to the estimate I got from the HETG fit, although keep in mind that was the source alone while this is the source plus scattered flux, so I would have expected it to be larger.  Plus, the ~33% deadtime suggests this is an underestimate, since a true count rate of 186 cts/s should produce a deadtime well below 33%.  Hmm.  Have to think more about this.

Pulling it all together - the Point Spread Function

At this point I have a radial profile for the source, in physical units - ie, units that can be compared to theoretical predictions.  However, there are three components to the surface brightness at any point: (1) the point spread function (PSF) of the mirror, (2) the diffuse background, and (3) the actual scattered X-rays that I care about.  I need to come up with some way to estimate the value of (1) and (2).  As it turns out, the diffuse background (term #2) is basically flat, so I can fit that pretty easily by just adding in a constant term.  However, the PSF (term #1) is energy-sensitive and position-dependent.


There are a number of ways to get Chandra's PSF.  The calibration database contains images of point sources that show the PSF, but these only have information near the source, out to ~10'' or so.  There is also the Chandra Ray Trace code (ChaRT) that can model the Chandra PSF for any set of energies.  The problem is that ChaRT (also called SAOsac) is tuned up using those same calibration database images, and so it tends to give the wrong answers when looking more than 10'' away from the source.  And I need the PSF from about 2'' out to 100'' or even 1000'' away from the source, which the models just won't do.  I've included Figure 4 from Smith, Edgar & Shafer (2002) to demonstrate the problem.  This figure compares observations of Her X-1, a nearly perfect point source, with the best-possible ChaRT/SAOsac model.  Although near the source all is well, even by 30'' - 40'' away there are problems.

The solution is to use long observations of bright point sources, which inherently measure the PSF.  In this case, the quasar 3C273 observed with the HRC-I (ObsID 461) is about as good as it gets.  I used this source in Smith (2008) when measuring the halo of GX13+1, and it worked reasonably well.  The biggest problem is that the PSF is energy dependent, and the spectrum of 3C273 is not the same as that of GX13+1 (or GX5-1, for that matter).



However, the energy-dependence isn't that great, and there isn't a better choice, so one works with what one can.  Of course, trusting statements like that is the road to disaster, so I've attached two figures to back it up.  The first shows the PSF measured from Her X-1 (and modelled with SAOsac) fit to a power-law over a range of energies.  The units on the y-axis are 'arcmin^-2', which are really the surface brightness in units of photons/cm^2/s/arcmin^2 divided by the source flux in units of photons/cm^2/s; this gives the rather strange, but handy, units of arcmin^-2.  This figure shows that the PSF really is pretty well fit by a power law - and that the ChaRT/SAOsac models are awful far from the source.

The second figure simply plots the actual fit parameters of the power-law fits.  It shows that the power-law slope is roughly constant around alpha=1.9, while the amplitude of the PSF increases almost linearly with energy.  The 'bobble' around 2 keV is almost certainly due to the silicon K edge at that energy, since both the mirrors and the detectors include lots of silicon.  The upshot of all this is that I feel confident that the biggest problem with using 3C273 as a surrogate for the PSF of GX5-1 (and GX9+1, for that matter) will be a slight offset up or down in the total PSF power.  For example, if 3C273 has more emission at high energies than GX5-1, then the predicted PSF from 3C273 will be slightly larger than the 'real' GX5-1 PSF, since the power-law fits show that the PSF gets more intense at higher energies.
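To make 'fit to a power-law' concrete, here is the sort of fit I mean, as a small Python sketch; the radii and surface-brightness values below are placeholders, not the actual Her X-1 profile:

import numpy as np

# Placeholder normalized PSF profile: (surface brightness / source flux), in arcmin^-2,
# tabulated at a few off-axis radii in arcmin.  Real values come from the Her X-1 data.
r  = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])    # arcmin
sb = 3.0e-3 * r**(-1.9)                           # fake profile with alpha = 1.9

# Fit SB(r) = A * r^(-alpha) as a straight line in log-log space.
slope, intercept = np.polyfit(np.log10(r), np.log10(sb), 1)
alpha, A = -slope, 10.0**intercept
print("alpha = %.2f, A = %.2e arcmin^-2 at 1 arcmin" % (alpha, A))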


This opens up the question: if the power-law fits work so well, why not just use them?  Well, I do, for ACIS halo measurements.  But these energy-dependent fits can only be made with an energy-sensitive detector like ACIS, and ACIS suffers from pileup when observing bright sources, making measurements of the near-source PSF impossible.  Measurements in the 10''-30'' range can only be done well with the HRC, which has no energy resolution.  Fits to the PSF of 3C273 show that a simple power-law doesn't work so well there, as this figure shows.  This fit uses a Gaussian model for the core of the PSF, then two power-laws, one with a slope of 4 (primarily for the 1-10'' region) and a second with a slope of 2.4 for 10-100''.  There is also a constant term to handle the background.  And, even with all of these terms, the fit isn't all that good at 10''.  So, I think it's best to use real observations rather than fits at this point.
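For reference, the shape of that composite model, written out as a Python sketch; the slopes (4 and 2.4) are the ones quoted above, while the amplitudes and core width are placeholders:

import numpy as np

def hrc_psf_model(r, core_amp=1.0, core_sigma=0.5, a1=5.0e-2, a2=5.0e-4, bkg=1.0e-6):
    """Composite radial PSF model: Gaussian core, a steep (slope 4) power law
    that dominates around 1-10'', a shallower (slope 2.4) power law for 10-100'',
    and a constant background.  r is in arcsec; the amplitudes are placeholders."""
    gauss = core_amp * np.exp(-0.5 * (r / core_sigma)**2)
    return gauss + a1 * r**(-4.0) + a2 * r**(-2.4) + bkg

r = np.logspace(0.0, 2.0, 50)     # 1'' to 100''
profile = hrc_psf_model(r)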

Monday, December 28, 2009

Getting the radial profile

This step is simple in concept, and a pain in execution unless you've got a script that does it all.  Having worked on these halos for a while (check out my CV), I now have a number of these scripts.  Effectively, what one does is determine how many annuli one wants, then determine the number of counts in each annulus (from the event file) and the effective area as well (from the exposure map).  Dividing the two gives the surface brightness in units of photons/cm^2/s/pixel, which can be converted into any other convenient units later.  Here are the commands I use to make the annulus file:
sherpa
rad = 10^[-0.3:3.0:0.03]
fp = fopen("annuli.dat","w");
() = fprintf(fp,"%f %f\n",0.0,rad[0]);
for (i=0;i<length(rad)-1;i++) () = fprintf(fp,"%f %f\n",rad[i],rad[i+1]);  % write the remaining (inner,outer) annulus pairs
()=fclose(fp);

And then I just use a script I wrote, make_radprof_hrc.sl, to calculate the actual surface brightness:
make_radprof_hrc.sl hrc_evt2.fits srcfree.reg ../expmap 270.28383 -25.07775 annuli.dat
This script assumes that the exposure map is named hrc_expmap.fits.  The central Right Ascension and Declination are given in the command line as 270.28383 and -25.07775; these values were obtained from the event file itself, using ds9.  Note that this position is close to, but not exactly at, the simbad position for GX5-1 (270.284167, -25.079167).  The separation is 5.2'' - which is rather large for Chandra.
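In spirit, the script does nothing more than the following Python sketch (make_radprof_hrc.sl itself handles the FITS I/O, the region filtering, and the sky-to-pixel coordinate conversion; all names here are illustrative):

import numpy as np

def radial_profile(x, y, expmap, annuli, x0, y0, exptime):
    """x, y: event positions [pixels]; expmap: 2-D effective-area map [cm^2];
    annuli: list of (r_in, r_out) radii [pixels]; (x0, y0): source position;
    exptime: exposure time [s].  Returns surface brightness [ph/cm^2/s/pixel]."""
    r_evt = np.hypot(x - x0, y - y0)
    yy, xx = np.indices(expmap.shape)
    r_map = np.hypot(xx - x0, yy - y0)
    sb = []
    for r_in, r_out in annuli:
        counts = np.sum((r_evt >= r_in) & (r_evt < r_out))
        in_ann = (r_map >= r_in) & (r_map < r_out)
        mean_area = expmap[in_ann].mean()      # cm^2, averaged over the annulus
        npix = in_ann.sum()
        sb.append(counts / (mean_area * exptime * npix))
    return np.array(sb)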

This positional offset deserves a closer look, to see if there is some reason the source position might be incorrect.  Normally, Chandra positions are good to better than 1'', so a 5+'' offset is odd.  Now, it's possible simbad is wrong - remember that Chandra has the highest angular resolution of any X-ray satellite, and that GX5-1 is first and foremost an X-ray source.  Fortunately, simbad lists its references, and in particular cites this Ebisawa et al. (2003) paper as the source of the position.  Checking that paper, we find that they reference Liu et al. (2001) for the position, with the positional accuracy given as 3'' and a caveat that this is only an estimate.  Liu et al., in turn, reference the paper "Infrared observations of Galactic bulge X-ray sources" (Hertz & Grindlay 1984), which sounds as though it should have quite accurate positions.  However, a quick scan of Hertz & Grindlay shows that they got their positions from Einstein observations, and that the infrared observations in fact did not detect GX5-1 at all.

Upshot: The Chandra position is almost certainly superior to the position listed in Simbad.  You can't trust everything on the web, after all...

Annoying little problems...


At this point, it should be possible to simply add up the number of counts in a range of annuli around the source and divide by the total effective area in the same annulus to get the surface brightness as a function of radius R from the source.  However, examining the image shows that there's a problem - there is a 'jet' of emission coming out at around 10 o'clock.  This is a bit easier to see in the 'zoomed' image, but it's definitely there.  The first time I saw this, I thought I'd actually found a jet of X-ray emission coming from the neutron star, something that could happen.  In this case, though, it turns out to be a well-known problem with the HRC that 'misaligns' a few counts (less than a percent or so) in a particular direction.  This is normally filtered away, but in high count rate situations like this the filter doesn't work perfectly.  The only way to realistically deal with it is to simply exclude that side of the source from the final result.  This doesn't guarantee it won't affect the final result slightly, but it's the best I can do.  Fortunately, the side I'm excluding faces the edge of the detector, so it's a relatively small area; and besides, there are beaucoup counts in this dataset in the first place, so the impact is limited.

Making an exposure map

The Chandra High Resolution Camera is not equally sensitive at all positions.  At off-axis positions, the sensitivity drops quite a bit, due to inherent difficulties in building X-ray mirrors.  Making this more challenging, the sensitivity changes are also energy-dependent - at high energies, the loss is larger than at low (soft) ones.  Therefore, to get a true measure of the brightness of the halo independent of these changes, I need to calculate the actual response of the camera to this specific source, in this case GX5-1, at all positions of interest.  Fortunately, this is a common problem, and tools exist to determine the 'exposure map' semi-automatically using the CIAO software package.  The tricky part is determining the spectrum, but since I've already done that, I can reuse my previous result.  The calculation is done in two parts - first the 'instrument map', and then the full 'exposure map'.  The instrument map captures the effects of the detector and the X-ray mirror on the final sensitivity; the exposure map is the overall impact on the observed sensitivity across the sky.  The difference between the two arises because Chandra doesn't sit and stare at one point, but rather 'dithers' around the focus point.  This is done for a number of reasons, but here it simply means we have to move the instrument map around and co-add it to get the final result.  These steps are easy, but take a significant amount of time and memory (hours on a 2 GHz machine and hundreds of MB, respectively).
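Conceptually (and only conceptually; mkexpmap does the real work, including the proper coordinate transformations), the exposure map is just the instrument map averaged over the dither pattern.  A rough Python sketch, with made-up inputs:

import numpy as np

def naive_expmap(instmap, aspect_offsets):
    """instmap: 2-D effective-area map in detector coordinates [cm^2].
    aspect_offsets: list of (dy, dx) integer pixel shifts sampled from the aspect solution.
    Returns a crude exposure map: the time-averaged effective area at each position."""
    expmap = np.zeros_like(instmap, dtype=float)
    for dy, dx in aspect_offsets:
        expmap += np.roll(np.roll(instmap, dy, axis=0), dx, axis=1)
    return expmap / len(aspect_offsets)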

The image in blue shows the instrument map, with a color scale at bottom.  This is in units of square centimeters, and measures the 'effective area' of the detector at each point for this source.  If the source is emitting 1 photon/sq. cm/s and we have 60 sq. cm of effective area, we'll get 60 counts/s from the source.  Note the clever conversion from 'photons' to 'counts' - 'photons' are independent of any real-life detector, while 'counts' are real measured blips with all of the calibration and other issues that are associated with reality.  Also, the maximum sensitivity is in the corner of the detector because this particular observation was done with the detector offset to put the source in the corner; normally, the maximum sensitivity (ie, the white spot) would be in the center.  As a side note, this figure can be immediately identified as an instrument map rather than an exposure map because it's square and aligned with the figure axes.  By custom, images in sky coordinates have north 'up' and east pointing 'left'.  The exposure map is in sky coordinates, and thus shows a rotated detector - as in the second image.  I've done these in blue and orange, using the SAO ds9 program, but could have used any colors.  Now that I have these, I'm ready to calculate the radial profile of the source - or, well, almost.

Thursday, December 24, 2009

Making a lightcurve


I don't actually care that much about the lightcurve of GX5-1 -- it has no real bearing on this particular data analysis.  However, a lightcurve that shows the number of counts per second coming from the whole field and from my source is almost always the first data product I extract.  The reason is that the lightcurve can show all sorts of bad things that you'd hate to discover late in the game.  There are perfectly good instructions in the Chandra thread, so I shan't repeat them here. 
The lightcurve shows one major, and entirely expected, problem.  The count rate in the full field is practically at the telemetry limit - 184 cts/s for the HRC-I.  This is the fastest rate at which Chandra can accumulate data with the HRC-I; data that come in at higher rates are simply dropped on the floor and lost.  An observed rate as high as 170 cts/s doesn't mean we've just skated under the limit -- for a number of reasons, hitting the exact limiting rate is unlikely.  Much more likely is that we've lost a bunch of data and there is no way to get it back.  This isn't a disaster -- at 130 cts/s from the source, and almost 5000 seconds of data, I'm swimming in photons (1 count = 1 X-ray photon) -- but it does mean I need to keep careful track of just how many fewer photons I've got than I would otherwise expect.  This is called the detector 'deadtime', and in the case of the HRC-I it is stored in a file called hrc****_dtf1.fits.  Right now it looks like the average deadtime is about 30.5%, based on the data in this file.

But we can check that using WebPIMMS, a handy tool that calculates the expected count rate in almost any X-ray satellite & detector, given an input source model.  I can use the values I got in a previous post, or values from my earlier paper, and get a predicted source count rate.  In this case, I used NH = 2.8e22 cm^-2, a bremsstrahlung model with kT = 10.5 keV and FX(0.3-10) = 2.8e-8 ergs/cm^2/s.  The WebPIMMS result is a prediction of 220 cts/s from GX5-1 alone, not including the rest of the HRC-I.  The lightcurve above, however, shows about 130 cts/s from a 2' radius around GX5-1, and about 40 cts/s from the rest of the HRC-I.  The Chandra Proposer's Guide Document for the HRC lists the HRC-I quiescent background as 1.7e-5 cts/s/arcsec^2; for a 30x30 arcmin detector, this amounts to 55 cts/s when everything is just 'ticking over'.  So pretty clearly there are some 'issues' here.

On the plus side, adding 220 + 55 = 275 cts/s and comparing to the telemetry limit of 184 cts/s suggests that the deadtime should be (approximately) 1 - 184/275 = 33%, which compares quite well with the 30.5% number in the dtf file.  Now I just need to keep this in mind as I progress - basically, the source flux and the background have been suppressed by 30-33% due to photons that were 'lost in space'.
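The background and deadtime bookkeeping from this paragraph, in one place (all numbers are the ones quoted above):

predicted_src = 220.0                         # cts/s, WebPIMMS prediction for GX5-1 alone
quiescent_bkg = 1.7e-5 * (30.0 * 60.0)**2     # cts/s/arcsec^2 over a 30x30 arcmin field, ~55 cts/s
telemetry_lim = 184.0                         # cts/s, HRC-I telemetry limit

predicted_total = predicted_src + quiescent_bkg          # ~275 cts/s
deadtime_est    = 1.0 - telemetry_lim / predicted_total  # ~0.33, vs. the 30.5% in the dtf file
print("background ~ %.0f cts/s, estimated deadtime ~ %.0f%%" % (quiescent_bkg, 100 * deadtime_est))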

Preparing the data

Since the spectrum information seems to be reasonably under control, it's time to move on to processing the actual data.  The exact steps involved change reasonably frequently, and despite the fact that we're on the 4th processing of the data already, it may still not be up-to-date.  The place to check is the Chandra data analysis page, which notes that CIAO 4.2 and CALDB 4.2.0 (the Chandra data analysis package and calibration database, respectively) are now out, and there is a helpful link that describes what has changed.  At the bottom of that page, it notes that there has been a change for HRC-I (imaging) data, and that's what this is all about.  So, time to reprocess.

Fortunately, there are a number of 'threads' that explain how to do standard analysis tasks, and there's nothing more 'standard' than reprocessing to pick up the latest calibration.  In Chandra-speak, this is called creating a level 2 event file.  The steps involved are pretty straightforward if you're moderately familiar with Unix.  If not, it's best to become moderately familiar first, as it's going to take a while otherwise.   I tend to create a file called LOG that contains all the steps I did to reprocess the data, starting from a directory I normally call 'repro'.  Given the number of distractions life has, maintaining some personal standards is vital as it may be 3 months or longer between periods of time when I can work on a project.  By always using the same names and formats, I can more easily remember where I've gotten to.  In this case, my LOG file contains the following:

ln -s ../../secondary/hrcf07029_000N004_evt1.fits hrc_evt1.fits

dmkeypar hrc_evt1.fits RANGELEV echo+
#116
#
#  Curious -- not the 115 I would expect from
#
#  http://cxc.harvard.edu/ciao/threads/createL2/index.html#hrc

#  I suspect it's out of date.
#

ln -s ../../primary/pcadf256045785N003_asol1.fits pcad_asol1.fits
ln -s ../../secondary/hrcf07029_000N004_bpix1.fits hrc_bpix1.fits
ln -s ../../secondary/hrcf07029_000N004_std_flt1.fits hrc_std_flt1.fits

punlearn hrc_process_events
hrc_process_events infile=hrc_evt1.fits outfile=hrc_new_evt1.fits \
   badpixfile=hrc_bpix1.fits  acaofffile=pcad_asol1.fits \
   badfile=NONE instrume=hrc-i do_amp_sf_cor=yes


punlearn dmcopy
dmcopy "hrc_new_evt1.fits[status=xxxxxx00xxxx0xxx00000000x0000000]"  \
      hrc_flt_evt1.fits

punlearn dmcopy
dmcopy "hrc_flt_evt1.fits[EVENTS][@hrc_std_flt1.fits][cols -crsu,-crsv,-amp_sf,-av1,-av2,-av3,-au1,-au2,-au3,-raw,-sumamps]" \
      hrc_evt2.fits

You can compare these to the thread, and see how I've 'shortened' the instructions down to the bare minimum. 

Fitting with previous model


I'm starting with GX5-1 as I have some prior experience with the source.  In fact, I published a paper on the dust-scattered halo around GX5-1 in 2006, with the unassuming title "THE X-RAY HALO OF GX 5-1."  That paper used RXTE, but in a pointed mode, allowing a good measurement of the X-ray spectrum.  I decided to compare that fit to the current HETG data; see the image for the result.  Pretty clearly, the fit is decent -- especially since it was derived from an observation taken 5+ years before this one.  This suggests that the poor fit at high and low energies might well be due to problems with the HETG calibration (not surprising given how bright GX5-1 is), rather than a failure of the model.  We do need to remember that RXTE has little or no response below 2 keV, so it's possible that RXTE simply didn't see the low-energy emission.  RXTE is quite good at high energies, though, so the poor fit there is likely due to HETG issues.

Technical: HETG CC mode offset spectrum

[This is a post with a number of technical details.  Skippable.]

The HETG observation was done in CC mode, with the zero order offset so as to be barely on the CCD.  As a result, the MEG -1 and HEG +1 orders are largely off the ACIS-S array, and there is little data there.  The HEG -1 and MEG +1 orders seem to be OK, though -- note that these are rows 3 and 10 in the standard pha2 file.  They also agree with each other, which is not the case when the partial data from the other orders are added in.  So I'm going to focus on these two.

Fitting them with a tbabs*diskbb model gives a decent, if not great, fit (chi^2 ~ 1.8) between 1.5-6 keV:
Parameter   Units          Value      Error
nH          10^22 cm^-2    3.17742    +/- 3.18579E-03
Tin         keV            2.12996    +/- 3.11115E-03
norm        --             114.390    +/- 0.645348

flux(0.3-10 keV) = 2.705e-08 ergs/cm^2/s



However, as the spectrum shows, the fit is pretty bad below 1.5 keV and above 6 keV.  There isn't that much flux above 6 keV, and it won't make for much of a halo, so that can be ignored.  However, the soft flux below 1.5 keV should be better fit.  I don't know if it's even real, though - definitely a question for Norbert.

Understanding the spectrum

GX5-1 is bright.  Very, very bright.  So bright it saturates most of Chandra's detectors.  In fact, I had to get special permission to even observe it on the HRC, and had to use a far corner of the detector even then to avoid possibly causing problems on-axis.  So, understanding the HETG spectrum is going to be tricky.  I've started by downloading the observation from the archive (ObsID 5888), and then realized it was done in a special mode.  This isn't surprising, since the observation was done by the team that built the HETG, and they like to do hard problems that require tricky solutions.  So I downloaded a pre-processed version as well from the TGcat archive, which was put up by the same team to make analyzing HETG data easier.

[As an aside, the archive also lists scientific papers that have been written using a given dataset, and it shows that the HETG team has not yet written a paper on this data.  Although the data are now public, and thus available to all, it would be rather rude of me to use their data to do their science without even mentioning it to them.  The archive does list the abstract of the original proposal they wrote, which indicates their goal was to study absorption edges of Si, S, and Mg from the interstellar medium.  Since I'm not going to do any of that science, and in fact am only using their data in passing to do other work, this shouldn't be a problem.  I'll still let the team know what I'm up to, just to be polite.  For an idea of what can happen when scientists aren't polite, check out the Newton vs. Leibniz debate.]

Next up is to check up on the issues that might affect the measured spectrum.  Chandra has a nice page of calibration information, which also includes links to the yearly calibration review workshop.  At coffee today, a colleague mentioned that Norbert Schulz of the MIT HETG team had given a presentation at the last workshop on the topic of ultra-bright sources observed with the HETG.  Checking the website, it's easy to find the talk notes.  Looking through these, it seems I'm going to need to contact Norbert directly to understand them in detail.  [Although I happen to know Norbert, I'd have no hesitation about contacting another scientist for assistance if needed.  The internet has been a great friend to scientists worldwide, making it much easier to contact each other with detailed questions.] 

While waiting to hear back from Norbert, I'll just try to see if I can fit the spectra I got from the TGcat.  Having been created by the MIT team themselves, they're likely going to be the best that can be done without extreme effort.  So, time to break out XSPEC.

Getting the spectrum


We'll start with GX5-1.  There are two ways to get this spectrum.  Method 1 is to look for observations done close in time to mine (which was done Feb 10, 2007).  And it turns out there is one, a long observation done with the Chandra High Energy Transmission Grating (HETG) on Oct 30, 2006.  The beauty of this observation is that it measures an extremely accurate spectrum for the source.  However, X-ray binaries like GX5-1 tend to jump up and down in both brightness and spectrum on timescales of less than a day.  So, while this observation isn't useless to me, I could really use something from the same day.

For this, I'll use observations from the All-sky Monitor (ASM) onboard the Rossi X-ray Timing Explorer (RXTE).  Although this detector only measures in 3 energy bands, it does so all the time for a large number of sources, including GX5-1 and GX9+1.  So I can look each one up and see what each source was doing.  For GX5-1, the light curve of all 3 bands added together shows that there's a certain amount of variation, but it doesn't go nuts.  In fact, what's really useful is to look at a histogram of the count rates, to see just how much variation there is. 

What I get from this is that the average flux is ~65 RXTE ASM counts per second, and that on the day of my observation the flux was pretty much average.  Helpfully, on the day of the HETG observation it was also pretty average.  I also looked at the ratios of the individual bands, and confirmed that they didn't show much variation on this timescale either.  So, although GX5-1 can vary a lot, it didn't choose to during this time period.  Quite helpful of it.

This means I can move on to analyzing the HETG spectrum and use that as a proxy for the spectrum during my observation.

Background work

In order to extract the scattered light from around these sources, I need to know how much of the light is from the telescope mirror itself, so I can subtract it.  Imagine you've just gone outside on a foggy night and are looking at a streetlight.  Assume as well that you're wearing glasses that are a bit smudged.  You'll see the light with a blur around it: some of that blur is due to the fog, and some is due to the smudges.  The technical name for this second bit, the smudging, is the 'point spread function' (PSF) of the telescope, which simply describes how much a point source (the streetlight) is spread out by the telescope.  In my case, I'm interested in finding out about the fog (interstellar dust), and don't care so much about the smudging (PSF).  So I need to subtract the PSF, which scales with the brightness of the source and its spectrum.  X-rays come in a range of energies -- high energies are called 'hard' and low energies 'soft'.  As it turns out, the PSF is larger for hard X-rays than soft ones (they're 'harder' to focus properly).  So before I can subtract the PSF, I need to know how much to subtract at each energy.

This is where the operation becomes tricky.  The problem is that the specific detector I used for these observations (Chandra's High Resolution Camera, or HRC) is excellent at a few things, like dealing with ultra-bright sources without conking out, and delivering the highest possible spatial resolution.  Unfortunately, it isn't so good at measuring X-ray energies.  Basically, it treats every photon the same - in fact, the HRC uses the same technology as night vision cameras, which let you see at night but render everything in green.  However, just because the detector can't tell me what energies are in the source doesn't mean the telescope doesn't focus them differently.  So I need some other way to measure the spectrum.

What I'm about...

Today is officially 'Day 2' of this particular Chandra X-ray Observatory data analysis.

Later on, I'll get around to explaining why what I'm doing is worthwhile -- or, at least, why I think it's worthwhile and how I was able to convince a panel of 5-8 people it was worth spending about 6 hours of time on a multibillion-dollar satellite.  For the moment, if you're interested in the science you can check out a previous paper of mine on a similar topic (X-RAY DUST SCATTERING AT SMALL ANGLES: THE COMPLETE HALO AROUND GX13+1).

So, the immediate goal is to extract the scattered light around two X-ray binaries, specifically GX5-1 and GX9+1.  Like most astronomical names, these have an obscure origin.  All of the 'GX' names come from observations done with an early MIT X-ray sounding rocket - the 'G' stands for 'Galactic' (all these sources are in the Milky Way Galaxy), and the X for 'X-ray'.  The two numbers following are the Galactic longitude and latitude of the source, where 0, 0 points to the center of the Milky Way, in the constellation Sagittarius.  There's a supermassive black hole there, called 'Sgr A*', that has a mass more than a million times that of our Sun.  But, back to the GX sources.  The important bit about them is that they were seen by a sounding rocket - an early one at that - which could only observe for about 5 minutes at a time.  Which means they're bright.  In fact, the GX sources are some of the brightest X-ray sources in the sky.  This is helpful, since it means the scattered light I'm really interested in will also be bright.  Of course, there are ways in which it's not so helpful.  We'll get to that.

So far, I've downloaded the latest version of the data from the Chandra archive -- it lists as N004, so the data has been processed 4 separate times by the Chandra team already.  That's mildly unusual, but not shocking.  All it means is that the calibration done by the team has been significantly revised at least 3 times since the data were first taken.  At this point, all of the data are publicly available at the Chandra archive.  If you'd like to play along, the GX5-1 data have the Observation ID 7029, while the GX9+1 data are ObsID 7030 and 7031.  These were both done in 2006.  Normally the Principal Investigator (PI) has 1 year of private access to the data in exchange for being the one who put the initial effort into writing the proposal and suggesting there was something worth doing with it.  I'm the PI for this data, but I got swamped with other projects and didn't have a chance to get to it before the year was up.  Fortunately for me, the data analysis is rather tricky so I doubt anyone else is going to jump on it.

More as work progresses...