Monday, February 29, 2016

The Case of the Fickle Found Furby


Fast Radio Bursts

Early last week, Dusty discussed Fast Radio Bursts (FRBs, or as I prefer to call them, Furbies). To recap, they are short (~millisecond) bursts at radio (GHz) frequencies that seem to come from cosmological distances, as determined from their dispersion measures, which quantify the delay between the arrival of the pulse at different radio frequencies. The more free electrons between us and a radio burst, the longer this delay, with the time delay scaling as the inverse of the frequency squared. The inferred column density of electrons indicates that these bursts come either from dense plasma environments or from cosmological distances (with the electrons supplied by the intergalactic medium).
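The $\nu^{-2}$ delay is easy to play with numerically. Here's a minimal sketch using the standard pulsar dispersion constant; the DM value (~375 pc cm⁻³, roughly that of the Lorimer burst) and the two frequencies are illustrative choices of mine, not numbers from any particular survey.

```python
# Dispersion delay between two observing frequencies: dt scales as nu^-2.
# K_DM_MS is the standard pulsar dispersion constant (~4.149 ms GHz^2 cm^3/pc);
# the DM of ~375 pc cm^-3 is roughly the Lorimer burst value, used here
# purely for illustration.

K_DM_MS = 4.149  # ms GHz^2 cm^3 / pc

def dispersion_delay_ms(dm, nu_lo_ghz, nu_hi_ghz):
    """Arrival-time delay (ms) of the low frequency relative to the high one."""
    return K_DM_MS * dm * (nu_lo_ghz**-2 - nu_hi_ghz**-2)

delay = dispersion_delay_ms(dm=375.0, nu_lo_ghz=1.2, nu_hi_ghz=1.5)
print(f"{delay:.0f} ms")  # a few hundred ms across a ~GHz observing band
```

Sweeping over a few hundred milliseconds across the band is exactly what makes the bursts stand out in dedispersed searches.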

The first detected FRB, known as the Lorimer burst. The $\nu^{-2}$ dispersion, large dispersion measure, and short duration define the class.
The other kind of Furbies, also radio emitters, but more easily localized, with varying optical colors.  

Since these bursts are so fast (it's in the name!), it is very difficult to figure out precisely where they are coming from, and until last week we didn't have any clear identification of a true distance or redshift. We do know that they seem to be isotropically distributed on the sky (supporting the hypothesis that they are cosmological). Most have been discovered by the Parkes radio observatory in Australia, largely for survey-design reasons, but several have also been seen by the Arecibo and Green Bank radio telescopes. One detected by GBT also showed signs of circular polarization, seeming to indicate that it was associated with a strongly magnetized and/or rotating object. So far about 17 have been detected, and when this number is combined with the beam sizes and observing strategies of the various radio surveys, it implies a rate of roughly thousands per sky per day!

A Furby Located?

Last Thursday, an extremely interesting Nature paper came out discussing the apparent localization of a Fast Radio Burst via an afterglow and the identification of a host galaxy. Keane et al. report that, using their extremely impressive real-time identification of FRB 150418 (most FRBs have been found in archival searches of the data), a large number of follow-up telescopes were triggered, with a radio afterglow (at 5.5 GHz and 7.5 GHz) detected by ATCA that faded over about 6 days.
From Keane et al. 2016, on the purported radio afterglow associated with FRB 150418. The detections are the two purple points and one green point on the left, with low-level emission and upper limits afterward. The FRB was detected at time 0.

If true, this is a HUGE deal. It would be the first identification of an FRB host galaxy, providing a distance and redshift (z = 0.49). The combination of dispersion measure and redshift then yields an estimate of the "missing" intergalactic baryon density, i.e., the mass in the intergalactic medium (IGM), which agrees well with the cosmological results from WMAP.

The host environment, however, is at odds with a magnetar being the source of FRBs. Giant magnetar flares are, I feel, the leading candidate for their astrophysical source. However, magnetars exist in young stellar populations, while the galaxy in question is "red and dead", meaning it has an old stellar population with no star formation. Additionally, the energetics of the afterglow (which contains much more energy than the initial FRB) suggest that it must come from something much more powerful than a magnetar flare. The authors even suggest that these factors point to a neutron star merger as the source of the burst and afterglow. This, however, is problematic, as the rate of neutron star mergers is not nearly high enough to account for the inferred FRB rate, which is close to the supernova rate! If NS mergers accounted for a large fraction of the inferred FRB rate, this would mean that 1) short GRBs are extremely beamed, and 2) initial LIGO should probably have seen a merger if they happen so often. The alternative many suggest is that there may be multiple progenitor classes (though the rates must still be comparable for both to have been picked up in so small a sample).

Keane et al. also examine the possibility of a false positive by looking at the rate of radio transients, and conclude that there is a < 6% chance of a coincident transient detection. However, in the last few days it has emerged that considering only the radio transient rate may have been a subtle but crucial mistake.

Not So Fast, Radio Burst!

Over the weekend, Peter Williams and Edo Berger of Harvard CfA posted a preprint questioning the Keane et al. results. (They actually posted a link to their preprint on Facebook on Friday morning, which is impressive turnaround time indeed!) They argue that it is the rate of variable radio sources that should be considered, and show that of order one such source is expected in the Parkes beam associated with the FRB. The "afterglow", then, is likely AGN variability. Most compellingly (to me), they show that the evolving ratio of the 5.5 GHz to 7.5 GHz flux from the source is inconsistent with the relativistic fireball one expects to power a true afterglow of a burst event. (This also jibes with the fact that powering such an afterglow requires far more energy than most think would be available from a single compact object.)

If the "afterglow" of FRB 150418 is indeed due to AGN variability, this would be easy enough to check with a campaign of follow-up observations, which of course Edo and Peter carried out. The result was this Astronomer's Telegram:
ATel 8752 from Williams and Berger reporting on the VLA follow up of the purported FRB host galaxy.

Using the VLA, they reported a 157 microJansky detection of the radio source, which suggests a re-brightening at the 3-sigma level. That's pretty damning for the afterglow interpretation, though there is no direct detection of variability within their two 1.5-hour observations. 

It is worth noting that the radio transient/variability people I know and respect think this closes the case. However, some of my colleagues who study AGN for a living have told me that such variability from a radio-quiescent AGN isn't actually expected (they would expect it from a blazar, though the optical observations make it clear this source is not a blazar). [EDIT - Edo writes to mention that since the source is only a few degrees from the Galactic plane, the variability seen is consistent with the level of ISM scintillation you'd expect from an AGN with no intrinsic variability. Interesting! Though pulsar people should not have missed this, as they deal with scintillation all the time!] As an outsider to both fields I can't really provide a solid interpretation, but my general impression is that the radio sky (and in particular quiescent radio galaxies) at these frequencies and timescales is not well studied. Regardless, the re-brightening, if it holds up, combined with the strange spectral evolution, makes it hard to believe the afterglow interpretation, and consequently the association with a host galaxy and distance. 

A Furby, disassociated. 

Too bad!

Monday, February 22, 2016

Gearing Up to Find More FRBs

This post is inspired by a recent popular science article about the Molonglo radio telescope which has been upgraded and repurposed over the last few years to do, among other things, surveys of Fast Radio Bursts (FRBs). If you're not already familiar with FRBs, here's a brief primer.

From Lorimer et al. (2007)
The first FRB detection was published in Lorimer et al. (2007). Using data from the Parkes radio telescope, they detected an extremely bright, non-repeating, millisecond duration radio burst. At radio frequencies, pulsed emission is dispersed by intervening diffuse cold plasma, meaning low-frequency emission arrives at Earth after high-frequency emission. Measurements of the amount of dispersion allow for inferences regarding the total electron content between the emission and observation locations. With FRBs, the dispersion is so great that it cannot be explained by the electron content of the Milky Way alone. It is possible that much of the dispersion is due to propagation through vast swaths of the very diffuse intergalactic medium and that the sources of FRBs are at cosmological distances. If that is the case, their brightness hints at truly extraordinary energetics and a very exciting new type of astrophysical transient.

Several more of these FRBs were detected by Thornton et al. (2013), all with Parkes. Even with very small number statistics, it began to appear that FRBs were not tracking the stellar density of the Milky Way and were instead distributed seemingly isotropically, consistent with a cosmological source population. It was worrisome that all of these were being detected with Parkes. Parkes might simply have the serendipitous combination of field of view and sensitivity that makes it great for detecting these things, or maybe there was some devious source of radio frequency interference at the observatory. And indeed, Petroff et al. (2015) realized that a microwave oven at Parkes, when opened prematurely, caused so-called "perytons" which almost perfectly mimicked the dispersive sweep of an astrophysical radio burst. However, not all FRBs were perytons. Many of them appeared to be bona fide astrophysical events. FRB detections at Arecibo and the Green Bank Telescope helped allay microwave fears and build the case for these things being astrophysical in nature.

Now, back to Molonglo. The Molonglo telescope is over 50 years old. It has a giant cylindrical collecting area with hundreds of dipole antennas. Over the last several years, it has been subjected to a massive overhaul of its digital systems and is now capable of processing vast quantities of high time resolution radio data. It can synthesize many beams and survey large areas of the sky at once. It is a fantastic example of a relatively new trend in radio instrumentation: digital processing and software developments are driving much of the new science at low radio frequencies (below 2 GHz). Similar examples are the remarkable European LOFAR telescope and the soon-to-be complete Canadian CHIME telescope. This recent paper describes new and improved digital data processing capabilities at Arecibo for FRB searches. Note that Arecibo's funding situation is currently very precarious even as the case for its science capabilities is growing.

In the Molonglo article, Matthew Bailes from Swinburne University of Technology describes how there are currently more theories for what causes FRBs than there are observed FRBs. This is certainly the case. There are only about 15 published FRBs and theories for what may cause them range from flare stars to supergiant pulses from pulsars to merging neutron stars to core collapse supernovae and more. I'd say you should expect this to change within about 18 months. New and existing but improved instruments are going to start finding these things by the dozen. We're on the verge of learning a lot about FRBs.

  

Sunday, February 21, 2016

Random Topics: GW GRBs, mini-JWST, Exoplanets and lunch.

So after all the excitement last week, the subsequent week has been filled with discussions and catching up on other work. For me, the last few days have been spent on review work that will be due soon, so this will be a fairly short post.

Fermi


On Friday, for our high-energy astrophysics meeting at UMD, we had a nice round-table discussion with a member of the Fermi team regarding the Fermi signal coincident with the LIGO detection GW150914. Their arXiv (not yet peer-reviewed) paper is here, and discusses the detection.

Fermi-GBM detected a signal with a luminosity of roughly $10^{49}$ erg/s between 1 keV and 10 MeV, lasting about a second. The fluence, duration, and hardness were similar to those of a weak short GRB. The reason it was so poorly localized is that the Fermi spacecraft was oriented orthogonally to the optimal direction for localizing the source. They characterize this as a roughly 2-3 sigma detection, with a false alarm probability of 0.0022, though there are some details of this FAP calculation that I don't quite follow (particularly why it scales with time from the GW trigger event).
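As a sanity check on how a false alarm probability maps onto "sigmas", here's a small stdlib-only sketch that inverts the two-sided Gaussian tail probability (whether the quoted 0.0022 is meant one- or two-sided is my assumption here, and both land in the "roughly 3 sigma" ballpark):

```python
import math

def fap_from_sigma(sigma):
    """Two-sided Gaussian tail probability for a given significance."""
    return math.erfc(sigma / math.sqrt(2.0))

def sigma_from_fap(fap, lo=0.0, hi=10.0):
    """Invert fap_from_sigma by simple bisection (fap decreases with sigma)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if fap_from_sigma(mid) > fap:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s = sigma_from_fap(0.0022)
print(f"{s:.2f} sigma")  # ~3 sigma, consistent with the quoted "2-3 sigma"
```

The large trials factor from searching around the GW trigger is what pulls the effective significance down from this naive number.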

It is worth noting that Fermi expects to see events of this significance randomly every few thousand seconds or so, so there is a possibility this is just a coincidence. Swift-BAT did not detect anything, though it was also pointed in the wrong direction and has a narrower field of view, so they had to make do with observations 2 days after the fact, during which there was no detection. More interestingly, INTEGRAL's ACS, which has a much larger collecting area than Fermi-GBM (it is an anticoincidence shield that surrounds the spacecraft) but also a much higher background, did NOT see anything at the time of the Fermi-GBM detection. However, it is sensitive to a different energy band, and if the short-GRB-like spectrum is taken as a given, INTEGRAL would be expected to miss about 50% of the bursts Fermi catches at that fluence.

This of course did not stop many so-called "creative" ideas from flooding the arXiv about how a BH-BH merger could produce an electromagnetic GRB-like counterpart. I won't get into them here, and will leave that discussion for another day.

Mini-JWST


This isn't really all that research related but I just wanted to post a picture of our new JWST model we just got from MESAtech. The James Webb Space Telescope is of course the very expensive IR space telescope that carries the hopes, dreams, and opportunity cost of the entire astronomical community into the future. It will be revolutionary for extragalactic astronomy, mapping of the local group, and exoplanetary atmosphere studies, among other things.


This 1/40th scale model was 3D printed as a reward for kickstarting MESAtech's fully robotic model for high-school outreach. The pieces were all snap-together, and Vicky assembled it in about 45 minutes, with just some wire snips and a few pieces of tape to help tighten up some fits. We got it for $50 during the kickstarter, but they are now available for purchase from the MESAtech site for $75. It folds/unfolds, and at 1/40th scale is just about the right size compared to LEGO minifigs.


Exoplanets and Lunch


Lastly, this week I organized the inaugural exoplanet lunch at UMD, in order to bring together Drake's group with some other exoplanet researchers, like myself. Personally, I'm hoping that closer interaction with the exoplanet observers will lead to some interesting projects that will help me get a better feel for the details of exoplanetary data analysis and observations. Toward that end, I'm considering running a brief tutorial over the course of several lunches exploring the use of Dan Foreman-Mackey's emcee code for parameter estimation with Markov Chain Monte Carlo methods. Jake VanderPlas ran a very nice tutorial at the last AAS, which would be a very good basis to start from. 
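As a warm-up for that kind of tutorial, the core MCMC idea can be shown in a few lines. Note this is a bare random-walk Metropolis sampler on a made-up line-fitting problem of my own, not emcee's API (emcee uses a fancier affine-invariant ensemble sampler, but the accept/reject logic is the same in spirit):

```python
import numpy as np

# Toy problem: recover the slope of a noisy line with a random-walk
# Metropolis sampler. Data and true slope (2.5) are made up for illustration.
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.5 * x + rng.normal(0, 1.0, x.size)

def log_post(m):
    # Gaussian likelihood with unit noise, flat prior on the slope m.
    return -0.5 * np.sum((y - m * x) ** 2)

chain, m = [], 0.0
lp = log_post(m)
for _ in range(20000):
    prop = m + rng.normal(0, 0.05)          # propose a nearby slope
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        m, lp = prop, lp_prop
    chain.append(m)

est = np.mean(chain[5000:])  # discard burn-in, average the rest
print(f"posterior mean slope = {est:.2f}")
```

The histogram of `chain` (after burn-in) approximates the posterior on the slope, which is the quantity you'd actually report with uncertainties.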

Thursday, February 18, 2016

A personal view of GW150914 + links to released data and relevant papers

A LIGO (Livingston?) prototype drawn by David
The whole world has been rejoicing in the detection of the first gravitational waves. It comes some 40 years after the 1972 MIT paper that outlined the basic design of LIGO and estimated the principal noise sources in a kilometer-scale detector. Site construction began in 1994, and by the time I started college in 2001 both LIGO sites and GEO600 were operating and taking data. I visited the Livingston detector in 2003. My brother, Mihai Bondarescu, and I were organizing the first gravitational wave course at Louisiana State University. It was sponsored by Edward Seidel and Gabrielle Allen, but it could not take place during the regular semester because we were students ourselves and had free time only during vacation. So, this course happened over the winter break. Our students were so enthusiastic that they showed up to class on Christmas Eve and New Year's Day. They were all part of the LIGO collaboration, and proudly showed us the detector. They had come to our course on vacation because they were hoping that LIGO would detect gravitational waves soon, and wanted to be prepared. 

Sensitivity: Today's Advanced LIGO vs. Last Run of Initial LIGO - 2010
It is amazing that the community persisted for so long, and succeeded. Nor is it entirely unexpected: most scientists thought that Advanced LIGO would find gravitational waves, just not in one of the engineering runs. I am particularly humbled by this in part because it happened on the first day Andrew Lundgren was acting as LIGO's detector characterization chair. So, 15 years after sitting in Kip Thorne's group meetings and watching him lecture on gravitational waves, I had close to a front-row seat to their detection.

When Gabriela González and Michael Landry confirmed that it was not a blind injection, Andy started shaking. He regained his composure enough to continue running the first conference call after the detection, and although it ran overtime, they went through the agenda as planned. The last time he shook like that was when he drove the space shuttle at the age of 10. No, it was not the real space shuttle, just a very realistic simulator built by NASA. But the gravitational wave was real.

So, what does the detection mean? It proved that black holes in the 20+ solar mass range exist in the nearby universe, and that they coalesce just as we predicted from numerical relativity simulations! This was the first detection of such objects, and it was done by measuring the emitted gravitational waves. We were able to measure the space-time shaking caused by a binary black hole coalescence. These waves traveled unobstructed at the speed of light and reached Earth 1.3 billion years later. They then stretched and squeezed the arms of the two LIGO detectors, which are 4 kilometers long, by about 1/4000 of the size of an electron. This stretching and squeezing was measured with incredible confidence! The 5.1-sigma detection entails a false alarm rate of about 1 in a few million, i.e., the experiment would have to be repeated more than 3 million times to expect the same conclusion to arise by accident.
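Just how tiny that stretching is can be checked with one line of arithmetic. The peak strain of h ~ 10⁻²¹ is the commonly quoted GW150914 number (an assumption on my part here), and I use the proton charge radius as a yardstick:

```python
# Back-of-the-envelope: absolute arm-length change for the quoted peak strain.
# h ~ 1e-21 is the commonly quoted GW150914 peak strain (assumed here).
h = 1.0e-21
arm_m = 4000.0               # LIGO arm length in meters
proton_radius_m = 0.84e-15   # proton charge radius, approximate

dL = h * arm_m
print(dL)                    # ~4e-18 m of absolute arm-length change
print(dL / proton_radius_m)  # a few thousandths of a proton radius
```

Measuring a length change that small, with confidence, is the whole miracle of the interferometer design.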

The data surrounding the event is publicly available for download.
The first 16 days of coincident data ("coincident" means both the Hanford and Livingston LIGO detectors were on and taking good-quality data) have been thoroughly analyzed. The data around the GW150914 event have been released. Note that you have to click on the "Gravitational Wave Strain Data" link to get them. They released 4096 seconds, which is a bit more than an hour around the GW150914 event. This is processed data, e.g., the lowest and highest frequencies are damped by filters to make the signal clearer.
Two Black Hole Binaries? FAR is the False Alarm Rate, which is very low. A 5.1 sigma detection is very confident.
They found one event at 5.1 sigma (29 and 36 solar masses) and another, weaker candidate (13 and 23 solar masses) at 2 sigma in 16 days of data. There are more than two months of data from Observing Run 1, which started on September 12 and ended on January 12, that have not yet been analyzed. So, stay tuned for more from the black hole world!

Papers: The LIGO collaboration (~1000 people) wrote 12 technical articles that report on this event and on the detailed investigation that followed. I originally wanted to write more about each of them, but I am letting Andy do it because he understands them better. What I want to re-emphasize is that there are two potential events. The second one is described in the "First results from the search for binary black hole coalescence with Advanced LIGO" paper, which is where the table above comes from.



LISA Pathfinder: Another success. As pointed out by Leo Stein in the previous post, LISA Pathfinder is at the Lagrange point L1, where the pulls of the Earth and the Sun balance. It reached its destination on January 22, and just yesterday (February 17) the second test mass was successfully released. The purpose of the Pathfinder mission is to test technology for LISA, which will be our first human-built, space-based gravitational wave detector. LISA will target a lower-frequency band of the gravitational wave sky, where the supermassive black holes at the centers of galaxies merge. It will also see white dwarf and neutron star binaries. It is planned to be launched by the European Space Agency in 2034. We hope that the US will re-join the mission and bring back its third arm, which was lost when the mission was descoped to fit a smaller budget. My son, Edward, drew LISA with all three arms in his and David's most recent book. They were so excited by this detection that they decided to introduce the topic of gravitational waves to children (and parents) everywhere.

Wednesday, February 17, 2016

A great week for gravitational waves and black holes

What a great week! There were at least four pieces of great news for us researchers working on gravitational waves, general relativity, and black holes. In no particular order:
  1. Announcement of the first direct detection of gravitational waves by LIGO

     In case you were away from the internet for the past week and for some reason our blog was your first stop: LIGO did it! This was the first detection and many more will come. Dave blogged about his reaction here. I don't think I need to write much about this—go read the detection paper, Emanuele's Physics Viewpoint piece, and take a look at some of the follow-up papers.
  2. LISA Pathfinder successfully released its two test masses
    The release was the mechanism that people were most worried about—would it stick? But the LPF team has demonstrated that their technology works! Also, apparently we've entered that era where our news comes from Twitter:
    Just kidding, you can go read the press release.
  3. Astro-H launches successfully

     Hitomi (the satellite's new name following its successful launch) is an x-ray calorimeter spectrometer satellite. Its spectral resolution is going to knock the socks off of older x-ray missions. This is crucial for careful measurements of accretion disks around stellar-mass black holes, which can be used for tests of general relativity (ok, it can also be used for a whole lot of other astrophysics).
    See the press release.
  4. LIGO-India granted 'in-principle' approval

    I don't really know the full politics regarding why it's in-principle, or why that has to be in quotes. Maybe somebody wiser than I am can read between the lines of their press release. Anyway, it would be fantastic for gravitational wave science to get another detector on the opposite side of the Earth! One of the most famous researchers in general relativity, Subrahmanyan Chandrasekhar, was Indian, and many excellent researchers have followed in his footsteps.
The future of gravitational wave astrophysics is looking very healthy!

Friday, February 12, 2016

GW150914: Welcome to the New Era of Gravitational Wave Astronomy

Dave's Take: WOW! What a day yesterday! The long-awaited announcement of LIGO's first detection of gravitational waves was a doozy. LIGO detected a merger in a region of parameter space nobody had expected to dominate the event rates: the merger of two $\sim 30 M_\odot$ black holes about 400 Megaparsecs away. The main discovery paper is here, with the astrophysical implications here. Most of the figures below are from the discovery paper.
Cartoon of the merger and the estimated gravitational wave strain from numerical relativity. The two ~30 solar mass black holes merged, reaching more than half the speed of light and radiating away ~3 solar masses in gravitational waves. 


Parameters of the BH-BH merger GW150914 detected by Advanced LIGO, deduced by comparison to numerical relativity models. Quoted errors are 90% confidence intervals.  
This is the culmination of years of work by the LIGO team, and we are incredibly proud to know and work with some of them (including blog contributors Andy and Jocelyn). I got a bit choked up while watching the press conference from UMD, thinking about how long Kip Thorne and Rai Weiss have been going at this. The sheer level of vision required to even begin this ludicrous scheme to detect gravitational waves is astonishing.

How Big of A Deal Is This?

Short Answer: This is probably the greatest scientific achievement so far during our lifetimes. Several non-scientist friends have asked me how this compares to the discovery of the Higgs. Personally, I think this easily tops the Higgs. Both have been capstone detections on top of well-confirmed theories. But while the Higgs, in some ways, closed the book on standard-model physics, the first gravitational-wave detection opens an entirely new chapter for astronomy. We can now start to probe physics of the most extreme masses, speeds and densities in the universe. It's truly a new frontier for astrophysics.

A Signal Detectable By Eye


The GW150914 event, as measured by the two LIGO detectors. The measured strain shown in the top panel has only been band-passed between 35-350 Hz, with known instrumental spectral lines notched out, and you can pick the signal out of the data by eye. This is ridiculously impressive. 
This is the most gorgeous figure I've ever seen. Almost nobody expected the first signal to be this strong. Everyone working on gravitational-wave astrophysics expected the first signal to be laboriously teased out of the noise by careful matched-filtering algorithms, with integration built up over hundreds of cycles. But because the masses were so large, and the event was relatively close in terms of the LIGO detection horizon, the signal is enormous. In the figure above, the top panels have only had a wide bandpass filter applied, along with notch filters to remove strong noise spectral lines (like the 60 Hz power-grid line and its harmonics). And you can see the signal by eye! This is beyond extraordinary, and beyond the wildest dreams of those of us who have been yearning for a detection.
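To give a flavor of that conditioning, here is a toy version on fully synthetic data: whiten by dividing the FFT by a (crude, flat) amplitude spectral density estimate, then band-pass to 35-350 Hz by zeroing FFT bins. The real processing uses Welch PSD estimates and proper filters (see LIGO's released notebook); the noise and the stand-in 150 Hz "signal" tone here are invented:

```python
import numpy as np

# Synthetic strain: white stand-in noise plus a buried 150 Hz tone.
fs = 4096
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
strain = 5e-21 * rng.normal(size=t.size)       # stand-in detector noise
strain += 1e-21 * np.sin(2 * np.pi * 150 * t)  # stand-in "signal" tone

f = np.fft.rfftfreq(t.size, 1 / fs)
S = np.fft.rfft(strain)
S = S / np.median(np.abs(S))                   # crude flat-ASD whitening
S[(f < 35) | (f > 350)] = 0                    # band-pass to 35-350 Hz
conditioned = np.fft.irfft(S, n=t.size)

peak_f = f[np.argmax(np.abs(np.fft.rfft(conditioned)))]
print(peak_f)  # the 150 Hz tone now dominates the band
```

Even this cartoon shows why band-limiting helps: most of the noise power lives outside the band where the signal does.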

You might have read in the discovery paper that the signal-to-noise ratio of the detection was only (!!) ~24 or so. How can this be with such a whopper of a signal? This is another consequence of the unexpectedly large masses of the black holes involved. In this merger-dominated signal, the peak of the chirp was at ~130 Hz, and the majority of the inspiral was at frequencies below the optimum LIGO band of tens to a few hundred Hz. With only about 10 cycles before the merger, there wasn't as much matched-filtering integration time over the inspiral as you'd expect for less massive events like NS-NS mergers. For those of you who are computationally inclined, LIGO has released an open IPython notebook where you can do the data analysis (or at least the matching to the numerical relativity signal) yourself! Good times!
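The matched-filtering idea itself is simple to sketch: slide a known template through the data and look at the normalized correlation. The real pipelines work in the frequency domain, weighted by the noise PSD; this time-domain cartoon with an invented chirp template and injection point is just the core idea:

```python
import numpy as np

# Build an invented chirp-like template: rising frequency with an envelope.
rng = np.random.default_rng(1)
fs = 1024
t = np.arange(0, 0.25, 1 / fs)
template = np.sin(2 * np.pi * (60 + 400 * t) * t) * np.exp(-((t - 0.2) / 0.1) ** 2)

# Inject a scaled copy of the template into white noise at a known spot.
data = rng.normal(0, 1.0, 4096)
inj_at = 2000
data[inj_at:inj_at + t.size] += 3.0 * template

tpl = template / np.sqrt(np.sum(template**2))   # unit-norm template
snr = np.correlate(data, tpl, mode="valid")     # matched-filter time series
peak = int(np.argmax(np.abs(snr)))
print(peak)  # peaks at (or within a sample or two of) the injection point
```

The point of the post's SNR discussion is visible here too: the filter output grows with how many signal cycles the template can integrate over, so a short, merger-dominated signal buys you less than a long inspiral would.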

This signal was so loud and so unexpected that I think everyone assumed it was a blind injection. They had to be reassured by the few people involved in blind injections that it wasn't, allowing them to check all the injection channels. On a personal note, after hearing more secondhand rumors about the parameter details in October, I bet a colleague a bottle of wine that it must have been an injection, because the odds of such an event according to existing population synthesis models were so low! This will be the best bottle of wine I ever lose.

I'm sure we'll each have much more to say about the science surrounding this detection in the coming days. For instance, I'd like to write a post later discussing where such a system could come from, as well as the weak detection that may have been seen by the Gamma-ray Burst Monitor on the Fermi spacecraft, which, if associated with GW150914, would be completely unexpected.

Detection Claims, Rumors and Community Insecurity

The crowd here at UMD gave a heartfelt round of applause to Kip's mention of late UMD professor Joseph Weber. Kip, being Kip, was very gracious about the pioneering work done by Weber and his early (but sadly unsuccessful) attempts at building a gravitational-wave detector using resonant bars. For the modern gravitational-wave community, Weber's ultimately discredited claims of detection were a strong source of group self-consciousness and insecurity. Those working on gravitational-wave detection (using interferometric techniques like LIGO's, or the more recently begun pulsar timing methods) have been paranoid about making detection claims they would be forced to roll back, because of the perceived damage done to the reputation of the field in the 70s when Weber's detections were discredited.

This was exacerbated by the debunked BICEP2 claim of primordial gravitational waves just two years earlier. The press-release announcement and the subsequent drawn-out identification of the dust-signal contamination served to once again undermine public trust in gravitational wave claims. The overblown coverage and YouTube stunts only made it worse.

The result of this unfortunate history of gravitational wave "detections" is that LIGO was super-paranoid about making any sort of claim before things were checked a thousand times over and the results peer-reviewed. We can only imagine how badly Lawrence Krauss's self-aggrandizing rumor-mongering was taken. I think Krauss has burned a lot of the capital he had within the scientific community with this self-promoting stunt, though he seems oblivious to it, (RUMOR HAS IT) sending the LIGO collaboration an email "apologizing" while implying they should be thanking him.


Leo's chirping Kip the Thrush for #DrawABirdDay

Thursday, February 11, 2016

The Memory of GW150914

Most of the RemarXiv contributors and the astronomical community at large have been very busy today watching press conferences (in person in some cases), reading papers, and generally overflowing with excitement over the first direct detection of gravitational waves in an event dubbed GW150914. I personally viewed the press conference remotely at the Green Bank Telescope (see the picture I snapped), which I thought was pretty cool! 


This detection is a triumph for theoretical physics, experimentalists and engineers, computational scientists, and the whole Big Science model. How perfect is it that this detection comes 100 years after general relativity was first let loose on the world? This detection provides unprecedented support for general relativity in the dynamical strong-field regime. As such, I'd like to talk about an aspect of GW150914 that general relativity predicts should be there but that LIGO didn't detect. 

From Favata (2009)

Intense bursts of gravitational waves should be accompanied by something called "memory", a phenomenon that Kip Thorne himself has been writing about since the 1980s (see here). In the case of GW150914, what that means is that through the intense chirp part of the event, a permanent component (called the memory) builds up in the gravitational waveform that causes the wave to never actually relax back to zero amplitude (see the above figure from Favata 2009). As gravitational waves expand and contract the physical distance between free-falling masses, memory permanently changes the separation between two such masses. Now, LIGO won't detect this memory because their test masses aren't technically completely free-falling; they're suspended by these awesome four-stage pendulums that nonetheless provide some small restoring force that will erase the permanent displacement that should be caused by the memory.

Nonetheless, if general relativity is right, the memory should be there. I estimate that the amplitude of the memory component should be about a tenth of the maximum amplitude of the oscillatory part of the wave, or a strain of approximately $10^{-22}$. That means that the permanent displacement of LIGO's mirrors from the memory would have been (save for the aforementioned restoring force of the pendulums) a few ten-thousandths of a proton radius. But let's think big. The Milky Way is about one hundred thousand light years in diameter and is composed of truly free-falling test masses. As GW150914 traverses the Milky Way, the memory it ought to carry will permanently change our galaxy's diameter by roughly a tenth of a meter. 
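Those two estimates are just strain times baseline. Here is the arithmetic spelled out; the memory strain of ~10⁻²² (a tenth of the oscillatory peak) is the estimate from this post, not a measured number:

```python
# Permanent displacement = memory strain x baseline length.
# h_mem ~ 1e-22 is this post's estimate (a tenth of the oscillatory peak).
h_mem = 1.0e-22
ligo_arm_m = 4.0e3
proton_radius_m = 0.84e-15        # approximate proton charge radius
milky_way_m = 1.0e5 * 9.461e15    # 100,000 light years, in meters

ligo_frac = h_mem * ligo_arm_m / proton_radius_m
galaxy_dm = h_mem * milky_way_m
print(ligo_frac)   # a few ten-thousandths of a proton radius
print(galaxy_dm)   # of order a tenth of a meter across the Galaxy
```

The contrast between those two numbers is the whole point: the same strain is hopelessly small over 4 km of arm but adds up to a macroscopic length change over a galactic baseline.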




Wednesday, February 10, 2016

Testing General Relativity with Gravitational Waves

As we all wait for tomorrow's LIGO press conference, I thought that I could write a brief post on a relevant recent arXiv paper and say a couple of things on why tomorrow's event is a big deal.

This is partially inspired by a twitter discussion with Leo after the paper came out.


Since I am not a gravitational-wave person, the discussion will not be very deep.

The recent paper, as you can see, is arXiv:1602.02453 (from Feb. 8 2016), titled "Testing general relativity using golden black-hole binaries".

What the paper calls "golden black-hole binaries" are black hole binaries with a total mass in the range of $\sim 50M_{\odot}-200M_{\odot}$, whose signal, all the way from the inspiral phase through the merger phase to the final ringdown phase, falls within the observing capabilities of ground-based observatories like LIGO. The proposal is that such signals can be used to test General Relativity (GR) in the strong gravity regime. The proposed test is a null hypothesis test, i.e., the test assumes the validity of GR and then tests for consistency. The idea is that if the hypothesis, the validity of GR, is correct, then results that rely on the hypothesis should be consistent with each other. If they are not, then the hypothesis is in trouble.

Since the test is all about consistency, it is critical to observe all the phases of the binary's evolution: the consistency check will be between the initial state, i.e., the two black holes that inspiral and merge, and the final state, i.e., the resulting object that is initially perturbed and, through the emission of gravitational waves, relaxes to a final stationary state.

More specifically, the idea is the following. From the initial phase of the signal, which covers the inspiral up to the merger, one can extract the masses $M_1$, $M_2$ and the spins $S_1$, $S_2$ of the two initial black holes. This is done by matched filtering against simulated gravitational wave signals. One of the techniques used to produce the simulated signals is the effective-one-body (EOB) approach, an analytic model for the inspiral phase of binary black holes that was initially developed by A. Buonanno and T. Damour and has been further developed by them and their collaborators. The EOB description has been very successful, giving accurate results that agree with numerical simulations up to almost the final orbit before the plunge.
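Matched filtering is, at its core, a sliding inner product of the data with a template. A toy numpy sketch (a made-up chirp in white noise, nothing like a real LIGO analysis pipeline) shows how a signal buried at the single-sample level is still recovered:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "chirp" template: a sinusoid whose frequency sweeps upward.
k = np.arange(256)
template = np.sin(2 * np.pi * (0.05 + 0.10 * k / 256) * k)

# White noise with a copy of the template injected at a known time,
# at an amplitude comparable to the noise in each individual sample.
data = rng.normal(0.0, 1.0, 4096)
t_inj = 1000
data[t_inj:t_inj + 256] += template

# Matched filter = sliding inner product of the data with the template.
snr = np.correlate(data, template, mode="valid")
t_rec = int(np.argmax(np.abs(snr)))

print("injected at", t_inj, "recovered at", t_rec)
```

The signal is invisible to the eye in the time series, but the correlation peak stands far above the noise floor, which is the whole point of the technique.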

As I was saying, from the inspiral phase one can extract the masses and the spins from the signal. From the final ringdown phase one can in principle extract from the frequency spectrum the mass $M_f$ and the spin $S_f$ of the resulting black hole, thanks to a huge amount of work that has been done on this. Of course, as Leo was pointing out yesterday, this calculation is model dependent, i.e., one has to assume that the resulting object is, for example, a GR Kerr black hole.


In any case, under this assumption one has a mass and spin for the resulting black hole. Therefore, from the signal we have a measurement of the properties of the two initial black holes and of the final black hole. Numerical simulations of mergers in GR can tell us the end state of the merger given the initial state, i.e., the two initial orbiting black holes. One can therefore predict the end-state properties and compare them to the properties measured from the ringdown. This is the null hypothesis test, since the validity of GR is assumed throughout.
The authors finally propose that even if GR cannot be tested with high precision from any single signal, the combined statistics from multiple events could place strong constraints on deviations from GR.
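A cartoon version of the consistency check might look like the following (my own toy Monte Carlo, not the statistic the paper actually uses; the ~5% radiated-energy figure is a rough rule of thumb for comparable-mass mergers, standing in for a real numerical-relativity fitting formula):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Toy "inspiral" posterior on the component masses (solar masses).
m1 = rng.normal(36.0, 4.0, N)
m2 = rng.normal(29.0, 4.0, N)

# Predicted final mass, using a rough ~5% radiated-energy rule of thumb
# in place of a real NR fitting formula.
Mf_from_inspiral = 0.95 * (m1 + m2)

# Toy "ringdown" posterior on the final mass, measured independently.
Mf_from_ringdown = rng.normal(62.0, 4.0, N)

# Null test: is the fractional disagreement consistent with zero?
frac = (Mf_from_inspiral - Mf_from_ringdown) / (0.5 * (Mf_from_inspiral + Mf_from_ringdown))
lo, hi = np.quantile(frac, [0.05, 0.95])
print(f"90% interval on the fractional final-mass disagreement: [{lo:.3f}, {hi:.3f}]")
```

If GR is right (and the waveform models are good), intervals like this should straddle zero event after event; a systematic offset accumulating over many detections would signal trouble for the hypothesis.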


And this is all I can say about this paper. Any insights from the experts would be most welcome.

I only want to end with a final note. The actual detection of gravitational waves will of course be a significant event in itself but it will not be the most important aspect of the thing. The most important aspect of the actual measurement of gravitational waves will be the amazing physics and astrophysics and cosmology that we will be able to do with them, testing our theories beyond anything we have managed to test so far (as this preprint describes for example). Radio waves, X-rays and $\gamma$-rays revolutionised astronomy, changed our view of the universe and opened our eyes to an amazing plethora of phenomena that furthered our understanding of how the cosmos works. Gravitational waves have the potential to provide us with an even more spectacular view.

Cheers.

Monday, February 8, 2016

LIGO press conference 10:30am (EST) Thursday 2/11/2016

Today, at work, most of the day has been devoted to being excited about the long-rumored, long-awaited announcement of a LIGO detection. While I'm sure we will all have much more to say about this in the weeks to come, the press conference (and coinciding paper release?) will take place this Thursday, Feb 11, 2016 at 10:30 am (EST) at the National Press Club in Washington, DC, along with a simultaneous press conference in Italy with the VIRGO collaboration.

You can find the link to the live-stream here. (It will be updated an hour before the event starts.) Many of us on the blog have been working on gravitational-wave related science for a long time, and this will likely mark the start of a new era of gravitational wave astronomy.

Very exciting times!!!


EDIT - As Leo points out in the comments below, if anybody has any questions about this, we will be here to answer them!

If you want to learn something about gravitational waves before the announcement, some of our members have done some podcast primers with the Titanium Physicists podcast:

Ep 6. How gravitational wave interferometers (LIGO) work  (with Dave and Jocelyn)

Ep 29.  The Laws of black hole merger.  (with Dave and Jocelyn)

Ep 47. The “sound” made by two black holes about to merge together (with Dave and Jocelyn)

Ep 62. The “sound” made by a black hole after it collides (with Leo and Chiara)


The Slimplectic Integrator (pt II): The Nonconservative action principle and Variational Integration

In part I of this post I discussed how N-body variational integrators can be constructed (by discretizing the action integral itself, then finding the Euler-Lagrange equations of the discretized action), and why Noether's theorem means that this will give good long-term behavior for conserved quantities like energy or angular momentum.

However, because this method relies on the existence of an action to be discretized, it only works for conservative systems. In order to apply this numerical method to systems with dissipation, drag, or other nonconservative (polygenic, path-dependent, or irreversible) forces, we need an equivalent of the action for such systems. Additionally, in order to guarantee good long-term behavior, we also need a version of Noether's theorem that applies to such an action.

The Nonconservative Action Principle


In Galley (2013) and Galley, Tsang & Stein (2014), we recently developed such a Nonconservative Action Principle, adapting Hamilton's principle to apply to nonconservative physics.

Nonconservative forces can arise when a subset of the true physical degrees of freedom of a system are ignored, often through a natural separation of scales, or by experimental design. 

Even though, micro-physically, a damped harmonic oscillator is well described by a Lagrangian or Hamiltonian if one includes all the particles and/or thermal degrees of freedom in the damper, when one integrates out those micro-physical degrees of freedom the system looks nonconservative.
Usually we are forced to "integrate out" these inaccessible degrees of freedom at the level of the equations of motion. When integrating out at the level of the (normal) action, things tend to become acausal: the evolution of the system depends on both the initial and final configurations of the system (and is thus inconsistent with the correct equation-of-motion description).

The reasons for this are technical (and may be worth another post), but are related to how Hamilton's principle is traditionally constructed as boundary value problem in time, rather than an initial value problem.
Left: Hamilton's principle requires a minimization of a path between $q_i$ at time $t_i$, and $q_f$ at time $t_f$, describing a boundary value problem in time. Right: The nonconservative action principle doubles the paths, allowing the position at $t_f$  to vary, consistent with an initial value problem. The paths are varied separately, with an equality condition at the final time, and then set equal after taking the Physical Limit. 

Instead Chad constructed the nonconservative action principle to be consistent with an initial value problem. This involves doubling the degrees of freedom and varying each separately, subject to a final-time equality condition that closes the loop (see figure above), allowing the final time values of the system to vary freely. This is related to the closed-time-path Schwinger-Keldysh formalism in non-equilibrium quantum mechanics (in fact, in an appendix here we showed how this formalism is the formal classical limit of such a quantum theory).

By moving to this extended, doubled phase-space, extremizing the action over each path, and then equating the doubled variables together (we call this taking the Physical Limit), we gain the freedom to allow nonconservative effects to be included. These nonconservative effects show up in the nonconservative potential $K(q_1, q_2, \dot{q}_1, \dot{q}_2, t)$, which couples the two paths together, and can appear in the nonconservative action. By construction, inaccessible degrees of freedom can also be integrated-out in a self-consistent way, at the level of the action, with no acausal effects.

To construct the nonconservative potential, $K$, which captures the non-hamiltonian physics, one can either build it from known dissipative equations of motion through an adjoint procedure, often equivalent to the Lagrange-d'Alembert approach, or one can build it from known microphysical actions by integrating out inaccessible degrees of freedom.
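For concreteness, here is a quick sympy check (my own toy rederivation following the conventions of Galley (2013) as I understand them, not code from the papers) that the simple choice $K = -\lambda\, q_-\, \dot{q}_+$ reproduces linear drag for a damped harmonic oscillator:

```python
import sympy as sp

t, m, k, lam = sp.symbols('t m k lambda', positive=True)
qp = sp.Function('q_plus')(t)    # "average" variable: the physical q in the Physical Limit
qm = sp.Function('q_minus')(t)   # "difference" variable: goes to zero in the Physical Limit

# Conservative Lagrangian evaluated on the two doubled paths q_{1,2} = q_+ +/- q_-/2,
# plus a nonconservative potential K coupling them (linear drag).
L1 = sp.Rational(1, 2) * m * sp.diff(qp + qm / 2, t)**2 - sp.Rational(1, 2) * k * (qp + qm / 2)**2
L2 = sp.Rational(1, 2) * m * sp.diff(qp - qm / 2, t)**2 - sp.Rational(1, 2) * k * (qp - qm / 2)**2
K = -lam * qm * sp.diff(qp, t)
Lambda = L1 - L2 + K

# Euler-Lagrange equation for the difference variable q_-, then the Physical Limit q_- -> 0.
eom = sp.diff(sp.diff(Lambda, sp.diff(qm, t)), t) - sp.diff(Lambda, qm)
eom = sp.simplify(eom.subs(qm, 0))
print(eom)   # expect the damped oscillator: m*q'' + lambda*q' + k*q
```

Varying with respect to the difference variable and then taking the Physical Limit hands back exactly $m\ddot{q} + \lambda\dot{q} + kq = 0$, the dissipative equation of motion that no single-path action can produce.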

The Nonconservative Noether's Theorem

We needed two parts to make a variational integrator work: an action principle, whose action integral can be discretized, and Noether's theorem, to exploit the symmetries of the action to correctly evolve (or preserve) the Noether currents. It turns out that we can prove a generalized Noether's theorem for the nonconservative action principle. The Noether currents are still defined by the symmetries of the conservative parts of the action, but their evolution is described by derivatives of the nonconservative potential $K$.

The nonconservative version of Noether's theorem.


The Slimplectic Integrator


By applying the variational integrator procedure (discussed in part I) to this new action principle we are able to construct general nonconservative variational integrators that are no longer limited to conservative systems (see figure below). We call these “slimplectic” integrators (as the phase space volume tends to slim down for dissipative systems). Slimplectic integrators have all the long-term stability properties of symplectic integrators but can be applied to general nonconservative systems.

This allows long-term numerical evolution of orbital dynamics problems where dissipative or velocity dependent effects, such as tides, drag, or magnetic interactions, can become dynamically important.


Top: For a standard numerical integrator, the equations of motion are discretized using well-known methods (e.g. Runge-Kutta) in order to derive discrete equations of motion that can be solved numerically. Such algorithms tend to experience long-term instability of quantities that should be physically conserved, as these discrete equations of motion are not connected to the action, and Noether’s theorem does not hold.
Middle: Variational Integrators work by first discretizing the action integral using numerical quadrature. Varying the discretized action produces already-discretized equations of motion that can be solved numerically exactly (up to round-off). If the choice of discretization preserves the symmetries of the original action in the discretized action, then Noether’s theorem guarantees the solutions of the discrete equations of motion will preserve the constants of motion (e.g. angular momentum) up to round-off. This is why variational integrators (such as symplectic integrators) have such impressive long-term conservation properties compared to other numerical methods.
Bottom: By applying the Variational Integrator procedure to the Nonconservative Action formalism, we have developed a new type of integrator that has all the long-term integration benefits of a symplectic or variational integrator, but that can be applied to nonconservative systems. By discretizing the Nonconservative Action, and applying the nonconservative variational principle, the discrete equations of motion are guaranteed by the generalized Noether’s theorem (Galley, Tsang, & Stein 2014) to evolve the Noether currents “correctly” during long-term integrations.
We have developed a simple public python-based demonstration code on github that can be used to numerically integrate any system by specifying the (nonconservative) action. Included below are links to several examples of a nonconservative system evolved using the slimplectic and traditional (Runge-Kutta) numerical schemes. The slimplectic integrators have fractional energy error that remains bounded, even as the system’s energy evolves, while the traditional methods have fractional energy errors that grow linearly with time.
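The flavor of that bounded-versus-growing energy error is easy to demonstrate even in the simplest conservative setting. Here is a toy comparison (not the slimplectic code itself) of symplectic Euler, the simplest variational integrator, against explicit Euler for a harmonic oscillator:

```python
import numpy as np

def energy(q, p):
    return 0.5 * (p**2 + q**2)   # harmonic oscillator with m = omega = 1

def run(symplectic, dt=0.01, n=10_000):
    q, p = 1.0, 0.0
    for _ in range(n):
        if symplectic:
            p = p - dt * q                   # kick with the current q...
            q = q + dt * p                   # ...then drift with the *updated* p
        else:
            q, p = q + dt * p, p - dt * q    # explicit Euler: both use old values
    return abs(energy(q, p) - 0.5) / 0.5     # fractional energy error

err_symp = run(True)
err_euler = run(False)
print(f"symplectic Euler: {err_symp:.2e}, explicit Euler: {err_euler:.2e}")
```

The only difference between the two loops is whether the updated momentum is used in the position update, yet the explicit Euler energy grows by a factor of about $e$ over this run while the symplectic error stays bounded at the percent level. The slimplectic construction extends this kind of behavior to dissipative systems.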

The slimplectic iPython notebook samples are:

A damped simple oscillator
Poynting-Robertson drag (see figures below)
Post-Newtonian gravitational-wave inspiral of neutron star binaries

In this example we model the effect of Poynting-Robertson drag on a dust grain as it orbits the sun. Poynting-Robertson drag occurs because the dust grain absorbs incoming radiation from the sun, and re-emits in the frame co-moving with the dust particle. This leads to anisotropic radiation in the observer’s frame and a net dissipative drag force. In this case the dust particle begins with an eccentricity of 0.2 and a semi-major axis of a = 1 AU.


The RK methods have poor long-term stability, with energy error growing linearly with time, while the energy error remains bounded for the slimplectic methods. However, there is significant precession in the 2nd order slimplectic method, due to the lack of a conserved quantity that prevents precession. The phase errors of the slimplectic methods scale linearly with time, while the phase errors of the RK methods scale quadratically. The growth at the end of the evolution for the 4th order slimplectic method is likely due to a build-up of zeroing/round-off error due to the simple implementation.






Sunday, February 7, 2016

Prescriptions for measuring and transporting local angular momenta in general relativity

I, along with my collaborators David Nichols, Justin Vines, and Éanna Flanagan, posted our latest preprint, Prescriptions for measuring and transporting local angular momenta in general relativity, on the arXiv tonight. After I tweeted the link, George informed me that I was now obligated to blog about it:

Ok, so here goes a little self-promotion!

This paper is a refinement on David and Éanna's earlier paper. The topic at hand is angular momentum in general relativity.

So, how is angular momentum defined in GR? One quantity that everyone agrees upon is the ADM angular momentum, which is precisely the Noether charge associated with the rotational symmetry of asymptotically flat spacetime. Masha recently touched upon the ADM mass in this post.

Angular momentum is related to a symmetry, so of course we must care about it. From Emmy Noether's theorem, we know that this total angular momentum is conserved. In a sense we can say that the ADM angular momentum (and mass) "live at $$i^0$$," the point at spacelike infinity.


In this paper, we were studying a different type of angular momentum. Instead of living at $$i^0$$, we are interested in a quantity near $$\mathcal{I}^+$$ (pronounced "scri plus"), the region of future null infinity. Here we're talking in the language of conformally compactifying an asymptotically flat spacetime, so we can squeeze the whole thing into a Penrose diagram like this one. Every spacetime which is asymptotically flat has the same structure "far away" from all the curvy bits of spacetime.

Why do we want to study angular momentum near scri? One reason: it's very natural to think of angular momentum decreasing when e.g. gravitational radiation carries it away from a system—like a black hole binary, allowing the two black holes to eventually merge! But the ADM angular momentum is conserved, so what gives? To understand this, you have to recognize that the ADM quantity comes from integrating along a "Cauchy surface," a hypersurface that's everywhere spacelike and makes it out to $$i^0$$, like any of the $$\Sigma_{1,2,3}$$ in the Penrose diagram.

Spacelike infinity ($$i^0$$) is not the right setting to discuss things like how much angular momentum is carried away by gravitational waves, because the ADM angular momentum can't change! Instead of talking about quantities that "live at" $$i^0$$, we want to talk about quantities that "live at" $$\mathcal{I}^+$$.

Future null infinity ($$\mathcal{I}^+$$) is much more complicated than $$i^0$$. Spacelike infinity is kind of "rigid," while $$\mathcal{I}^+$$ is comparatively "floppy," though less floppy than the interior (bulk) of the spacetime. These statements can be made mathematically precise in terms of the symmetry groups of these spaces. The symmetry group of $$\mathcal{I}^+$$ is the famous BMS group, named for Bondi, van der Burg, Metzner, and Sachs, who studied its structure in the 60s. The BMS group has enjoyed a renewed interest in recent years, when it was discovered that BMS symmetries, Weinbergesque soft theorems, and long-ranged "memories" are different faces of the same underlying physics.

So, if you want to discuss angular momentum at $$\mathcal{I}^+$$, the BMS group is going to tell you the mathematical rules you have to follow (technically: quantities must "live in" representations of the group). For a while, the literature referred to something called the BMS angular momentum "ambiguity." However, ambiguity is not really the right word. Angular momentum is not ambiguous, it just transforms in a much more complicated way (under BMS transformations) than angular momentum does in flat (Minkowski) spacetime.

Ok, enough with the longwinded background. What did we actually do in our paper?

First of all, we worked in a simplified setting: when the region of interest of $$\mathcal{I}^+$$ is approximately stationary. In this setting, angular momentum has a much simpler transformation law when you move from point to point than in the fully dynamical setting.

Let's consider what happens when you expand around stationary $$\mathcal{I}^+$$ in powers of $$1/r$$. To leading order, the only property of spacetime is the Bondi mass. When spacetime is stationary, this is constant, and every stationary asymptotically flat vacuum spacetime is identical to Schwarzschild expanded to this order. When you go to next-to-leading order, you learn one additional property of the spacetime, which is like an angular momentum of the spacetime. Here is a crucial fact we make use of: every stationary asymptotically flat vacuum spacetime is identical to the Kerr spacetime expanded to this order.

Now, in the Kerr spacetime, it just so happens that there's a pair of tensors $$(\xi^a,{}^{*}\!f^{ab})$$ which satisfy the pair of differential equations:
\[
\begin{aligned}
\nabla_a \xi^b &= -\frac{1}{4} R^{b}{}_{acd}\, {}^{*}\!f^{cd} \, , \\
\nabla_a {}^{*}\!f^{bc} &= -2 \xi^{[b} \delta^{c]}{}_a \, .
\end{aligned}
\]
We can interpret these equations as a rule for how to transport a linear momentum $$\xi$$ and an angular momentum $${}^{*}\!f$$ from point to point in a consistent fashion (the quantity $${}^{*}\!f$$ is not really an angular momentum, but that's another topic).
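In flat spacetime the curvature term vanishes and the transport equations can be solved in closed form: $\xi$ stays constant and $^{*}\!f$ picks up the familiar origin-dependence of an angular momentum. A quick numerical sanity check of this (my notation, not anything from the paper's calculations):

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.normal(size=4)                 # a constant "linear momentum" vector
A = rng.normal(size=(4, 4))
f0 = A - A.T                            # an antisymmetric "angular momentum" tensor

def f(x):
    # Flat-space solution of d_a *f^{bc} = -2 xi^[b delta^c]_a:
    # the usual shift of angular momentum under a change of origin.
    return f0 - (np.outer(xi, x) - np.outer(x, xi))

# Check the transport equation by finite differences (f is linear in x,
# so central differences are exact up to round-off).
x = rng.normal(size=4)
eps = 1e-6
ok = True
for a in range(4):
    e_a = np.zeros(4); e_a[a] = 1.0
    lhs = (f(x + eps * e_a) - f(x - eps * e_a)) / (2 * eps)
    rhs = -(np.outer(xi, e_a) - np.outer(e_a, xi))
    ok &= np.allclose(lhs, rhs)
print("transport equation satisfied:", bool(ok))
```

This is just the statement that translating the point about which you measure angular momentum shifts it by the usual $x \wedge P$ term; the curvature term in the full equations generalizes this consistently to curved spacetime.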

So there's hope that a bunch of observers hanging out in spacetime can locally measure what they think are the linear momentum $$P^a$$ and angular momentum $$J^{ab}$$ of the spacetime about them. And moreover, if Alice measures her quantities $$(P,J)_A$$, and Bob measures his quantities $$(P,J)_B$$, and then Bob drags his quantities over to Alice in a certain well-defined way, then their two quantities will agree, up to the required accuracy!

David Nichols has laid out the 8-step procedure for locally measuring $$(P,J)$$ up to the required accuracy, which is really impressive to me.

My contribution to the calculation was the fiber bundle approach, which really makes it quite easy to compute the holonomy of this transport law, and makes clearer what are the necessary and sufficient conditions for the existence of consistent solutions to the transport differential equations. The fiber bundle idea is represented pictorially in the first image at the top of this post.

Read the paper for all the gory details! It's only 9 pages long. Satisfaction guaranteed or your money back.

Thursday, February 4, 2016

Quasi-Local Mass in General Relativity (for Numerical Relativists)

How would we, in general relativity, define conserved quantities such as energy for a finitely extended region of spacetime? We know how to handle mass and energy asymptotically, of course, but there's currently no agreed-upon notion of quasi-local mass/energy (QLM/E)*.

Such a quantity would be nice to have because statements in GR about gravitational collapse, such as the hoop conjecture, are concerned with mass content in a finite region of space. Likewise, in numerical relativity, where we work in a finite domain, we could compute such a quantity throughout a simulation for a highly dynamical binary system, and extract new physical information.

We thus look at Po-Ning Chen and Mu-Tao Wang's recent paper, which in part reviews several quasi-local mass quantities, to see which of these would be useful for numerical calculations.

First, we would like our definition of QLM/E to satisfy the following properties

  1. Rigidity: QLM = 0 in Minkowski space
  2. Positivity: QLE ≥ 0 under the dominant energy condition
  3. Asymptotics: The QLM should reduce to the definitions of asymptotic (ADM, Bondi masses) and local masses (Bel-Robinson tensor, matter density), in the large sphere and small sphere limits, respectively. 
  4. Monotonicity: The QLM should increase when we would expect it to increase (though it doesn't have to be strictly additive because of negative gravitational binding energy). 
These are mathematical properties that we would like to satisfy; an additional property for the  numerical relativists among us could be 

     0. Nice to compute in simulations

Presently, we don't have a quasi-local mass definition that is proven to satisfy all of the mathematical properties, but let's walk through the definitions that we do have, and talk about the (dis)advantages of each.

***

Hawking Mass

\[m_H(\Sigma) = \sqrt{\dfrac{|\Sigma|}{16\pi}}\left(1 - \dfrac{1}{16\pi} \int_{\Sigma} |H|^2 d\Sigma \right) \]
where $$\Sigma$$ is a spacelike 2-surface (the boundary of our region) and $$H$$ is the mean curvature (trace of the extrinsic curvature) vector of $$\Sigma$$ in the spacetime.

This has some proven monotonicity (for time-symmetric initial data (ID)) and asymptotics (for asymptotically flat ID, it approaches the ADM mass of the ID), but is proven to lack positivity under certain conditions, as well as rigidity for $\mathbb R^3$. As for ease of computation, it's composed of quantities we have access to throughout a simulation, and so we could feasibly compute it. However, it's "too negative", as Wang put it in a recent talk.
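As a sanity check on feasibility, the Schwarzschild case can be done by hand (or by sympy). For a round sphere of areal radius $r$ in the $t=\mathrm{const}$ slice, $|H|^2 = (4/r^2)(1 - 2M/r)$, and the Hawking mass comes out to exactly $M$ (a sketch under those symmetric assumptions):

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)

# Round sphere of areal radius r in the t = const Schwarzschild slice.
area = 4 * sp.pi * r**2
H2 = (4 / r**2) * (1 - 2 * M / r)   # |H|^2, constant over the sphere

# Hawking mass; since |H|^2 is constant, the integral is just H2 * area.
m_H = sp.sqrt(area / (16 * sp.pi)) * (1 - H2 * area / (16 * sp.pi))
print(sp.simplify(m_H))   # expect: M, independent of r
```

Getting $M$ for every $r$ (all the way down to the horizon) is exactly the behavior you'd hope for from a quasi-local mass in this simplest of spacetimes.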

Bartnik Mass

It's defined in terms of an infimum among all admissible extensions - not easy to use for numerical computations.

Brown-York Mass

\[m_{BY}(\Sigma) = \dfrac{1}{8\pi} \int_{\Sigma} (H_0 - H) d\Sigma \]

where $$\Sigma$$ bounds a spacelike hypersurface $$\Omega$$, and $$H_0$$ is the mean curvature of the unique isometric embedding of $$\Sigma$$ into $$\mathbb R^3$$ and $$H$$ is the mean curvature of $$\Sigma$$ in $$\Omega$$.

This satisfies positivity in certain cases, and some asymptotics, but it's also gauge dependent. One can, however, replace $\Omega$ with the entire spacetime to obtain a gauge independent quantity (the Liu-Yau mass). This, however, lacks the rigidity property. It's also "too positive", as Wang put it in a recent talk. As for ease of computation, since we work with spacelike hypersurfaces in NR, this would be a feasible, if not natural, mass to compute; the isometric embedding would involve some numerical use of Newton's method.
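The same Schwarzschild sphere makes the "too positive" remark concrete: with $H_0 = 2/r$ for the flat $\mathbb{R}^3$ embedding, $m_{BY} = r(1 - \sqrt{1-2M/r})$, which tends to the ADM mass $M$ at large $r$ but equals $2M$ at the horizon (again, just a sketch of the spherically symmetric case):

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)

H0 = 2 / r                              # mean curvature of the sphere embedded in flat R^3
H = (2 / r) * sp.sqrt(1 - 2 * M / r)    # mean curvature in the Schwarzschild slice
m_BY = sp.simplify((H0 - H) * 4 * sp.pi * r**2 / (8 * sp.pi))   # constant integrand

print(m_BY)                             # r*(1 - sqrt(1 - 2M/r))
print(sp.limit(m_BY, r, sp.oo))         # large-sphere limit: the ADM mass M
print(sp.simplify(m_BY.subs(r, 2 * M))) # at the horizon: 2M, "too positive"
```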

Wang-Yau Energy

Warning, notation soup up ahead. For the Wang-Yau Energy, we compute the minimum over all choices of $$(X, T_0)$$ of

\[E(\Sigma, X, T_0) = \int_{\hat \Sigma} \hat H d \hat \Sigma - \int_\Sigma \left(\sqrt{1 + |\nabla \tau |^2}  \cosh \theta |H| - \nabla \tau \cdot \nabla \theta - \alpha_H (\nabla \tau) \right) d \Sigma \]

where $$\Sigma$$ is a spacelike 2-surface in the spacetime with spacelike mean curvature vector $$H$$, $$\sigma$$ is the induced 2-metric on $$\Sigma$$, $$\theta$$ is a function of $$|H|$$ and $$\tau$$, $$\nabla$$ and $$\Delta$$ are the gradient and Laplacian wrt $$\sigma$$, and $$\alpha_H$$ is the connection one-form of the normal bundle wrt $$H$$. For an isometric embedding $$X: \Sigma \to \mathbb R^{3,1}$$ and a future timelike unit vector $$T_0$$, we can consider the projected embedding $$\hat X$$ into the orthogonal complement of $$T_0$$, from which we get the hatted quantities. Finally, $$\tau = -X \cdot T_0$$ is the time function.

In order to find the isometric embedding that minimizes the quasi-local energy, we would have to solve a nonlinear elliptic equation - the Euler-Lagrange equation for the critical point of the energy as a function of $$\tau$$ (given in the paper, not reproduced here).

We do, however, have the numerical technology for this! And, unless I'm mistaken, we have access to all of these quantities in our simulations - though we have one more non-linear PDE to solve. Moreover, this definition of quasi-local mass satisfies the rigidity property, and certain positivity properties. 

***

Thus, we have 3 quantities for the quasi-local mass that we can use for new physics computations. It would be cool to look at all of these (first for Schwarzschild and Kerr) for BH binary simulations :)


*Note that because of the equivalence principle and lack of symmetry in generic spacetimes, we can't formulate mass as a volume integral over mass density, as we would in Newtonian gravity. Note also that we can't take approaches meant for an isolated system, either - asymptotic masses are defined in regions where gravity is weak on the boundary, but for a quasi-local mass, gravity could be strong on the boundary.

Tuesday, February 2, 2016

Gravitational Waves from a Supermassive Black Hole Binary in M87?

This paper showed up on the arXiv last week: Gravitational Waves from a Supermassive Black Hole Binary in M87 (Yonemaru et al.). The authors posit that the supermassive black hole (SMBH) hosting AGN activity near the core of the giant elliptical galaxy M87 (I'll call this the primary black hole) may be in a binary with another large black hole. If this is the case, since M87 is less than 20 megaparsecs away, it may be an enticing source of gravitational waves detectable by pulsar timing arrays (PTAs).

The argument that the primary black hole is in a binary is based on the AGN activity originating approximately a parsec from the centroid of the galaxy. It is feasible that this offset is caused by the perturbative influence of other massive black holes in the galactic core, but it could also be because of a kick from a previous black hole merger or a rocket-like thrust from an asymmetric relativistic jet.

If you suppose that the offset is because the primary black hole is in a binary, it has to be a pretty wide, long-period orbit to account for the scale of the offset, something like a 2000 year orbital period. This would lead to a 1000 year gravitational wave period, which is much, much longer than the 10-year spans of modern PTA data sets. With this in mind, the authors make the perfectly reasonable assumption that any gravitational waves from this hypothetical system are well approximated as linear in time over the observation. They then go about estimating the scale of the time derivative of the gravitational waves ($\dot{h}$) from this system for a variety of semi-plausible companion parameters (see the figure below).

Figure 3 from Yonemaru et al.

What would a linearly changing gravitational wave look like in a PTA experiment? For every pulsar you're observing, the gravitational wave would cause the apparent rotational frequency of the pulsar to drift linearly in time. The rate of the drift would be proportional to $\dot{h}$ and some projection factors that vary from pulsar to pulsar. If this drift were to go on for the entire 10-year span of the PTA experiment, the rotational frequency of some pulsars could change by as much as a part in $10^{16}$ or $10^{15}$ (see the right axis of the figure). The magnitude of this change is right in the ballpark of what PTA papers typically say are needed to make a detection. Sounds good.

But there is a major flaw with all of this. Pulsar rotational frequencies slowly change all by themselves, without the intervention of gravitational waves. In fact, shortly after the discovery of pulsars, Tommy Gold predicted that the rotational frequency of pulsars should decrease slowly in time. It was an important prediction that, when confirmed, helped support the idea that pulsars were rapidly rotating, highly magnetized compact objects. Unfortunately, there is no way to assess the rate of change of a pulsar's rotational frequency a priori, so to do high-precision pulsar timing, we fit out linear (and sometimes higher order) trends in the rotational frequency. There is no way for the linear-in-time gravitational waves these authors discuss to ever be differentiated from perfectly vanilla pulsar behavior. It's a really basic fact of pulsar timing. As a pretty solid rule of thumb, never trust a claim that PTAs can detect gravitational waves with frequencies well below the inverse of the PTA's data span.
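The degeneracy is easy to demonstrate with a toy fit (toy units and amplitudes, not a real timing pipeline): a linear-in-time frequency drift contributes a purely quadratic timing phase, which the standard fit for phase offset, $\nu$, and $\dot\nu$ absorbs exactly, while a GW with period shorter than the data span survives the fit:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 500)          # 10 "years" of noiseless timing data

# A linear-in-time GW makes the apparent spin frequency drift linearly,
# i.e. it adds a purely quadratic term to the timing phase (toy units).
phase_linear_gw = 0.5 * 1e-9 * t**2

# A GW with a 4-year period adds an oscillatory phase term instead.
phase_periodic_gw = 1e-9 * np.sin(2 * np.pi * t / 4.0)

def after_timing_fit(phase):
    """Fit and remove the standard timing model: offset + nu*t + 0.5*nudot*t^2."""
    coeffs = np.polyfit(t, phase, 2)
    return phase - np.polyval(coeffs, t)

res_linear = np.max(np.abs(after_timing_fit(phase_linear_gw)))
res_periodic = np.max(np.abs(after_timing_fit(phase_periodic_gw)))
print(f"residual after fit: linear drift {res_linear:.1e}, periodic GW {res_periodic:.1e}")
```

The quadratic fit swallows the linear drift down to round-off, which is exactly why a linear-in-time GW is indistinguishable from ordinary spin-down, while the oscillatory signal leaves residuals of the same order as its amplitude.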




Monday, February 1, 2016

That's no moon

Again this is a little off topic but I think that there are some nice cool elements here and a couple of points to be made about common misconceptions.

I just came across this tweet with this really amazing animated gif,


One can find the gif itself at this link. The pictures are taken by the NASA satellite DSCOVR (Deep Space Climate Observatory), which observes the Earth from the L1 point, one of the Lagrange equilibrium points of the circular restricted three-body problem for the Sun-Earth system. This three-body problem consists of two massive main bodies circling around their common centre of mass, with one of the two being more massive, plus a third, much smaller body that doesn't affect the motion of the two larger ones. One can define an effective potential for the motion of the smaller body in the frame that rotates with the two larger bodies, and from that effective potential identify equilibrium points. The following figure shows contours of that potential for different values of the energy of the third body.


The three crossing points along the horizontal axis in the
middle are from left to right the L3, L1, and L2 points.


In the case of the Sun-Earth system, the Sun would be at the centre of the left set of (larger) circles, while the Earth would be at the centre of the right (smaller) set of circles. You could also imagine one of the inner circles on the right representing the orbit of the Moon (of course the picture is nowhere near the right scale). What we see in the gif, therefore, is what the satellite sees from that crossing point in the middle between the Sun and the Earth.
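As an aside, the L1 distance itself is a quick root-find. A pure-Python sketch for the Sun-Earth case (textbook parameter values; measuring distances from the Sun rather than the barycentre, which is a negligible approximation here) recovers the familiar ~1.5 million km where DSCOVR sits:

```python
G = 6.674e-11            # gravitational constant, SI units
M_sun = 1.989e30         # kg
M_earth = 5.972e24       # kg
R = 1.496e11             # Sun-Earth distance, m

omega2 = G * (M_sun + M_earth) / R**3   # orbital angular velocity squared

def force_balance(d):
    """Net radial acceleration in the co-rotating frame at distance d
    from Earth toward the Sun: solar pull - Earth pull - centrifugal term."""
    return G * M_sun / (R - d)**2 - G * M_earth / d**2 - omega2 * (R - d)

# Bisect between "very close to Earth" (Earth's pull dominates, negative)
# and "10% of the way to the Sun" (solar pull dominates, positive).
lo, hi = 1e6, 0.1 * R
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if force_balance(mid) < 0:
        lo = mid
    else:
        hi = mid

print(f"L1 is ~{lo / 1e9:.2f} million km from Earth")
```

The answer is close to the Hill-radius estimate $R\,(M_\oplus/3M_\odot)^{1/3}$, which is a handy sanity check on the bracketing.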

There are some really interesting things about this sequence of photographs. One is pointed out and answered in the comments of the original tweet.



It is crazy, but it really does look sort of fake. It is true that we are not used to images without an atmosphere, and the sharpness seems unreal (the edge of the Earth looks much fuzzier). I wonder if this is really the reason why it looks so unusual (I am no expert so I can't tell).

The other thing is that the moon that we are seeing in the pictures is not the Moon that we are used to, as someone else points out (probably trolling)



This is the view of the Moon familiar to everyone on Earth. Of course, someone on the far side would never see it, since the Moon is tidally locked in its orbit around the Earth and rotates around its axis with the same frequency with which it orbits the Earth, therefore presenting to us only one face. This means that the DSCOVR satellite, from its vantage point, will always show us in such pictures the side of the Moon we never see from here. And it will always be lit like this, since the Sun is right behind the satellite providing full illumination (there is no dark side of the Moon, except for this).

Finally, let's end with a meme



Which of course is a Star Wars reference...



Cheers