Sunday, February 7, 2016

Prescriptions for measuring and transporting local angular momenta in general relativity

I, along with my collaborators David Nichols, Justin Vines, and Éanna Flanagan, posted our latest preprint, Prescriptions for measuring and transporting local angular momenta in general relativity, on the arXiv tonight. After I tweeted the link, George informed me that I was now obligated to blog about it:

Ok, so here goes a little self-promotion!

This paper is a refinement on David and Éanna's earlier paper. The topic at hand is angular momentum in general relativity.

So, how is angular momentum defined in GR? One quantity that everyone agrees upon is the ADM angular momentum, which is precisely the Noether charge associated with the rotational symmetry of asymptotically flat spacetime. Masha recently touched upon the ADM mass in this post.

Angular momentum is associated with a symmetry (rotations), so of course we care about it. By Emmy Noether's theorem, this total angular momentum is conserved. In a sense we can say that the ADM angular momentum (and mass) "live at $$i^0$$," the point at spacelike infinity.


In this paper, we study a different type of angular momentum. Instead of living at $$i^0$$, the quantity we are interested in lives near $$\mathcal{I}^+$$ (pronounced "scri plus"), the region of future null infinity. Here we're talking in the language of conformally compactifying an asymptotically flat spacetime, so we can squeeze the whole thing into a Penrose diagram like this one. Every asymptotically flat spacetime has the same structure "far away" from all the curvy bits of spacetime.

Why do we want to study angular momentum near scri? One reason: it's very natural to think of angular momentum decreasing when e.g. gravitational radiation carries it away from a system, like a black hole binary, allowing the two black holes to eventually merge! But the ADM angular momentum is conserved, so what gives? To understand this, you have to recognize that the ADM quantity comes from integrating along a "Cauchy surface," a hypersurface that's everywhere spacelike and makes it out to $$i^0$$, like any of the $$\Sigma_{1,2,3}$$ in the Penrose diagram.

Spacelike infinity ($$i^0$$) is not the right setting to discuss things like how much angular momentum is carried away by gravitational waves, because the ADM angular momentum can't change! Instead of talking about quantities that "live at" $$i^0$$, we want to talk about quantities that "live at" $$\mathcal{I}^+$$.

Future null infinity ($$\mathcal{I}^+$$) is much more complicated than $$i^0$$. Spacelike infinity is kind of "rigid," while $$\mathcal{I}^+$$ is comparatively "floppy," though less floppy than the interior (bulk) of the spacetime. These statements can be made mathematically precise in terms of the symmetry groups of these spaces. The symmetry group of $$\mathcal{I}^+$$ is the famous BMS group, named for Bondi, van der Burg, Metzner, and Sachs, who studied its structure in the 60s. The BMS group has enjoyed a renewed interest in recent years, when it was discovered that BMS symmetries, Weinbergesque soft theorems, and long-ranged "memories" are different faces of the same underlying physics.

So, if you want to discuss angular momentum at $$\mathcal{I}^+$$, the BMS group is going to tell you the mathematical rules you have to follow (technically: quantities must "live in" representations of the group). For a while, the literature referred to something called the BMS angular momentum "ambiguity." However, ambiguity is not really the right word. Angular momentum is not ambiguous, it just transforms in a much more complicated way (under BMS transformations) than angular momentum does in flat (Minkowski) spacetime.

Ok, enough with the longwinded background. What did we actually do in our paper?

First of all, we worked in a simplified setting: when the region of interest of $$\mathcal{I}^+$$ is approximately stationary. In this setting, angular momentum has a much simpler transformation law when you move from point to point than in the fully dynamical setting.

Let's consider what happens when you expand around stationary $$\mathcal{I}^+$$ in powers of $$1/r$$. To leading order, the only property of the spacetime you learn is the Bondi mass. When spacetime is stationary, this is constant, and every stationary asymptotically flat vacuum spacetime is identical to Schwarzschild expanded to this order. When you go to next-to-leading order, you learn one additional property of the spacetime, which is like an angular momentum of the spacetime. Here is a crucial fact we make use of: every stationary asymptotically flat vacuum spacetime is identical to the Kerr spacetime expanded to this order.

Now, in the Kerr spacetime, it just so happens that there's a pair of tensors $$(\xi^a,{}^{*}\!f^{ab})$$ which satisfy the pair of differential equations:
\[
\begin{aligned}
\nabla_a \xi^b &=  -\frac{1}{4} R^{b}{}_{acd} \, {}^{*}\!f^{cd} \, , \\
\nabla_a {}^{*}\!f^{bc} &= -2 \xi^{[b} {\delta^{c]}}_a \, .
\end{aligned}
\]
We can interpret these equations as a rule for how to transport a linear momentum $$\xi$$ and an angular momentum $${}^{*}\!f$$ from point to point in a consistent fashion (the quantity $${}^{*}\!f$$ is not really an angular momentum, but that's another topic).
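To make the transport law concrete, here is a minimal numerical sketch (my own illustration, not code from the paper). It works in flat spacetime in Cartesian-like coordinates, so the Riemann tensor vanishes and covariant derivatives reduce to ordinary ones; the curve, tangent, and initial data are all made up. The check at the end is that transporting along a straight segment leaves the momentum unchanged while the "angular momentum" picks up a term $$-(\xi^b \Delta x^c - \xi^c \Delta x^b)$$, just as ordinary angular momentum does when you shift the reference point.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch: transport (xi, *f) along a straight curve in flat spacetime
# (Riemann = 0, Cartesian coordinates, so covariant derivatives are ordinary
# derivatives).  All initial data below are made up for illustration.

def transport_rhs(lam, y, u):
    """RHS of the transport equations contracted with the tangent u^a:
       d xi^b / dlam   = -(1/4) u^a R^b_{acd} *f^{cd}   -> 0 in flat space
       d *f^{bc} / dlam = -(xi^b u^c - xi^c u^b)
    """
    xi = y[:4]
    dxi = np.zeros(4)                                  # Riemann vanishes here
    df = -(np.outer(xi, u) - np.outer(u, xi))          # antisymmetric in (b,c)
    return np.concatenate([dxi, df.ravel()])

# made-up initial "momentum" and antisymmetric "angular momentum" tensor
xi0 = np.array([1.0, 0.1, 0.0, 0.0])
f0 = np.zeros((4, 4))
f0[1, 2], f0[2, 1] = 0.7, -0.7

u = np.array([1.0, 0.3, 0.0, 0.0])       # constant tangent of a straight curve
lam_max = 2.0

sol = solve_ivp(transport_rhs, [0.0, lam_max],
                np.concatenate([xi0, f0.ravel()]),
                args=(u,), rtol=1e-10, atol=1e-12)

xi_end = sol.y[:4, -1]
f_end = sol.y[4:, -1].reshape(4, 4)

# Analytic expectation in flat space: xi unchanged,
# *f^{bc} -> *f^{bc} - (xi^b dx^c - xi^c dx^b)  with  dx = u * lam_max
dx = u * lam_max
f_expected = f0 - (np.outer(xi0, dx) - np.outer(dx, xi0))
print(np.allclose(xi_end, xi0), np.allclose(f_end, f_expected))
```

In a curved spacetime you would also have to supply the Riemann tensor (and connection coefficients) along the curve, and the holonomy of this transport around closed loops is exactly what the fiber bundle machinery mentioned below organizes.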

So there's hope that a bunch of observers hanging out in spacetime can locally measure what they think are the linear momentum $$P^a$$ and angular momentum $$J^{ab}$$ of the spacetime about them. Moreover, if Alice measures her quantities $$(P,J)_A$$, and Bob measures his quantities $$(P,J)_B$$, and then Bob drags his quantities over to Alice in a certain well-defined way, then their two quantities will agree, up to the required accuracy!

David Nichols has laid out the 8-step procedure for locally measuring $$(P,J)$$ up to the required accuracy, which is really impressive to me.

My contribution to the calculation was the fiber bundle approach, which really makes it quite easy to compute the holonomy of this transport law, and makes clearer what the necessary and sufficient conditions are for the existence of consistent solutions to the transport differential equations. The fiber bundle idea is represented pictorially in the first image at the top of this post.

Read the paper for all the gory details! It's only 9 pages long. Satisfaction guaranteed or your money back.

Thursday, February 4, 2016

Quasi-Local Mass in General Relativity (for Numerical Relativists)

How would we, in general relativity, define conserved quantities such as energy for a finitely extended region of spacetime? We know how to handle mass and energy asymptotically, of course, but there's currently no agreed-upon notion of quasi-local mass/energy (QLM/E).*

Such a quantity would be nice to have because statements in GR about gravitational collapse, such as the hoop conjecture, are concerned with mass content in a finite region of space. Likewise, in numerical relativity, where we work in a finite domain, we could compute such a quantity throughout a simulation for a highly dynamical binary system, and extract new physical information.

We thus look at Po-Ning Chen and Mu-Tao Wang's recent paper, which in part reviews several quasi-local mass quantities, and see which of these would be useful for numerical calculations.

First, we would like our definition of QLM/E to satisfy the following properties

  1. Rigidity: QLM = 0 in Minkowski space
  2. Positivity: QLE ≥ 0 under the dominant energy condition
  3. Asymptotics: The QLM should reduce to the definitions of asymptotic (ADM, Bondi masses) and local masses (Bel-Robinson tensor, matter density), in the large sphere and small sphere limits, respectively. 
  4. Monotonicity: The QLM should increase when we would expect it to increase (though it doesn't have to be strictly additive because of negative gravitational binding energy). 
These are mathematical properties that we would like a QLM to satisfy; an additional property for the numerical relativists among us could be

     0. Nice to compute in simulations

Presently, we don't have a quasi-local mass definition that is proven to satisfy all of the mathematical properties, but let's walk through the definitions that we do have, and talk about the (dis)advantages of each.

***

Hawking Mass

\[m_H(\Sigma) = \sqrt{\dfrac{|\Sigma|}{16\pi}}\left(1 - \dfrac{1}{16\pi} \int_{\Sigma} |H|^2 d\Sigma \right) \]
where $$\Sigma$$ is a spacelike 2-surface (the boundary of our region) and $$H$$ is the mean curvature (trace of the extrinsic curvature) vector of $$\Sigma$$ in the spacetime.

This has some proven monotonicity (for time-symmetric initial data (ID)) and asymptotics (for asymptotically flat ID, it approaches the ADM mass of the ID), but it is proven to lack positivity under certain conditions, as well as rigidity for $$\mathbb{R}^3$$. As for ease of computation, it's composed of quantities we have access to throughout a simulation, and so we could feasibly compute it. However, it's too negative, as Wang put it in a recent talk.
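To back up the "feasible to compute" claim, here is a minimal sketch of the Hawking mass as a surface quadrature (my own toy example, not from the paper). It assumes you can sample $$|H|^2$$ and the induced area element on a lat-long grid covering the 2-surface. The test uses the standard Schwarzschild values on a coordinate sphere of areal radius $$r$$ in the static slice, $$|H|^2 = (4/r^2)(1 - 2M/r)$$, for which the Hawking mass should come out to exactly $$M$$.

```python
import numpy as np

def hawking_mass(H2, dA):
    """Hawking mass from samples of |H|^2 and the area element dA
    (both at the same quadrature points; dA already includes the
    quadrature weights)."""
    area = np.sum(dA)
    integral = np.sum(H2 * dA)
    return np.sqrt(area / (16 * np.pi)) * (1 - integral / (16 * np.pi))

# Test: coordinate sphere of areal radius r in the Schwarzschild t=const slice,
# where |H|^2 = (4/r^2)(1 - 2M/r) is constant; the result should be M.
M, r = 1.0, 5.0
ntheta, nphi = 64, 128
theta = (np.arange(ntheta) + 0.5) * np.pi / ntheta
phi = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi
th, ph = np.meshgrid(theta, phi, indexing="ij")

dA = r**2 * np.sin(th) * (np.pi / ntheta) * (2 * np.pi / nphi)  # area element * weights
H2 = np.full_like(dA, (4 / r**2) * (1 - 2 * M / r))

print(hawking_mass(H2, dA))   # ~1.0, i.e. the Schwarzschild mass M
```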

Bartnik Mass

It's defined in terms of an infimum of the ADM mass over all admissible extensions - not easy to use for numerical computations.

Brown-York Mass

\[m_{BY}(\Sigma) = \dfrac{1}{8\pi} \int_{\Sigma} (H_0 - H) d\Sigma \]

where $$\Sigma$$ bounds a spacelike hypersurface $$\Omega$$, and $$H_0$$ is the mean curvature of the unique isometric embedding of $$\Sigma$$ into $$\mathbb R^3$$ and $$H$$ is the mean curvature of $$\Sigma$$ in $$\Omega$$.

This satisfies positivity in certain cases, and some asymptotics, but it's also gauge dependent. One can, however, replace $$\Omega$$ with the entire spacetime to obtain a gauge-independent quantity (the Liu-Yau mass). This, however, lacks the rigidity property. It's also too positive, as Wang put it in a recent talk. As for ease of computation, since we work with spacelike hypersurfaces in NR, this would be a feasible, if not natural, mass to compute - the isometric embedding would involve some numerical use of Newton's method.
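As a quick sanity check of the kind one would run before trusting a numerical implementation: on a coordinate sphere of areal radius $$r$$ in the Schwarzschild $$t = \mathrm{const}$$ slice we have $$H_0 = 2/r$$ (the round sphere in $$\mathbb R^3$$) and $$H = (2/r)\sqrt{1 - 2M/r}$$, so

\[ m_{BY}(r) = \dfrac{1}{8\pi} \left( \dfrac{2}{r} - \dfrac{2}{r}\sqrt{1 - \dfrac{2M}{r}} \right) 4\pi r^2 = r \left( 1 - \sqrt{1 - \dfrac{2M}{r}} \right) , \]

which tends to $$M$$ as $$r \to \infty$$ and equals $$2M$$ at the horizon.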

Wang-Yau Energy

Warning, notation soup up ahead. For the Wang-Yau Energy, we compute the minimum over all choices of $$(X, T_0)$$ of

\[E(\Sigma, X, T_0) = \int_{\hat \Sigma} \hat H d \hat \Sigma - \int_\Sigma \left(\sqrt{1 + |\nabla \tau |^2}  \cosh \theta |H| - \nabla \tau \cdot \nabla \theta - \alpha_H (\nabla \tau) \right) d \Sigma \]

where $$\Sigma$$ is a spacelike 2-surface in the spacetime, with spacelike mean curvature vector $$H$$, $$\sigma$$ is the induced metric on $$\Sigma$$, $$\theta$$ is a function of $$|H|$$ and $$\tau$$, $$\Delta$$ and $$\nabla$$ are the Laplacian and gradient with respect to $$\sigma$$, and $$\alpha_H$$ is the connection one-form of the normal bundle with respect to $$H$$. For an isometric embedding $$X: \Sigma \to \mathbb R^{3,1}$$, and a future timelike unit vector $$T_0$$, we can consider the projected embedding $$\hat X$$ into the orthogonal complement of $$T_0$$, from which we get the quantities with the hats. Finally, $$\tau = -X \cdot T_0$$ is the time function.

In order to find the isometric embedding that minimizes the quasi-local energy, we would have to solve a nonlinear elliptic equation - the Euler-Lagrange equation for the critical point of the energy as a function of $$\tau$$ (given in the paper, not reproduced here).

We do, however, have the numerical technology for this! And, unless I'm mistaken, we have access to all of these quantities in our simulations - though we have one more non-linear PDE to solve. Moreover, this definition of quasi-local mass satisfies the rigidity property, and certain positivity properties. 

***

Thus, we have 3 quantities for the quasi-local mass that we can use for new physics computations. It would be cool to look at all of these (first for Schwarzschild and Kerr) for BH binary simulations :)


*Note that because of the equivalence principle and lack of symmetry in generic spacetimes, we can't formulate mass as a volume integral over mass density, as we would in Newtonian gravity. Note also that we can't take approaches meant for an isolated system, either - asymptotic masses are defined in regions where gravity is weak on the boundary, but for a quasi-local mass, gravity could be strong on the boundary.

Tuesday, February 2, 2016

Gravitational Waves from a Supermassive Black Hole Binary in M87?

This paper showed up on the arXiv last week: Gravitational Waves from a Supermassive Black Hole Binary in M87 (Yonemaru et al.). The authors posit that the supermassive black hole (SMBH) hosting AGN activity near the core of the giant elliptical galaxy M87 (I'll call this the primary black hole) may be in a binary with another large black hole. If this is the case, since M87 is less than 20 megaparsecs away, it may be an enticing source of gravitational waves detectable by pulsar timing arrays (PTAs).

The argument that the primary black hole is in a binary is based on the AGN activity originating approximately a parsec from the centroid of the galaxy. It is feasible that this offset is caused by the perturbative influence of other massive black holes in the galactic core, but it could also be because of a kick from a previous black hole merger or a rocket-like thrust from an asymmetric relativistic jet.

If you suppose that the offset is because the primary black hole is in a binary, it has to be a pretty wide, long-period orbit to account for the scale of the offset---like a 2000 year orbital period. This would lead to a 1000 year gravitational wave period which is much, much longer than the 10-year spans of modern PTA data sets. With this in mind, the authors make the perfectly reasonable approximation that any gravitational waves from this hypothetical system can be well approximated as having a linear dependence on time. They then go about making estimates for the scale of the time derivative of the gravitational waves ($$\dot{h}$$) from this system for a variety of semi-plausible companion parameters (see the figure below).

Figure 3 from Yonemaru et al.

What would a linearly changing gravitational wave look like in a PTA experiment? For every pulsar you're observing, the gravitational wave would cause the apparent rotational frequency of the pulsar to drift linearly in time. The rate of the drift would be proportional to $$\dot{h}$$ and some projection factors that vary from pulsar to pulsar. If this drift were to go on for the entire 10-year span of the PTA experiment, the rotational frequency of some pulsars could change by as much as a part in $$10^{16}$$ or $$10^{15}$$ (see the right axis of the figure). The magnitude of this change is right in the ballpark of what PTA papers typically say is needed to make a detection. Sounds good.
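As a rough back-of-the-envelope version of that statement (my own estimate, not a formula from the paper): up to geometric projection factors of order unity, a strain growing linearly in time shifts a pulsar's apparent spin frequency by a fractional amount of roughly half the accumulated strain,

\[ \frac{\delta\nu}{\nu}(t) \sim \frac{1}{2}\, \dot{h}\, t \, , \]

so a drift rate of $$\dot{h} \sim 10^{-17}$$ to $$10^{-16}\ \mathrm{yr}^{-1}$$ sustained over a 10-year data set accumulates to roughly a part in $$10^{16}$$ or $$10^{15}$$, the numbers quoted above.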

But, there is a major flaw with all of this. Pulsar rotational frequencies slowly change all by themselves without the intervention of gravitational waves. In fact, shortly after the discovery of pulsars, Tommy Gold predicted that the rotational frequency of pulsars should decrease slowly in time. It was an important prediction that, when confirmed, helped support the idea that pulsars were rapidly rotating highly magnetized compact objects. Unfortunately, there is no way to assess the rate of change of a pulsar's rotational frequency a priori, so to do high-precision pulsar timing, we fit out linear (and sometimes higher order) trends in the rotational frequency. There is no way for the linear-in-time gravitational waves these authors discuss to ever be differentiated from perfectly vanilla pulsar behavior. It's a really basic fact of pulsar timing. As a pretty solid rule of thumb, never trust a claim that PTAs can detect gravitational waves with frequencies well below the inverse of the PTA's data span.




Monday, February 1, 2016

That's no moon

Again, this is a little off topic, but I think there are some nice elements here and a couple of points to be made about common misconceptions.

I just came across this tweet with this really amazing animated gif,


One can find the gif itself at this link. The pictures were taken by the NASA satellite DSCOVR (Deep Space Climate Observatory), which observes the Earth from the L1 point, one of the Lagrange equilibrium points of the circular restricted three-body problem for the Sun-Earth system. This three-body problem consists of two massive main bodies circling around their common centre of mass, with one of the two more massive than the other, plus a third, much smaller body that doesn't affect the motion of the two larger ones. One can define an effective potential for the motion of the smaller body in the frame that rotates with the two larger bodies, and from that effective potential identify equilibrium points. The following figure shows contours of that potential for different values of the energy of the third body.


The three crossing points along the horizontal axis in the
middle are from left to right the L3, L1, and L2 points.


In the case of the Sun-Earth system, the Sun would be at the centre of the left set of (larger) circles, while the Earth would be at the centre of the right (smaller) set of circles. You could also imagine one of the inner circles on the right representing the orbit of the Moon (of course the picture is nowhere near the right scale). What we see in the gif, therefore, is what the satellite sees from that crossing point in the middle, between the Sun and the Earth.
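For the curious, here is a minimal sketch of where L1 actually sits for the Sun-Earth system (my own toy calculation, not part of the original post). In the rotating frame, the equilibrium points along the Sun-Earth line are zeros of the derivative of the effective potential, which a simple root bracket finds easily:

```python
import numpy as np
from scipy.optimize import brentq

# Toy calculation: locate the Sun-Earth L1 point along the line joining the two
# bodies, in the frame rotating with the orbit.  Values are rough.
G = 6.674e-11            # m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
M_earth = 5.972e24       # kg
R = 1.496e11             # Sun-Earth distance, m

omega2 = G * (M_sun + M_earth) / R**3        # orbital angular velocity squared
x_sun = -R * M_earth / (M_sun + M_earth)     # positions relative to the barycenter
x_earth = R * M_sun / (M_sun + M_earth)

def dPhi_dx(x):
    """Derivative of the effective potential along the Sun-Earth axis."""
    return (G * M_sun * (x - x_sun) / abs(x - x_sun)**3
            + G * M_earth * (x - x_earth) / abs(x - x_earth)**3
            - omega2 * x)

# L1 lies between the Earth and the Sun, a few million km sunward of the Earth
x_L1 = brentq(dPhi_dx, x_earth - 3e9, x_earth - 5e8)
print((x_earth - x_L1) / 1e9, "million km sunward of Earth")   # ~1.5
```

The answer, about 1.5 million km sunward of the Earth (roughly 1% of the way to the Sun), is where DSCOVR sits.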

There are some really interesting things about this sequence of photographs. One is pointed out and answered in the comments of the original tweet.



It is crazy, but it really does look sort of fake. It is true that we are not used to images of a body without an atmosphere, and the sharpness seems unreal (the edge of the Moon is razor sharp, while the edge of the Earth looks much more fuzzy). I wonder if this is really the reason why it looks so unusual (I am no expert, so I can't tell).

The other thing is that the moon that we are seeing in the pictures is not the Moon that we are used to, as someone else points out (probably trolling)



This is the view of the Moon familiar to everyone on Earth. Of course, from the other side you would never see that face, since the Moon is tidally locked in its orbit around the Earth: it rotates about its axis with the same period with which it orbits the Earth, and therefore presents only one face to us. This means that the DSCOVR satellite, from its vantage point, will always show us in such pictures the side of the Moon we never see from here. And it will always be lit like this, since the Sun is right behind the satellite, providing this full illumination (there is no dark side of the Moon, except for this).

Finally, let's end with a meme



Which of course is a Star Wars reference...



Cheers

Friday, January 29, 2016

A Fat Dead Star

Hello all! My name is Dusty Madison. I do research on pulsars, specifically how they can be used to detect extremely low-frequency gravitational waves from things like supermassive black hole binaries. I hope to use this space to talk about neat new work on pulsars, gravitational waves, and any other astronomical topic I can trick enough people into thinking I'm marginally qualified to discuss.

In this first post of mine, I want to talk about a paper that's a few years old now that many of you are possibly already familiar with: A two-solar-mass neutron star measured using Shapiro delay, or Demorest et al. (2010). This paper came out just after I started working on pulsar timing. I have always found it to be a crazy beautiful paper with profound scientific implications that really deftly shows how damn cool pulsar science can be. It's a modern classic in my field (as evidenced by the 1000+ citations). I think discussing it will be a good warm up for me. Full disclosure: I know most of the authors of this paper personally through my work in NANOGrav and the IPTA, but I didn't know them when I first read this paper and realized how cool it was.

So the reason this paper is awesome is because the subject is a truly remarkable millisecond pulsar/white dwarf binary system. The pulsar has a rotational period of 3.1508076534271(6) milliseconds. The orbital period of the system is 8.6866194196(2) days. Pulsar papers are neat places where you'll see this many significant digits and you can take them seriously. The eccentricity of the orbit is 0.00000130(4)---a nearly perfect circle. What really enables the extra-special science you can do with this system, though, is that the inclination angle is 89.17(2) degrees. This system is almost perfectly edge-on to the line of sight, making it ideal for measuring the Shapiro delay.

From Figure 1 in Demorest et al. (2010)

The Shapiro delay is a general relativistic delay that the clock-like pulsations of radio waves from the pulsar experience as they traverse the gravitational potential of the white dwarf companion. Schematically, the pulses have to fall into the gravitational well and climb back out before getting to Earth and this takes a little bit of time. How big the delay is depends on the geometrical configuration of the orbit (it's biggest when the white dwarf lies between the pulsar and the Earth) and the mass of the companion (bigger mass, bigger delay).
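For a nearly circular orbit like this one, the standard shape of that extra light-travel time as a function of orbital phase $$\Phi$$ (measured from the ascending node) is, schematically,

\[ \Delta t_S(\Phi) \simeq -\frac{2 G m_c}{c^3} \ln\left( 1 - \sin i \, \sin\Phi \right) , \]

where $$m_c$$ is the companion mass and $$i$$ the inclination. The prefactor is tiny ($$G M_\odot / c^3 \approx 4.9\ \mu\mathrm{s}$$, so a couple of microseconds for a half-solar-mass companion); it is the logarithm, which spikes as $$\sin i \to 1$$ when the pulsar passes almost directly behind the companion, that makes the delay both measurable and exquisitely sensitive to the inclination.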

With this nearly edge-on binary, if you carefully monitor the Shapiro delay throughout the orbital period, you can very precisely measure the white dwarf mass. Once you know the companion mass to high precision, you can directly infer the neutron star mass. Demorest et al. did this. The figure above shows the Shapiro delay they measured (in microseconds) as a function of orbital phase. The white dwarf mass they infer from the shape of this Shapiro delay curve is 0.500(6) solar masses. This yields a neutron star mass of 1.97(4) solar masses.

This was the largest high-precision mass of a neutron star ever measured and it was a big deal. It was a big deal because general relativity dictates that neutron stars have an absolute maximum mass (see TOV limit) and what that maximum mass is depends on the equation of state of nuclear matter at super-nuclear densities. Now, it's nearly impossible to learn anything in a laboratory about the behavior of matter at super-nuclear densities, especially in the degenerate conditions of a neutron star, so measurements of the sort done by Demorest et al. are essentially the only way to gain insight into the nuclear physics in these regimes of parameter space.

Figure 3 from Demorest et al. (2010)
In the figure on the left, Demorest et al. depict the mass-radius relations anticipated for a wide variety of theoretical equations of state. The red horizontal bar represents the new lower-bound on the maximum neutron star mass that this neutron star (code name J1614-2230) yielded. Many of the proposed equations of state say you can have a neutron star even heavier than 2 solar masses. They are safe. However, many of the curves predict maximum neutron star masses well below 2 solar masses. This single mass measurement basically kills those theories. I love it.

Sadly, many of those equations of state that can't support a 2 solar mass neutron star predict really cool exotic phases of matter in neutron star interiors--things like pion condensates, equilibrium populations of hyperons, or deconfined quark matter. Now, these things can't be completely ruled out. Equations of state have tunable parameters that can be tweaked a bit so that they yield maximum neutron star masses compatible with the 2 solar mass neutron star. But the fact is, the 2 solar mass neutron star painted some of these equations of state into a corner and spurred a lot of activity in nuclear theory circles. I think that's awesome.






The Symplectic Integrator (pt 1): Traditional vs Variational Integrators

To start us off, this is more of an in-depth research post, something I had been meaning to write up for a while, describing some recent work that Leo and I have done with our friend Chad Galley at Caltech, and Alec Turner, an undergraduate student I worked with at McGill University.

This work stemmed out of previous work that Chad, Leo and I had done together extending Chad's gorgeous work on nonconservative actions. I won't discuss the nonconservative action formalism in this post, but I will provide a brief summary next week in part 2 in order to describe our use of it for the "Slimplectic" Integrator.

Traditional vs Symplectic Integration

Long-term numerical integration of physical systems is extremely important when studying the stability and evolution of planetary systems, or other astrophysical N-body systems. Traditional integration methods, such as explicit Euler or classical Runge-Kutta schemes, are unstable over thousands or millions of dynamical times, leading to errors in the constants of motion, e.g. energy or momenta, that tend to grow linearly with time.

In a classic paper of computational astrophysics, Jack Wisdom and Matt Holman described a "symplectic map" for N-body dynamics which allows long-term integrations of (conservative) N-body orbits while preserving physical properties like the constants of motion. Symplectic integrators can be built by partitioning a system's Hamiltonian and then applying the Baker-Campbell-Hausdorff (BCH) theorem in order to derive mappings that approximate the motion up to a given order.

Traditional integrators, which can be built to minimize the local "error" in each integration step, can perform comparatively better over short integrations; however, in the long term, the accumulating error from the drift of the "constants" of motion makes them far worse than symplectic methods.
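As a toy illustration of that trade-off (a made-up example, not from our paper): compare forward Euler with the symplectic leapfrog (kick-drift-kick) scheme on a simple harmonic oscillator. Both are cheap, low-order methods, but after a thousand oscillation periods the Euler energy has drifted catastrophically while the leapfrog energy error remains bounded at the level set by the step size.

```python
import numpy as np

# Toy comparison: forward Euler vs (symplectic) leapfrog on a harmonic
# oscillator, H = p^2/2 + q^2/2, integrated for many periods at fixed step.

def euler(q, p, h, nsteps):
    for _ in range(nsteps):
        q, p = q + h * p, p - h * q
    return q, p

def leapfrog(q, p, h, nsteps):
    for _ in range(nsteps):
        p = p - 0.5 * h * q          # half kick
        q = q + h * p                # drift
        p = p - 0.5 * h * q          # half kick
    return q, p

def energy(q, p):
    return 0.5 * (p**2 + q**2)

q0, p0, h = 1.0, 0.0, 0.05
nsteps = int(1000 * 2 * np.pi / h)   # ~1000 oscillation periods

E0 = energy(q0, p0)
for name, method in [("euler", euler), ("leapfrog", leapfrog)]:
    q, p = method(q0, p0, h, nsteps)
    print(name, "relative energy error:", abs(energy(q, p) - E0) / E0)
# The Euler energy error is enormous after 1000 periods;
# the leapfrog error stays bounded at ~O(h^2).
```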

Solar system evolution stability using first-order traditional and variational (symplectic) integrators. (Hairer, Lubich & Wanner 2006)

Plots of the energy and angular momentum drift from another set of integrations comparing a 4th order symplectic integrator (SI4) to 4th order Runge-Kutta (RK4) taken from Kinoshita (1991).

In order to understand why methods like symplectic integrators are able to preserve the physical constants of motion so well it is helpful to utilize the framework of Variational Integrators and Noether's Theorem.

Noether's Theorem

Of course, since we are discussing the preservation of "constants" of motion, any explanation we have should make full use of Noether's theorem. It is one of the most powerful and beautiful theorems in mathematical physics, and describes the relationship between constants of motion and the symmetries of the system.

More precisely, Noether's theorem relates (differentiable) symmetries of the action that describes the system with quantities that will be conserved by the Euler-Lagrange equations that are derived by extremizing that action.

This means that if we can perform continuous transformations to the system that leave the action invariant, then there will be a constant of motion related to that symmetry which is preserved as the system evolves. Thus, time-translation symmetry leads to energy conservation, translational symmetry leads to momentum conservation, and rotational symmetry leads to angular momentum conservation.

For traditional integration methods, like Runge-Kutta, the focus is on approximating the equation of motion. The differential equation of motion is usually discretized into a form that can be implemented directly on a computer through, for example, finite differencing. Thus, even if the original equation of motion is the Euler-Lagrange equation stemming from some particular action, the actual discrete equation of motion that is solved by the computer does not come from the action, and will not know about its symmetries, so Noether's theorem does not apply. In essence this is why traditional integration methods are not able to prevent long-term drift of the so-called "constants" of motion.


Standard integration methods (e.g. Runge-Kutta) focus on discretizing the equations of motion, leading to a disconnect with the original action. This prevents Noether's theorem from applying, and, without the appropriately constant "constants" of motion, long-term instability occurs.

Variational Integration


A simple way to understand how these integrators work is through the framework of variational integrators (see e.g. the fantastic Marsden & West 2001). Symplectic integrators can generally be written (locally) as variational integrators, and I find this approach, championed by the late, great Jerry Marsden at Caltech, to be much more intuitive than the normal symplectic approach. 

Variational Integrators work by discretizing the action integral itself, rather than the equations of motion. By varying an action that has already been discretized, we obtain discrete equations of motion that can be implemented exactly on a computer, and which are indeed the Euler-Lagrange equations of an action closely related to the physical one.

But how do you discretize the action integral? Exactly how you would discretize any integral, through numerical quadrature! By applying well-known numerical quadratures of a given accuracy order, and then varying the discrete action integral, we can obtain discrete equations of motion that are also accurate up to the same order. 
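To make that concrete with a simple, standard example (a textbook case, not anything specific to our paper): approximate each time step's contribution to the action for $$L = \frac{1}{2} m \dot{q}^2 - V(q)$$ with the midpoint rule,

\[ S \approx \sum_k h \left[ \frac{m}{2} \left( \frac{q_{k+1} - q_k}{h} \right)^2 - V\!\left( \frac{q_k + q_{k+1}}{2} \right) \right] , \]

and demand that $$S$$ be stationary with respect to each interior $$q_k$$. The resulting discrete Euler-Lagrange equations,

\[ m \, \frac{q_{k+1} - 2 q_k + q_{k-1}}{h^2} = -\frac{1}{2} \left[ V'\!\left( \frac{q_{k-1} + q_k}{2} \right) + V'\!\left( \frac{q_k + q_{k+1}}{2} \right) \right] , \]

define a second-order, symplectic update rule (closely related to the familiar Störmer-Verlet scheme) that was never written down as a discretization of the differential equations of motion; its structure comes entirely from the discretized action.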

For Variational Integrators, numerical quadrature rules applied to the action integral yield integrators of the same order after the discretized action is varied. Numerical quadrature is much easier to explain to people than the BCH theorem. 

As long as the discretized action possesses the same symmetries as the original action, the discrete Euler-Lagrange equations we obtain from them will be subject to Noether's theorem, conserving the appropriate quantities (in fact Noether's theorem also tells us how to find the form for the associated conserved momenta, which may be slightly different than in the physical system due to the discretization).

The variational integrator first discretizes the action before performing the variation to obtain the discrete equations of motion. As long as the discretized action possesses the same symmetry as the original action, the discrete Euler-Lagrange equations of motion know about an action, and will exactly preserve the appropriate constants of motion, via Noether's Theorem. 

There are a few wrinkles here, of course. By performing the discretization in time, we break the time-translation symmetry in an exponentially small way. This is the reason why symplectic/variational integrators do not precisely preserve energy in the same way they preserve the other momenta. However, it can be shown that the energy error is always bounded, resulting in energy error that oscillates around zero, with an envelope that depends on the resolution. 

Next week...

Of course, because variational integration or symplectic methods involve systems where an action or Hamiltonian can be specified, they are, by and large, restricted to conservative problems. Problems that involve friction, drag, tidal dissipation, or other nonconservative effects cannot take advantage of the traditional symplectic approach. 

Next week, in part 2 of this post, I will talk about how we used our new nonconservative action principle to develop a variational integration method that applies to general nonconservative systems. This "Slimplectic" Integrator* has all the long-term computational benefits of symplectic integrators, but it can be applied to nonconservative systems where regular symplectic integrators cannot be used. 


*I call it this because phase-space volume "slims" down for dissipative systems, but is constant for conservative ones.

Thursday, January 28, 2016

Welcome to the RemarXiv.

This is our new research blog for encouraging us to discuss and summarize interesting research results, new ideas, or interesting papers that we may come across. This includes interesting new papers posted to the arXiv (primarily astro-ph or gr-qc) as well as older papers that we might find particularly useful in our own research.



The purpose of this blog is to help stimulate our own research efforts by forcing us to summarize, for our peers, the work we are doing, and the old or new results in the field. It is not meant to be exhaustive, or terribly in-depth. It is only meant to stimulate our own research and personal interest.

The Rules: Each of us should post at least once a week. This must be related to research, though it does not have to be scientific in content (for example, in the future I may post about the tools and software I am currently using for my scientific workflow). It can be a post summarizing/discussing a paper that we have read, be it old or new, or it can just be an interesting new idea that we would like to post for stimulating discussion. We may use this space to discuss previous results, or summarize some of our own work for a more general audience. 


Thus far, the RemarXivists are:

Dave Tsang: CTC Fellow at the University of Maryland. Dave works on astrophysical dynamics in a variety of contexts, from black hole accretion, to exoplanetary dynamics. His current research focuses on N-body and disk-planet interactions for exoplanets, and on neutron star physics during gravitational wave induced inspiral of compact binaries.





Leo C. Stein: Postdoctoral Researcher at Caltech. Leo's research interests are studying and testing general relativity and other theories of gravity from an astrophysical standpoint.  He has investigated how “almost-general-relativity” theories can affect gravitational observables. An important observation which would be able to distinguish between GR and almost-GR is the inspiral rate in a compact binary system, detected either through radio pulsar timing or directly with gravitational waves.




George Pappas: Postdoctoral Research Associate at Ole Miss. George is an expert in General Relativity, focusing on the strong field regime. He works on compact objects and the spacetime around them in General Relativity and in alternative theories of gravity.






Dusty Madison: Jansky fellow at National Radio Astronomy Observatory (NRAO).  Dusty's research is on pulsars, specifically how they can be used to detect extremely low-frequency gravitational waves from things like supermassive black hole binaries. By pushing precision pulsar timing to its limits with world-class radio telescopes and instrumentation and continually improving data analysis techniques, the pulsar community is poised to detect extremely low-frequency gravitational waves within five to ten years and begin a new era in the study of black holes, gravity, and currently unknown astrophysical phenomena.


Maria (Masha) Okounkova: graduate student in physics at Caltech, Princeton '14 physics undergrad. Works in numerical relativity, currently working on simulating collapse to naked singularities in general relativity and binary black hole simulations in almost-general relativity (with Leo Stein). Advised by Yanbei Chen and Mark Scheel in the TAPIR group, member of SXS collaboration.