Wednesday, November 19, 2014

Dark Matter Decaying into Dark Energy

Update Aug 14, 2015: (I found a paper written in April 2015 that models dark matter decaying into relativistic matter, such as light neutrinos. There are some tight constraints on this model.)


I was quite excited to see that IOP PhysicsWorld ran an article today on Dark Matter decaying into Dark Energy. The article discusses a recently accepted paper by Salvatelli et al. in Physical Review Letters.

The gist of this recent PRL paper by Salvatelli et al. is the following: the tension between Planck's CMB data analyzed with a LambdaCDM model and many other data sources, such as measurements of Ho (the Hubble constant at z = 0) by...you guessed it...the Hubble Space Telescope, can be resolved in a model in which dark matter decays into dark energy (but only if the interaction turns on after a redshift of z = 0.9). There has been a major problem reconciling the low value of Ho estimated from Planck's CMB data (Ho = 67.3 +/- 1.2 km/s/Mpc) with the much higher value measured by the Hubble Space Telescope (Ho = 73.8 +/- 2.4 km/s/Mpc).

However, when using a model in which dark matter can decay into dark energy, and when including redshift-space distortion (RSD) data on the fluctuations of the matter density (as a function of redshift, z), the Planck estimate of the Hubble constant at z = 0 becomes Ho = 68.0 +/- 2.3 km/s/Mpc. This new model eases the tension between the Planck data and the Hubble Space Telescope measurement of Ho.
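To put the "tension" in numbers, here's a quick back-of-the-envelope check (my own arithmetic, not from the paper, assuming independent Gaussian errors):

```python
import math

def tension_sigma(h1, e1, h2, e2):
    """Discrepancy between two measurements, in units of their combined error."""
    return abs(h1 - h2) / math.hypot(e1, e2)

# Planck (LambdaCDM) vs. Hubble Space Telescope
print(round(tension_sigma(67.3, 1.2, 73.8, 2.4), 2))  # 2.42 sigma

# Interacting dark-sector model vs. Hubble Space Telescope
print(round(tension_sigma(68.0, 2.3, 73.8, 2.4), 2))  # 1.74 sigma
```

So the ~2.4 sigma discrepancy drops below 2 sigma in the interacting model, mostly because the central value shifts up and the error bar widens.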


So, let's go into the details of the model:
(1) Dark matter can decay into dark energy (the reverse direction is also possible in the model)
(2) The interaction between dark matter and dark energy is labeled 'q' in their model. When 'q' is negative, dark matter decays into dark energy. When 'q' is positive, dark energy decays into dark matter. And when 'q' is zero, there is no interaction.
(3) The group has binned 'q', treating it as a constant value over each of four redshift intervals.
Bin#1 is 2.5 <  z  < primordial epoch  (in other words, from the Big Bang until roughly 2.5 billion years after the Big Bang)
Bin#2 is 0.9  <  z  < 2.5  (in other words, from roughly 2.5 to roughly 6 billion years after the Big Bang)
Bin#3 is 0.3  <  z  < 0.9
Bin#4 is 0.0  <  z  < 0.3   (i.e. most recent history)
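The binning above amounts to a piecewise-constant function of redshift. Here's a minimal sketch of my own (the function name is mine, and using the best-fit central values quoted below as defaults is purely illustrative):

```python
def q_of_z(z, q1=-0.1, q2=-0.3, q3=-0.5, q4=-0.9):
    """Piecewise-constant dark-sector interaction strength q(z).
    Negative q means dark matter decays into dark energy."""
    if z > 2.5:
        return q1   # Bin#1: from the primordial epoch down to z = 2.5
    elif z > 0.9:
        return q2   # Bin#2
    elif z > 0.3:
        return q3   # Bin#3
    else:
        return q4   # Bin#4: most recent history

# One sample redshift inside each bin:
print([q_of_z(z) for z in (5.0, 1.5, 0.5, 0.1)])  # [-0.1, -0.3, -0.5, -0.9]
```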

The best-fit values of these parameters are the following (see Table I and Fig. 1 of their paper for the actual values):
q1 = -0.1 +/- 0.4   (in other words, q1 is well within 1 sigma of zero)
q2 = -0.3 +0.25/-0.1   (in other words, q2 is only roughly 1 sigma away from zero)
q3 = -0.5 +0.3/-0.16   (in other words, q3 is roughly 2 sigma away from zero)
q4 = -0.9 +0.5/-0.3   (in other words, q4 is roughly 2 sigma away from zero)
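Since all the central values are negative, the distance from zero is set by the upper (toward-zero) side of each asymmetric error bar. A quick check of those significance statements (again my own arithmetic, not from the paper):

```python
def sigma_from_zero(q, err_toward_zero):
    """Rough significance: |central value| over the error bar on the side toward zero."""
    return abs(q) / err_toward_zero

for name, q, err_up in [("q1", -0.1, 0.4), ("q2", -0.3, 0.25),
                        ("q3", -0.5, 0.3), ("q4", -0.9, 0.5)]:
    print(f"{name}: {sigma_from_zero(q, err_up):.2f} sigma from zero")
# q1: 0.25, q2: 1.20, q3: 1.67, q4: 1.80
```

So "roughly 2 sigma" for q3 and q4 is a bit generous (closer to 1.7-1.8 sigma), but the qualitative picture stands.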

There is a trend: q(z) becomes more negative as z decreases toward its present value of z = 0.

Thursday, November 6, 2014

Gravity alone does not explain the Arrow of Time

Not sure if you all have seen the recent article by Julian Barbour about an arrow of time arising from a purely gravitational system. If not, check out the following articles in Physics or Wired.
First off, the titles of the articles contradict their substance.
Julian Barbour has shown that a system of 1000 objects interacting only via gravity can start dispersed, then clump together, and then disperse again. That's it. This is not exciting work. It is similar to a problem I was assigned in a freshman-level computer programming class, just with ~100 objects rather than 1000 particles.

Second, Julian Barbour has shown that there is no arrow of time for such systems, i.e. there is no way to tell the future from the past. (This is very different from, say, 'life', which only runs in one direction: you are born, you remember the past, and you eventually die.)

As such, Julian Barbour has re-proven something that has been known for quite a while: in a system of particles that interact only via gravity, there is no arrow of time.
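You can see this time-reversal invariance in a toy simulation. Here's a minimal sketch of my own (this is not Barbour's code): integrate a small gravitating system forward with a time-reversible scheme, flip the velocities, and integrate forward again. The system retraces its own history, because nothing in the dynamics distinguishes future from past.

```python
import numpy as np

def accelerations(pos, soft=0.1):
    """Pairwise Newtonian gravity (G = 1, unit masses) with a softening length."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                        # vectors from body i to every body
        r2 = (d * d).sum(axis=1) + soft**2
        r2[i] = np.inf                          # no self-force
        acc[i] = (d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

def leapfrog(pos, vel, dt, steps):
    """Velocity-Verlet (leapfrog): a time-reversible integration scheme."""
    acc = accelerations(pos)
    for _ in range(steps):
        vel = vel + 0.5 * dt * acc
        pos = pos + dt * vel
        acc = accelerations(pos)
        vel = vel + 0.5 * dt * acc
    return pos, vel

# Two bodies on a bound orbit (arbitrary toy initial conditions).
p0 = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
v0 = np.array([[0.0, -0.5, 0.0], [0.0, 0.5, 0.0]])

# Run forward, flip the velocities, run forward again:
# the system returns to where it started -- no arrow of time.
p1, v1 = leapfrog(p0, v0, dt=0.01, steps=1000)
p2, v2 = leapfrog(p1, -v1, dt=0.01, steps=1000)
print(np.allclose(p2, p0, atol=1e-6))  # True
```

A "movie" of this system played backwards obeys exactly the same laws as the forward version, which is the long-known point.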

How can scientists and journalists mess this one up so badly?  Thoughts?