Monday, March 23, 2015

Concordance Cosmology? Not yet

The term "Concordance Cosmology" gets thrown a round a lot in the field of cosmology. So too does the term "Precision Cosmology."
However, I'm a little hesitant to use these terms when we don't know what 95% of the matter/energy in the universe is. Cosmologists use the term "Precision Cosmology" to describe the fact that they can combine a number of data sets to constrain variables such as the rest mass of neutrinos, the spacetime curvature of the universe, or the number of neutrino species. However, many of these constraints are only valid when assuming a certain, rather ad hoc model.

In many respects, this Standard Model of Cosmology, i.e. Lambda CDM, is a great starting point, and most people who use it as a starting point are fully aware of its weaknesses and eagerly await being able to find corrections to the model. The problem is that it's sometimes referred to as if it were one complete, consistent model (or referred to as a complete model once there's this small tweak over here or over there). However, LCDM is not consistent and is rather ad hoc. The goal of this post is to poke holes in the idea that there is a "Standard Model of Cosmology" in the same sense that there's a "Standard Model of Particle Physics." (Note that the SM of particle physics is much closer to being a standard model...with the big exception being the lack of understanding of neutrino physics, i.e. how heavy are neutrinos and is there CP violation in the neutrino sector?)

So, let's begin with the issues with the Standard Model of Cosmology, i.e. Lambda CDM:

(1) There is no mechanism for making more matter than anti-matter in the Standard Model of Cosmology. The LCDM model starts off with an initial difference between matter and anti-matter. The physics required to make more matter than anti-matter is not in the model, and this data set (i.e. the value of the baryon and lepton excess fractions) is excluded when doing "Precision Cosmology."

(2) Cold Dark Matter is thrown in ad hoc. The mass of the dark matter particle is not in the model...it's just assumed to be some >GeV rest mass particle made between the electroweak transition and neutrino decoupling from the charged particles. The mechanism for making the cold dark matter is not consistent with the Standard Model of Particle Physics. So, it's interesting that the "Standard Model of Cosmology" so easily throws out the much better known "Standard Model of Particle Physics." This means that there is no "Standard Model of Cosmo-Particle Physics."
There's also the fact that Cold Dark Matter over-predicts the number of satellite galaxies and over-predicts the amount of dark matter in the centers of galaxies. But once again, this data set is conveniently excluded when doing "Precision Cosmology" and, worse, the mass of the 'cold dark matter particle' is not even a free variable that Planck or other cosmology groups include in the "Standard Model of Cosmology." There are tens of free variables that Planck uses to fit their data, but unfortunately, the mass of the dark matter particle is not one of them.

(3) Dark Energy is a constant added to Einstein's General Theory of Relativity, and as such, it is completely ad hoc. The beauty of Einstein's General Theory of Relativity was its simplicity; adding a constant to the theory destroys part of that simplicity.
It also appears that, if Dark Energy is not just a constant, then it's not thermodynamically stable (for most values of w0/wa). (See the following article: http://arxiv.org/pdf/1501.03491v1.pdf)
So, this element of the "Standard Model of Cosmology" is an ad hoc constant added to GR. And while it's true that dark energy could just be the energy density of the vacuum of space-time, the particular value favored by Lambda CDM is completely ad hoc. The energy density of space-time appears to be on the order of (2 meV)^4. What's so special about 2 meV?
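To see where that ~2 meV figure comes from, here is a minimal back-of-the-envelope sketch. The input numbers (Omega_Lambda ≈ 0.69 and H0 ≈ 67.3 km/s/Mpc) are Planck-like values I've assumed for illustration; they aren't quoted in this post.

```python
# Back-of-the-envelope estimate of the dark energy scale,
# assuming illustrative values Omega_Lambda ~ 0.69, H0 ~ 67.3 km/s/Mpc.
import math

G      = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c      = 2.998e8         # speed of light, m/s
hbar_c = 197.327e-9      # hbar*c in eV*m
eV     = 1.602e-19       # 1 eV in joules

H0      = 67.3 * 1000 / 3.0857e22   # km/s/Mpc -> 1/s
Omega_L = 0.69

# Critical density, then the dark energy (energy) density
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3
rho_DE   = Omega_L * rho_crit * c**2       # J/m^3

# Convert to natural units: 1 eV^4 of energy density = 1 eV per (hbar*c / 1 eV)^3 of volume
eV4_in_J_per_m3 = eV / hbar_c**3
rho_DE_eV4 = rho_DE / eV4_in_J_per_m3

scale_meV = 1000 * rho_DE_eV4**0.25
print(f"dark energy scale ~ ({scale_meV:.1f} meV)^4")   # prints roughly (2.2 meV)^4
```

The fourth root of the observed vacuum energy density comes out around 2 meV, which is the number the question above refers to.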


There are also problems with the "Standard Model of Cosmology" that could go away with more data collection (or perhaps go away when a different model is applied to the data):

(4) There is a ~2 sigma discrepancy between the Hubble constant as estimated from the CMB (by the WMAP & Planck satellites) and as measured locally by the HST (Hubble Space Telescope). The Hubble Space Telescope measures a larger value of the Hubble constant than WMAP & Planck. Only sometimes does Planck use this data set when determining the values of the free variables in the LCDM model, and often it enters just as a "prior" rather than as actual data. It seems like there could be a better way for the Planck group to include measurements of the local Hubble constant in its data analysis.

H0 = 73.9 ± 2.7 km s^-1 Mpc^-1    (LMC + MW local measurements)
H0 = 67.27 ± 0.66 km s^-1 Mpc^-1    (Planck TT,TE,EE+lowP)
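As a quick check on the size of this tension, one can compare the two quoted numbers directly. This is a minimal sketch that treats the two measurements as independent Gaussians, which is an oversimplification:

```python
# Rough significance of the local vs. CMB Hubble constant discrepancy,
# treating the two quoted values as independent Gaussian measurements.
h0_local,  err_local  = 73.9,  2.7    # km/s/Mpc (local distance ladder)
h0_planck, err_planck = 67.27, 0.66   # km/s/Mpc (Planck TT,TE,EE+lowP)

diff  = h0_local - h0_planck
sigma = (err_local**2 + err_planck**2) ** 0.5
print(f"difference = {diff:.2f} km/s/Mpc  (~{diff/sigma:.1f} sigma)")
```

With these numbers the difference works out to roughly 2-2.5 sigma, consistent with the "~2 sigma" quoted above.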

(Update April 14 2015: It appears that some or all of this discrepancy may be due to an oversimplification that all Type Ia supernovae are exactly the same. According to recent research from the University of Arizona, there appear to be at least two sub-classes within Type Ia supernovae, and one sub-class is more common at large distances (i.e. further in the past) than it is closer to us. This creates a distortion in which the Hubble constant as measured by the Hubble Space Telescope appears larger than it really is. The researchers have yet to quantify how much the sub-class distinction will affect the value of H0. What is clear is that this understanding of Type Ia supernovae, along with more precise measurements from certain supernovae, will allow us to decrease the error bars on the measurement of H0.
However, even if this discrepancy goes away, we are still left with the nagging question: what is dark energy?)

(5) According to the 2015 Planck data set, the two values related to reionization (the optical depth to reionization, tau, and the z-value at which reionization instantaneously occurs) have changed significantly compared with their values from the 2013 Planck data set. The optical depth, tau, was lowered from 0.088 +/- 0.01 to 0.066 +/- 0.013. This is a pretty significant change. In fact, the 2013 Planck data was used by Dayal et al. 2015 to "rule out" warm dark matter candidates. However, using the 2015 Planck data, warm dark matter candidates now appear to be more favorable than cold dark matter candidates (with respect to these two variables).
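To illustrate how these two parameters are tied together, here is a minimal sketch that integrates the Thomson optical depth for a sudden-reionization model. The cosmological parameter values, the neglect of helium's second reionization, and the particular z_re values plugged in at the end are my own illustrative assumptions, not numbers taken from the Planck papers:

```python
# Thomson optical depth for instantaneous reionization at z_re, assuming a
# flat LCDM background with fully ionized hydrogen (plus singly ionized
# helium) below z_re.  All parameter values are illustrative assumptions.
import math
from scipy.integrate import quad

sigma_T = 6.652e-29                 # Thomson cross-section, m^2
c       = 2.998e8                   # m/s
m_p     = 1.673e-27                 # proton mass, kg
G       = 6.674e-11                 # m^3 kg^-1 s^-2

H0      = 67.3 * 1000 / 3.0857e22   # km/s/Mpc -> 1/s
Omega_m, Omega_L = 0.315, 0.685
Omega_b = 0.049
X_H     = 0.76                      # hydrogen mass fraction

rho_crit = 3 * H0**2 / (8 * math.pi * G)
n_H0 = Omega_b * rho_crit * X_H / m_p          # hydrogen number density today
n_e0 = n_H0 * (1 + (1 - X_H) / (4 * X_H))      # + electrons from singly ionized He

def H(z):
    return H0 * math.sqrt(Omega_m * (1 + z)**3 + Omega_L)

def tau(z_re):
    integrand = lambda z: sigma_T * c * n_e0 * (1 + z)**2 / H(z)
    return quad(integrand, 0, z_re)[0]

for z_re in (8.8, 11.1):
    print(f"z_re = {z_re:5.1f}  ->  tau ~ {tau(z_re):.3f}")
# Roughly reproduces the 2015 (tau ~ 0.066) and 2013 (tau ~ 0.089) central values,
# showing how a lower tau pushes the instantaneous reionization redshift down.
```

The point of the sketch is simply that a drop in tau from ~0.09 to ~0.066 corresponds to reionization finishing a couple of redshift units later, which is why the 2015 values matter for the warm-vs-cold dark matter comparison below.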

For example, in the figure below, I present a graph from Dayal et al. 2015 with some orange boxes I made overlaid on top. The horizontal red line shows the optical depth, and the horizontal orange box is the 1 sigma uncertainty zone. The vertical orange box is the Planck 2015 1 sigma range for the value of z_re, which is the "near-instantaneous" redshift at which reionization occurs in the Planck LCDM model. This means that the optical depth should start reaching a plateau around this value of z_re. However, the CDM model has reionization occurring at much earlier times (z ~ 15).



As one can see in the plot above, the horizontal red line now seems to match best with the 2.25 keV or 3 keV lines (assuming that f_esc can't be less than 0.5), whereas the 2013 Planck data seemed to match best with the CDM and 5 keV lines. For the same reasons, the instantaneous value of z_reionization now seems to fit best with the 2.25 keV lines, and seems quite far off for the CDM and 5 keV lines. (Recent experimental data on Lyman-alpha emitters seems to suggest that reionization occurred rapidly between 5 < z < 8, which once again seems to disfavor CDM models in which reionization starts earlier than z = 10.)
[Update May 13th 2015: See Figure 19 of Finkelstein et al. 2014, which suggests that the optical depth, tau, is 0.063 +/- 0.013. The 2015 Planck results are now consistent with this experimental data; however, Cold Dark Matter is not consistent with this value, provided that f_esc can't be less than ~0.4.]

The main reason that I'm showing this figure is that it nails down my argument that we are not yet doing "Precision Cosmology" when important variables in the models change significantly between 2013 and 2015 while analyzing data from the same satellite (we just have more of that data available now).
A secondary reason why I'm presenting this figure is to highlight that there is now even more evidence that dark matter has a rest mass in the range of 2-10 keV. For more evidence on why dark matter likely has a rest mass in this range, check out a previous post from last July. It should also be noted that a 7 keV resonantly-produced, non-thermal dark matter particle would act similarly to a ~3 keV thermally-produced dark matter particle. (So, it is important to recognize that the modeling done above is only valid for thermally produced dark matter, but it might be somewhat useful in constraining non-thermally produced dark matter as well.) A crucial test for 2-10 keV dark matter will be using the James Webb Space Telescope to determine when the first stars were born after the dark era. The lack of stars between 20 < z < 30 would be a good indicator of keV dark matter; conversely, finding lots of stars between 20 < z < 30 would be a good indicator of GeV dark matter. (See Figures 4 & 5 of the following paper.) (Note: Updated April 14 2015)

But clearly, we shouldn't be emphasizing any one data set over another. Replacing cold dark matter with warm dark matter has an effect on tens of data sets. Warm dark matter by itself is just a 'word.' What we should be testing are completely new models, not just LCDM with extra parameters added on. We need to include the creation, and perhaps decay, of the dark matter in the cosmological model, and then test the model against all known cosmological data up to that time (and not ignore inconvenient data from the HST or from measurements of galaxy rotation curves). This is not being done by large research organizations like ESA/Planck or Harvard-Smithsonian. Hopefully, this will change in the future.

My main argument in this post is that we should not be using the terms "Precision Cosmology" and "Standard Model of Cosmology" when there are so many problems with LCDM and when the values of parameters in the model change so significantly with the addition of only a small amount of new data.

Let's just call it "The Incorrect, But Easy to Model LCDM" and focus our time on actually finding new models, not just minor fixes to LCDM.
