The purpose of this short blog post is to highlight that there are physicists working on predictive models to explain dark matter.

One model that I'd like to focus on is called the Type I Seesaw Model.

In this model, the masses of neutrinos come both from couplings to the Higgs field (Dirac terms) and from a self-coupling (Majorana terms). The Majorana term causes the masses of the left-handed and right-handed particles to separate. In fact, it pushes the masses of the left-handed neutrinos (~meV) far below the masses of typical fermions (~GeV), and it pushes the masses of the right-handed neutrinos (~10^10 GeV) far above the mass of any other fermion.
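For concreteness, the textbook one-generation Type I seesaw mass matrix behind this statement (standard notation, not taken from the post) is:

```latex
M_\nu \;=\; \begin{pmatrix} 0 & m_D \\ m_D & M_R \end{pmatrix},
\qquad
m_{\text{light}} \simeq \frac{m_D^2}{M_R}, \qquad
m_{\text{heavy}} \simeq M_R
\quad (m_D \ll M_R),
```

where m_D = y_ν v/√2 is the Dirac mass from the Higgs coupling and M_R is the Majorana mass. As M_R grows, the light eigenvalue gets pushed down, which is why the two masses "seesaw" apart.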

In this case, with a mass on the order of 10^10 GeV, it will be extremely difficult to detect dark matter particles directly. But, according to Stephen F. King (and others), we may be able to infer the mass of the dark matter particles (i.e., the right-handed neutrinos) from the oscillations of the left-handed neutrinos. (See the January 2017 paper by S. F. King titled "Unified Models of Neutrinos, Flavour and CP Violation.")

So far, there appears to be a great fit between the predictions of the LS model (below right) and experimental results. (Note that LS stands for Littlest Seesaw.) This same model is also able to predict the baryon asymmetry of the universe. (The model is incomplete because it assumes that one of the neutrinos has a mass of zero. Hopefully, S. F. King will work on an updated model that drops this assumption once there is more data available to constrain the full model.)

So, I encourage more research into this field, because it may be that the Standard Model of physics just needs to be tweaked by adding right-handed neutrinos. While right-handed neutrinos with large masses can fit the data quite well, I think a lot of questions remain, such as: why is the Majorana mass term so large for right-handed neutrinos and effectively zero for left-handed neutrinos? Is nature really so lop-sided?

Before finishing, one question that I'd like to raise is the following:

Can the coupling of the Higgs field to neutrinos be measured at the LHC? (That is, can we measure the Dirac terms in the neutrino mass matrix at the LHC?) If so, it will be interesting to see whether the Higgs field couples to neutrinos with a strength similar to its coupling to other fermions. If it does, and if we constrain the mass of the left-handed neutrinos using cosmology and neutrino detectors, then we should be able to immediately derive the mass of the right-handed neutrinos.
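As a back-of-envelope illustration of that last step (with input numbers I've assumed for illustration, not taken from the post): if the Dirac mass were ~1 GeV and the light-neutrino mass ~0.05 eV, the seesaw relation M_R ≈ m_D²/m_ν immediately gives the ~10^10 GeV scale quoted above:

```python
# Back-of-envelope Type I seesaw estimate: M_R ~ m_D^2 / m_nu.
# The input numbers are illustrative assumptions, not measured values.
m_D_eV = 1.0e9    # assumed Dirac mass: ~1 GeV, expressed in eV
m_nu_eV = 0.05    # assumed light-neutrino mass: ~0.05 eV (atmospheric scale)

M_R_eV = m_D_eV**2 / m_nu_eV       # heavy right-handed neutrino mass, in eV
M_R_GeV = M_R_eV / 1.0e9
print(f"M_R ~ {M_R_GeV:.1e} GeV")  # ~2e10 GeV, i.e. the 10^10 GeV ballpark
```

A larger assumed Dirac mass (say, 100 GeV, an electroweak-scale Yukawa) would instead push M_R up to ~10^14 GeV, so the quoted 10^10 GeV implicitly assumes a fairly small neutrino Yukawa coupling.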

(Please write a comment below if you know the answer to this question.)

(Postscript: It is also interesting to point out that the Higgs field appears to be stable up to energies on the order of 10^10 GeV. The question is: how does the presence of right-handed neutrinos affect the Higgs field? What is the coupling strength of the Higgs field to neutrinos, and does this affect the stability of the Higgs field at large energy scales?)

# Grow, Baby, Grow: Eddie's Blog on Energy & Physics

Expanding society one kW-hr at a time. Blogging on the economics of various electricity-generating technologies as well as their underlying physics.

## Tuesday, August 15, 2017

## Sunday, February 28, 2016

### Higgs inflation

The case for inflation is extremely strong. If you don't already understand how strong the case is, then I suggest watching the following lecture at MIT by Alan Guth. Alan Guth is the originator of the concept of inflation, and now he serves as a fairly unbiased judge of the theory, since his original model for inflation (quantum tunneling from a false vacuum through a barrier to the true vacuum) has been effectively disproven.

The main lines of evidence for inflation are the following:

(1) Horizon problem: the universe that we see is extremely uniform, even though there was no way for distant parts of the universe we see today to have been in causal contact... unless the universe we see today ultimately came from a small region that had been in contact, and was then inflated to a much larger size.

(2) Flatness problem: overall, there is virtually no curvature (k) in the universe. In a matter-dominated universe, the overall curvature grows with time, so the curvature when the temperature was >TeV must have been extremely small. The smallness of the curvature can be explained if our universe went through a period of inflation before the temperature dropped to ~TeV.
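The growth of curvature outside of inflation, and its suppression during inflation, can be made precise with the Friedmann equation (standard notation, not from the post):

```latex
\Omega(t) - 1 \;=\; \frac{k}{a^2 H^2},
\qquad
|\Omega - 1| \propto
\begin{cases}
a^{2} & \text{(radiation-dominated)}\\[2pt]
a & \text{(matter-dominated)}
\end{cases}
```

During inflation, H is nearly constant while a grows exponentially, so |Ω − 1| ∝ a⁻² is driven toward zero, which is exactly the flatness we observe today.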

(3) Small, adiabatic, Gaussian fluctuations in curvature: while the overall curvature of the universe is small or zero, there are local fluctuations in the curvature that end up canceling overall, i.e., while the curvature might be negative in some locations, it is positive in others, so that the overall curvature is near zero. (Imagine living at a fixed point in the Sahara Desert, and imagine that the farthest you can see is 100 km in any direction. You would see local variations in the curvature, due to dune piles scattered throughout the desert. However, the overall curvature would be virtually zero. In other words, you would be forced to conclude that, if you did live on the surface of a sphere, then the radius of the sphere is much, much greater than 100 km.) In this analogy, the dune piles are the remnants of the initial fluctuations before inflation. The primordial fluctuations are random, uncorrelated, and adiabatic, and overall they add up to near-zero curvature.

These are the three main sources of evidence for inflation.

What's interesting is that the Standard Model of physics provides a natural candidate for the inflaton field: the Higgs field. Like the inflaton field, the Higgs field is a scalar field, in the sense that at every point in 4-D space-time it has a value but no direction. (Technically, the Higgs field is a complex doublet, with four real components.) Examples of day-to-day scalar fields are pressure, temperature, and electric potential. These should be contrasted with a vector field, which at every point in space-time has both a magnitude and a direction. Examples of day-to-day vector fields are the electric field, the magnetic field, and wind velocity.

The Higgs field is a fundamental scalar field. Scientists are still working out the shape of the Higgs energy density as a function of the value of the Higgs field. We know the shape of the energy density vs. field value when the field is close to its vacuum expectation value (v = 246 GeV), which is a local (and perhaps global) minimum in the energy density. (See the inset in the figure below from the paper by Bezrukov and Shaposhnikov.) What is still uncertain is the shape of the curve far from v = 246 GeV. It is well known that the coupling of the Higgs field to particles (such as the top quark) changes the shape of the curve. Depending on the coupling to the top quark and the self-coupling of the Higgs field, the potential can look like the one below, which goes to an asymptote. This is also nearly exactly the shape of the potential needed to explain slow-roll inflation; it is called slow roll because of the small gradient in the potential at large values of the Higgs field.
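For reference, in the Bezrukov–Shaposhnikov setup (the Higgs non-minimally coupled to gravity through a ξh²R term), the Einstein-frame potential at large field values takes the plateau form below; this and the slow-roll condition are standard results from their framework, quoted here rather than derived:

```latex
U(\chi) \;\simeq\; \frac{\lambda M_P^4}{4\xi^2}
\left(1 - e^{-\sqrt{2/3}\,\chi/M_P}\right)^{2},
\qquad
\epsilon \;=\; \frac{M_P^2}{2}\left(\frac{U'}{U}\right)^{2} \ll 1 .
```

The exponentially flat plateau at large χ is what makes the roll "slow": U′/U is tiny there, so the slow-roll parameter ε is automatically small.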

Figure#1: Higgs energy density versus field value, from the paper by Bezrukov and Shaposhnikov titled "The Standard Model Higgs boson as the inflaton."

Interestingly, the shape of the Higgs potential in the case described above is nearly the same as the primordial potential that is measured experimentally. (By experimentally, I mean when combining data from WMAP, Planck, ACT, SPT, weak lensing, and the Lyman-alpha forest.) The primordial potential was estimated by Hunt and Sarkar using the data listed above and can be seen below:

Figure#2: Primordial potential of the inflaton field as estimated by Hunt and Sarkar

Below, I show the data from Hunt and Sarkar, but with the Higgs potential imposed on top.

Figure#3: Primordial potential of the inflaton field as estimated by Hunt and Sarkar, with the Higgs potential estimated by Bezrukov and Shaposhnikov imposed on top as a black line. (This is the case in which the Higgs self-coupling and the coupling between the Higgs field and the top quark produce an asymptotic value for the potential.)

What one can see in Figure#3 is that the experimental data for the primordial inflaton potential can be well described so far by a Higgs-like scalar field. Also interesting is that the potential takes a nose-dive at smaller and smaller scales. This nose-dive in power helps to explain why there appears to be a lack of fluctuations at the scale of galaxies (i.e., 1-100 kpc). (Perhaps we don't need warm dark matter as a complete explanation for the lack of fluctuations at small scales?)

Because the shape and size of the Higgs potential are so nearly the same as the shape and size of the inflaton potential required to create the primordial fluctuations, it is my opinion that the Higgs field is the inflaton field. Proving this conjecture will require further research by theorists into the exact shape of the Higgs potential, further experimental data on the masses of the top quark and Higgs boson, and experimental data on the primordial fluctuations at both large and small scales (the middle scales seem pretty much nailed down). Theorists will need to calculate the running of the Higgs couplings at large energy scales by including more than five loops of self-coupling and coupling to bosons/fermions.
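To give a feel for the "running" mentioned above, here is a rough one-loop sketch of how the Higgs quartic coupling λ runs with energy scale μ. The boundary values at μ ≈ m_t are approximate numbers I've assumed (not fitted), and a serious analysis needs two or three loops, so treat the crossing scale as qualitative only:

```python
# One-loop running of the Higgs quartic coupling lambda in the Standard Model.
# Illustrative sketch: one-loop beta functions with rough boundary values at
# mu = m_t. State-of-the-art analyses are 2-3 loop, so the crossing scale
# below is qualitative, not a precision result.
import math

def run_couplings(mu0=173.0, mu_max=1.0e19, dt=1.0e-3):
    """Euler-integrate the one-loop SM RGEs in t = ln(mu/mu0).
    Returns a list of (mu, lambda) samples."""
    lam, yt = 0.126, 0.94           # Higgs quartic ~ m_h^2/(2 v^2); top Yukawa
    g1, g2, g3 = 0.358, 0.648, 1.166  # U(1)_Y, SU(2)_L, SU(3)_c couplings
    loop = 1.0 / (16.0 * math.pi**2)
    t, t_max = 0.0, math.log(mu_max / mu0)
    samples = []
    while t < t_max:
        samples.append((mu0 * math.exp(t), lam))
        # One-loop beta functions; the -6*yt**4 term (top loop) drives lambda down.
        b_lam = loop * (24*lam**2 - 6*yt**4
                        + (3.0/8.0)*(2*g2**4 + (g2**2 + g1**2)**2)
                        + lam*(12*yt**2 - 9*g2**2 - 3*g1**2))
        b_yt = loop * yt * (4.5*yt**2 - 8*g3**2 - 2.25*g2**2 - (17.0/12.0)*g1**2)
        b_g1 = loop * (41.0/6.0) * g1**3
        b_g2 = loop * (-19.0/6.0) * g2**3
        b_g3 = loop * (-7.0) * g3**3
        lam, yt = lam + b_lam*dt, yt + b_yt*dt
        g1, g2, g3 = g1 + b_g1*dt, g2 + b_g2*dt, g3 + b_g3*dt
        t += dt
    return samples

samples = run_couplings()
crossing = next((mu for mu, lam in samples if lam < 0.0), None)
if crossing is not None:
    print(f"lambda first runs negative near mu ~ {crossing:.1e} GeV")
else:
    print("lambda stays positive up to 1e19 GeV")
```

At one loop with these inputs, λ crosses zero well below the Planck scale, which is the "instability" that the stability discussion (and the supersymmetry argument below) is about; higher-loop corrections push the crossing scale up toward ~10^10 GeV.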

All in all, it appears that the Higgs field is the inflaton field, and there is no need for supersymmetric particles to save the Higgs from going unstable. (Hence, another reason why I think that supersymmetry is dead!)


## Friday, February 12, 2016

### Gravity Waves from Inflation???

Congrats to the LIGO team for the (likely) detection of gravitational waves. It looks like they will be presenting us with many more candidates in the near future. I look forward to seeing the results.

But that leaves us with the question: can we detect the tensor gravitational waves from inflation?

The problem right now is that the data on B-polarized modes from the CMB have large error bars and/or contamination from non-primordial sources.

For example, below is a plot from the experiments that have released data on the auto-correlation and/or cross-correlation between frequencies.

Yellow = Planck 2015 low-l

Light Blue = WMAP 9-year

Light Green = Planck 2015 mid-range l with foreground removed (see prior blog post)

Light Grey = SPTpol 100-day, 95 GHz x 150 GHz

Orange = BICEP2 x KECK, 95 GHz x 150 GHz

Black = POLARBEAR-1

Brown = output from CAMB using best-fit Planck 2015 TT+TE+EE+lowP+lensing+ext, plus r = 1

Blue = output from CAMB using best-fit Planck 2015 TT+TE+EE+lowP+lensing+ext, plus r = 0.1

Grey = output from CAMB using best-fit Planck 2015 TT+TE+EE+lowP+lensing+ext, plus r = 0.0

where r = the initial tensor-to-scalar ratio (an input to CAMB)
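For intuition about what the r values in the legend do: at these multipoles, the predicted BB power is essentially linear in r, since the primordial tensor contribution scales with r while the lensing contribution does not. Here is a toy sketch with invented placeholder numbers, not actual CAMB output:

```python
# Toy BB spectrum: total = r * tensor_template + lensing_template.
# The template values below are invented placeholders (three multipole bins),
# NOT CAMB output; the point is only the linear scaling with r.
def bb_total(r, tensor_template, lensing_template):
    """Total BB power per bin for a given tensor-to-scalar ratio r."""
    return [r * t + l for t, l in zip(tensor_template, lensing_template)]

tensor = [5.0, 3.0, 1.0]   # placeholder primordial BB at r = 1
lensing = [0.1, 0.5, 2.0]  # placeholder lensing BB (independent of r)

for r in (1.0, 0.1, 0.0):
    print(r, bb_total(r, tensor, lensing))
```

This is why r = 1 is easy to rule out (it predicts an order of magnitude more low-l BB power than r = 0.1) while distinguishing r = 0.1 from r = 0 is hard: at r = 0, all that is left is the lensing floor.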

I think that we can safely say that r = 1 is ruled out. But, from this graph alone, it's hard to rule out r = 0.1 or r = 0. So, below is a zoom-in on the region in which BICEP2/KECK are focused. This time, I plot CAMB output for r = 0.1, r = 0.01, and r = 0. The data points are almost always above the lines, which means that there is likely contamination from foreground, or from sources of B-modes other than primordial modes plus lensing of E-polarized modes. Also note in the graph below that, if there is no lower error bar, then the error bar extends into negative values. (This didn't happen in the linear-scale plot above.)

Note that the CAMB output has no foreground added to it. The Planck mid-range data supposedly has dust removed, but it seems to suffer from erroneous data near l = 225. The BICEP2/KECK data has some dust contamination, because there is some dust even at 95 GHz.

All of this just means that we'll have to wait a little longer before we can say definitively that we have detected primordial gravitational waves. We should be getting results some time this year from BICEP3+KECK at 95 GHz. If these data, plus the BICEP2/KECK data at 150 GHz, are cross-correlated with a B-lensing map and a foreground map to remove lensing and foreground, then one should be able to place some meaningful constraints on the tensor-to-scalar ratio, r. Also, the Planck collaboration is expected to make another release of their data this year (and a final release in 2017). This release should include BB modes vs. l (for all l), which is something they have not yet published.

I'm excited to see the results when they are announced.

In the somewhat near future, we should be expecting results from POLARBEAR-2, SPT-3G, and CLASS. This is an exciting time for studying gravitational waves produced during inflation.


## Friday, February 5, 2016

### Quick Update on BB Modes in CMB

This is just a quick update.

I found an arXiv manuscript by a researcher who was one of the many co-authors on last year's joint BICEP/Planck paper: H. U. Nørgaard-Nielsen.

He has taken the Planck 2015 polarization maps (U & Q) and has tried to remove the dust foreground in order to obtain EE and BB power spectra. Note that the Planck Collaboration has yet to publish official BB power spectra (except between l = 2 and l = 30).

The plot below from his manuscript shows his determination of the CMB's BB power spectrum (red) and models for the lensing B-modes plus primordial B-modes (blue) as a function of the tensor-to-scalar ratio (r = T/S).

What we can see is (a) there is large error in the data, which is due to the low sensitivity of Planck to BB modes, and (b) there is likely a spike around l = 225, which corresponds to the location of the first peak in the TT data. As such, it would be interesting to see what H. U. Nørgaard-Nielsen obtains for the TB correlation spectrum. (The TB power spectrum is not in the manuscript.)

(c) There's no way (using only these results) to place a meaningful constraint on the tensor-to-scalar ratio, or even on the lensing B-modes.

As such, we will have to wait for BICEP3 and CLASS results before we can place any real constraints on the tensor-to-scalar ratio. (Luckily, BICEP3 and CLASS will be able to improve their constraints on r = T/S by using Planck estimates for dust and for lensing B-modes.)


## Tuesday, December 8, 2015

### Updates on Warm Dark Matter & What's Up with the Lyman Alpha Forest

A couple of quick updates, followed by a discussion of the most recent M. Viel paper regarding the Lyman Alpha Forest and limits on warm dark matter.

Update#1: Jeltema and Profumo find no evidence for a 3.5 keV line in the Draco spheroidal galaxy. This is yet another paper that finds no evidence for the 3.5 keV line when looking at dwarf galaxies. The source of the 3.5 keV line may be unique to elliptical galaxies and likely has nothing to do with dark matter.

Update#2: (Related to the LHC) Both the ATLAS and CMS detectors recently published data from Run II at 13 TeV and report no evidence for resonances (that would produce di-jets) with masses up to ~6 TeV. In other words, cold dark matter and supersymmetry are running out of places to hide.

Update#3: Baur et al. (including M. Viel) posted a pre-print of a paper submitted to JCAP titled "Lyman Alpha Forests cool Warm Dark Matter." The pre-print argues that "Using an unprecedentedly large sample of medium resolution QSO spectra from the ninth data release of SDSS, along with a state-of-the-art set of hydrodynamical simulations to model the Lyman-alpha forest in the non-linear regime, we issue the tightest bounds to date on pure dark matter particles: mX > 4.35 keV (95% CL) for early decoupled thermal relics such as a hypothetical gravitino, and its corresponding bound for a non-resonantly produced right-handed neutrino ms > 31.7 keV (95% CL)."
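The two quoted bounds are related by a standard power-law mapping between a thermal-relic mass and a non-resonantly produced sterile-neutrino mass (the form goes back to Viel et al. 2005). The coefficient below is my own fit to the pair of numbers quoted in the abstract, so treat it as approximate:

```python
# Approximate mapping between a thermal-relic WDM mass and a non-resonantly
# produced sterile-neutrino mass: m_s ~ A * (m_X / keV)^(4/3) keV.
# A ~ 4.46 is fitted here to the paper's quoted pair
# (m_X > 4.35 keV  <->  m_s > 31.7 keV); illustrative only.
def sterile_mass_keV(m_thermal_keV, A=4.46):
    return A * m_thermal_keV ** (4.0 / 3.0)

print(f"{sterile_mass_keV(4.35):.1f} keV")  # ~31.7 keV, matching the abstract
```

The mapping matters because the "warm" or "cold" behavior of the dark matter is set by its free-streaming length, which the two production mechanisms relate differently to the particle mass.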

However, I think that we need to be very skeptical of this work. Here are some of my issues with this manuscript, as well as with many of the other Lyman Alpha Forest manuscripts out there.


## Monday, November 2, 2015

### Update on BB modes from BICEP/KECK: Dust is looking more likely

Just today, the BICEP/KECK group placed a new manuscript on arXiv with their latest data on BB modes. Included in the new paper is data collected by the KECK telescope at 95 GHz. (The new data is in red in the figure below.)

The gist of the paper is that the new data at 95 GHz finds only very slight (if any) evidence for BB modes from tensor modes (i.e., inflationary gravitational waves). Right now, there's not enough data to rule out dust. The error bars are just too large to say anything conclusive. The point here is that more data at 95 GHz (where there is less signal from dust than at 150 GHz) does __not__ help to confirm the tensor-mode nature of the signal found at 150 GHz. (Note that this does not help or hurt inflation theories in general; it only helps to rule out a few of the theories that predict large values of r. In my opinion, slow-roll inflation with the Higgs field as the slow-roll scalar field is a pretty strong theory. The question is: what is the scale of inflation? And this new data seems to suggest that the energy scale for inflation is low enough that it will be difficult to experimentally measure tensor-mode gravitational waves over the next decade.)

## Friday, October 30, 2015

### Recent Updates and Some Recent Silly Headlines

It's been a few months. So, here are some updates:

(1) Dark matter did not kill off the dinosaurs. This is just silly. Dark matter is likely warm, not cold, and hence doesn't clump on small scales. So, it can't be blamed for killing off the dinosaurs.

(2) China can't sustain 7%/yr growth rates when its interest rates are only 4.35%. Also, lowering interest rates doesn't promote growth. Think about it: how can you improve the growth rate of your investments when you are willing to accept low rates of return from your portfolio?

(3) China's "social credit score" is creepy and downright authoritarian. Don't protest against the government or else they will lower your "social credit score." Be a good citizen and don't mind the fact that you can't vote us out of office!

(4) I no longer believe that dark energy can be explained by decaying dark matter. Instead, dark energy is likely just the vacuum energy density of the Higgs field. The expectation value of the Higgs field, φ = 246 GeV, is known, but we don't know the vacuum energy density V(φ).

V(φ) could be pretty much any value.

As proposed by Mikhail Shaposhnikov, the Higgs field could be both the source of inflation and dark energy... depending on the measured properties of known particles, such as the top quark, W boson, and Higgs boson.

(5) The case for warm dark matter is even stronger. A recent paper by Garzilli and Boyarsky shows that the Lyman Alpha forest data (previously used by Viel et al. 2013 to rule out certain warm dark matter candidates) is actually pointing towards a rest mass for dark matter of ~3 keV. (Though values greater than 3 keV are still within the 1-sigma uncertainty, so this is not an actual detection.) The point here is that Viel et al. 2013 likely did not model the temperature of the Intergalactic Medium (IGM) correctly, and likely overestimated the temperature of the IGM at large z. (This is especially likely given that the reionization optical depth and the z of reionization dropped significantly in the 2015 Planck results, and will likely decrease even further in future Planck data releases. A likely value for the optical depth is 0.06, rather than the 0.078 quoted back in 2013 or the 0.068 quoted in 2015.) To be somewhat fair to Viel et al., they did point out in Figure 10 of their paper that the temperature at high z comes out low if they allow the temperature to float at high z; but for their presented results, they did not allow the temperature to drop below 5000 K at z = 5.4.

The results from Garzilli and Boyarsky are summarized below.

(1) Dark matter did not kill off the dinosaurs. This is just silly. Dark matter is likely warm, and not cold, and hence doesn't clump. So, it can't be blamed for killing off the dinosaurs.

(2) China can't sustain 7%/yr growth rates when its interest rates are only 4.35%. Second, if you lower interest rates, you don't promote growth rates. Think about it. How can you improve the growth rate on your investments when you are willing to accept low rates of return from your portfolio?

(3) China's "social credit score" is creepy and downright authoritarian. Don't protest against the government or else they will lower your "social credit score." Be a good citizen and don't mind the fact that you can't vote us out of office!

(4) I no longer believe that dark energy can be explained by decaying dark matter. Instead, dark energy is likely just the vacuum energy density of the Higgs field. The vacuum expectation value, φ, of the Higgs field is 246 GeV, but we don't know the vacuum energy density V(φ).

V(φ) could be pretty much any value.

As proposed by Mikhail Shaposhnikov, the Higgs field could be both the source of inflation and dark energy...depending on the precise masses of known particles, such as the top quark, W boson, and Higgs boson.

(5) The case for Warm Dark Matter is even stronger. A recent paper by Garzilli and Boyarsky shows that the Lyman Alpha forest data (previously used by Viel et al. 2013 to rule out certain warm dark matter candidates) is actually pointing towards a rest mass for dark matter of ~3 keV. (Though values greater than 3 keV are still within 1-sigma uncertainty, so this is not an actual detection.) The point here is that Viel et al. 2013 likely did not model the temperature of the Intergalactic Medium (IGM) correctly, and likely overestimated the temperature of the IGM at large z (especially given that the value of the reionization optical depth and the z for reionization dropped significantly in the 2015 Planck results, and will likely decrease even further in future Planck data releases. A likely value for the optical depth is 0.06 rather than the 0.078 quoted back in 2013 or the 0.068 in 2015.) To be somewhat fair to Viel et al., they did point out in Figure 10 of their paper that the temperature at high z comes out low if they allow the temperature to float at high z, but for their presented results, they did not allow the temperature to drop below 5000 K at z=5.4.

The results from Garzilli and Boyarsky are summarized below.

The figure above shows that a value of 3 keV appears to be consistently the best option at all four values of z. The temperature of the IGM appears to increase from below 5000 K to greater than 10,000 K by z=4.2.
The figure above shows the flux power spectrum of the Lyman Alpha Forest. Note the cutoff in the power spectrum at a wavenumber of ~0.06 s/km. Viel et al. 2013 had previously attributed the cutoff to a high temperature of the IGM. However, according to Garzilli and Boyarsky 2015, these results can be better described by a 3 keV dark matter particle and a lower-temperature IGM. Note also that a 3 keV thermal relic dark matter particle is quite similar to a 7 keV sterile neutrino made with a lepton asymmetry of 10^-6. (See figure 5 of the Garzilli and Boyarsky manuscript.)

## Friday, July 17, 2015

### Comparison of the Wealth of Nations: The 2015 Update

Yup, it’s that time of year again. A few weeks ago, BP released their latest updates for the production and consumption of energy throughout the world. (These links go to the previous posts on this subject made in 2011, in 2012, in 2013, and in 2014 on the Wealth of Nations.) One thing to note is that, this year, I've added five countries to the list of countries analyzed: Saudi Arabia, South Korea, Indonesia, Mexico, and Italy. This now means that all 15 countries with the largest GDP PPP (as of 2015) are included in this analysis. (The next 3 largest countries would be Iran, Turkey, and Spain, which I would like to add next year to the analysis.)

Here are some conclusions before I get into a detailed breakdown of the analysis for this year.

(1) The US economy (as measured in [TW-hrs] of useful electrical and mechanical work produced) increased by 1.0% in 2014.

(2) Many countries had negative growth rates in 2014: Mexico (-1.5%), Canada (-1.8%), Germany (-2.3%), France (-2.7%), Japan (-3.0%), Italy (-4.0%), and the UK (-4.8%). This was a particularly bad year for the UK as far as generating useful work. There were two countries with near-zero growth rates: South Korea (0.0%) and Russia (0.5%.) The major countries with the highest growth rates were: China (3.3%), Indonesia (3.6%), Brazil (3.7%), Saudi Arabia (7.4%), and India (7.8%.) This was a particularly good year for India and Saudi Arabia, which is one reason that Saudi Arabia is now on my list of countries to analyze. (Another reason is that its economy is very different from those of the other countries I analyze, so it'll be good to see how its useful work generation compares with its GDP PPP.)

(3) The purchasing power parity GDP (i.e. PPP GDP) remains a pretty good reflection of a country's capability to do mechanical and electrical work. However, there are still some countries that generate a lot more useful work [TW-hr] than they are given credit for in their GDP PPP. These countries are: Canada, South Korea, and Saudi Arabia. Conversely, there are a number of countries whose GDP PPP is significantly higher than their generation of useful work: Indonesia, Italy, the UK, Germany, and India. Of these, Indonesia has the widest gap between its GDP PPP and its generation of electrical+mechanical work. It's not clear to me why there is such a difference between Canada and Indonesia. As I've mentioned before, if I were a Canadian representative for the IMF, I would voice my concern that the IMF is underestimating the size of the Canadian economy by at least twofold.

So, now I'm going to present a more detailed breakdown of the analysis and present the data in graphical form.

## Sunday, July 5, 2015

### Updates on 3.55 keV line

Update 8/15/2017

I just wrote a new post. I still think that dark matter is sterile neutrinos. However, I'm now leaning towards their masses being on the order of 10^10 GeV (ten to the ten GeV).

For more information on the predictions of Type I Seesaw models, see this Jan 2017 arxiv manuscript by Stephen F. King. This model appears to be quite predictive, yielding the angles for neutrino mixing as well as the CP violation term, delta. The CP violation term is very nearly equal to -90 degrees, which means near-maximal CP violation.

__Original Post__

Well, it looks like the emission peak around 3.55 keV may not actually be from a decaying sterile neutrino. In a previous post, I wrote about the news of a possible sterile neutrino with a rest mass of 7.1 keV. However, recent experimental data does not appear to back up this claim.

For example, see the following papers:

Where do the 3.5 keV photons come from? A morphological study of the Galactic Center and of Perseus

Carlson et al. Jan 2015

Constraints on 3.55 keV line emission from stacked observations of dwarf spheroidal galaxies

Malyshev et al. Aug 2014

Discovery of a 3.5 keV line in the Galactic Centre and a critical look at the origin of the line across astronomical targets

by Jeltema and Profumo Aug 2014

The main evidence (as it seems to me) against the claim for the 7.1 keV particle with a sin squared 2 theta of ~10^-10 is that there is essentially no signal from dwarf spheroidal galaxies.

But all is not lost for sterile neutrinos as warm dark matter. It is still possible to have a sterile neutrino with a smaller value of sin squared 2 theta. A smaller value of theta is possible if there is a larger lepton asymmetry in the universe. The constraints on lepton asymmetry in the universe are extremely weak because N_eff is very loosely constrained. A lepton asymmetry of +/- 0.1 is still entirely possible, whereas only a value of 0.001 would be needed to stay below the constraints set by X-ray emission from dwarf spheroidal galaxies.

As such, it's possible that the sterile neutrino could have a smaller value of theta and still be consistent with cosmological constraints on N_eff.

Finally, I'd like to point out that most of the papers that analyze X-ray emission in order to put constraints on dark matter are pretty ad hoc (the same applies to papers using gamma-ray emission to detect Cold Dark Matter.) There is dark matter all over the place. Everywhere you look, there's dark matter. It's fairly evenly distributed through the universe. True, it's slightly lumpy here and there, but there should be an X-ray signal from every direction (with some redshift depending on how far away the source is.)

As such, researchers in this area need to be much more systematic about searching for X-ray (or gamma-ray) emission from possible dark matter. This means doing a correlation analysis against dark matter lensing maps (after subtracting off known sources of X-rays or gamma-rays.)

(Fornengo and Regis and Zandanel et al. have detailed how to do these correlations, though they have not actually carried them out.)

This is not easy to do because you need to know the density of dark matter as a function of the distance from us across the whole sky. But we now have maps of dark matter density vs. z, and we have X-ray emission as a function of energy. It should be possible to do a full-sky analysis of decaying dark matter (rather than just looking at a few galaxies with poorly understood X-ray sources.)

This means that, in order to claim a 'detection' of a decaying dark matter particle, researchers need to match their signal with actual data on the 3D density of dark matter (i.e. two spatial dimensions plus distance, z.)
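A toy version of the correlation estimator described above might look like the sketch below. The placeholder maps and the simple zero-lag estimator are my own illustration, not anyone's published pipeline; a real analysis would use full-sky HEALPix maps binned in redshift.

```python
import numpy as np

# Illustrative sketch of the proposed correlation analysis: cross-correlate
# an X-ray intensity map with a dark-matter (lensing) map after subtracting
# known sources. Small random arrays stand in as placeholders for real maps.
rng = np.random.default_rng(0)
dm_map = rng.normal(size=(64, 64))                    # placeholder dark matter map
xray_map = 0.3 * dm_map + rng.normal(size=(64, 64))   # placeholder X-ray map, partly tracing dark matter

def cross_correlation(a, b):
    """Normalized zero-lag cross-correlation of two mean-subtracted maps."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).mean() / (a.std() * b.std()))

r = cross_correlation(dm_map, xray_map)
print(r)  # noticeably positive here, since the placeholder X-ray map partly traces the dark matter map
```

A detection claim would then rest on this correlation being significantly nonzero after the known X-ray sources are removed.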

If anybody is aware of such a detailed, full-sky correlation analysis between dark matter and X-ray emission, please provide me a link in the comment section below.

Thanks,

Eddie

## Saturday, July 4, 2015

### A New Cosmological Simulation that models Warm Dark Matter as Quantum Degenerate Fermions & Calcs to Estimate Dark Matter Mass and Temperature

(Updated: 7/14/2015)

In the comment section of a previous post, I had written that I had not seen an article that conducted a cosmological simulation of structure formation that includes the quantum degenerate nature of fermionic, warm dark matter.

I'm happy to say that an article has recently been submitted to arxiv that does just this.

"Structure formation in warm dark matter cosmologies: Top-Bottom Upside-Down"

by Sinziana Paduroiu, Yves Revaz, and Daniel Pfenniger

Basically, the gist of the paper is the following: everybody who has modeled warm dark matter previously has had to make assumptions that (a) are not valid, and (b) simplify the simulations too much. As such, Paduroiu *et al.* argue that modeling warm dark matter is extremely complicated, and nobody is doing it correctly. They have posted a YouTube site with videos of their simulations.

One problem with the manuscript is that there is no comparison to actual data.

One thing that I want to add to the discussion (which I've mentioned previously) is that non-thermal warm dark matter does not behave exactly like thermal warm dark matter with a rest mass lower than its actual rest mass. Too many times, proponents of Cold Dark Matter oversimplify how warm dark matter behaves (such as assuming that non-thermal WDM acts like a thermal WDM particle of lower rest mass), show that the rest mass of the particle that explains the "small-scale crisis" is incompatible with the rest mass required to explain the Lyman-alpha forest data, and then claim that WDM fails and that CDM is therefore still king.

The logic here is silly. In particular, I'm thinking of the paper by Schneider et al., "Warm dark matter does not do better than cold dark matter in solving small-scale inconsistencies."

The logic in this paper is absurd.

(1) We acknowledge that CDM has a small scale crisis

(2) So we poorly model WDM, and find that it can't solve both the small-scale crisis and fit with data from Lyman-Alpha Forest

(3) "Hence, from an astrophysical perspective, there is no convincing reason to favour WDM from thermal or thermal-like production (i.e. neutrinos oscillations) over the standard CDM scenario."

Let's not ignore the small-scale crisis (as was done by all but one of the speakers at the recent CMB@50 event at Princeton a few weeks ago.) There is a real problem with CDM, and it won't be solved by Self-Interacting Cold Dark Matter. The plots below from the Schneider et al. paper and from the Weinberg et al. paper are two ways of visualizing that there is a small-scale crisis for CDM.

There's also the too-high reionization optical depth problem and the core/cusp problem of cold dark matter. There is no "Dark Matter" crisis. It's just a question of what the rest mass and thermal distribution of dark matter are.

Basically, the small-scale crisis is best solved by having a thermal rest mass of ~2 keV and the Lyman-Alpha forest is best fit with a thermal particle with rest mass of 32 keV. (Though, it should be noted that this best fit was done before the Planck 2015 data release. It will be interesting to see how this "best-fit" changes with updated data from Planck, as well as updated BAO, SZ, & lensing data.)

Because it's the velocity, i.e.

v = (4/11)^{1/3} · (3kT_0/mc) · (1 + z) ≈ 151 (1 + z) / m(in eV) km/s,

that's important for the Lyman-Alpha Forest, the Lyman-Alpha Forest can also be equally well explained by a 16 keV rest-mass particle that is born with a thermal energy one half as large as that of a 32 keV thermally-born particle, or by an 8 keV rest-mass particle that is born with a thermal energy one fourth as large.
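This scaling is easy to check numerically. The helper function below is my own illustration; the 151 km/s prefactor and the linear scaling with birth temperature come straight from the formula quoted above.

```python
# Quick numerical check of the v ∝ T/m velocity scaling quoted above.
def thermal_velocity_kms(mass_eV, z, temp_fraction=1.0):
    """Free-streaming velocity (km/s): v ≈ 151 (1+z) / m[eV] for a fully
    thermal particle, scaled by the particle's birth temperature expressed
    as a fraction of the thermal temperature."""
    return 151.0 * (1.0 + z) * temp_fraction / mass_eV

z = 3.0
v32 = thermal_velocity_kms(32e3, z)        # 32 keV, fully thermal
v16 = thermal_velocity_kms(16e3, z, 0.5)   # 16 keV, half the thermal energy
v8 = thermal_velocity_kms(8e3, z, 0.25)    # 8 keV, a quarter of the thermal energy
print(v32, v16, v8)  # identical velocities: Lyman-alpha alone can't distinguish them
```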

(For more information on calculating the temperature/velocity of fermi particles vs. z, which transition from relativistic to non-relativistic at z = z_critical, see equations (10)-(15) in Pfenniger and Muccione 2008.)

Because it's the number density times the deBroglie wavelength cubed that's important for quantum degeneracy at small scales, i.e. ρ_DM·ħ^3/(m^4·v^3) ~ ρ_DM/(m·T^3), quantum degeneracy remains the same provided that m·T^3 = constant = (2 keV)·(T_thermal)^3.

Note that the fact that a resonant sterile neutrino can be born with a temperature less than the temperature of the photons/neutrinos in the surrounding thermal bath means that the sterile neutrinos become non-relativistic sooner than you would expect, and hence their temperature starts dropping as (1+z)^2 sooner than for a thermal particle. The net effect is that, quantum mechanically, a sterile neutrino born with half of the thermal energy and 8 times the rest mass has the same degeneracy parameter as a thermal sterile neutrino. In other words, the number density times deBroglie wavelength cubed is the same for a 2 keV, thermal sterile neutrino as it is for a 16 keV sterile neutrino born with half of the thermal energy.
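The invariance claimed above is simple arithmetic, sketched below; the function name is just for illustration.

```python
# Quick check of the m*T^3 invariance described above: a particle born with
# half the thermal temperature and 8 times the rest mass has the same
# degeneracy parameter as the thermal reference particle.
def degeneracy_parameter(m_keV, T):
    """m * T^3, the combination that fixes quantum degeneracy here."""
    return m_keV * T**3

thermal = degeneracy_parameter(2.0, 1.0)    # 2 keV thermal particle
colder = degeneracy_parameter(16.0, 0.5)    # 16 keV particle at half the temperature
print(thermal, colder)  # both equal 2.0
```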

And as mentioned above, the Lyman Alpha forest suggests that (T_thermal) / (32 keV) = (T/m).

Solving the two equations, you get the following answers:

Rest Mass = 16 keV

Temperature at birth = 0.5 * Global thermal Temperature at the time of birth

So, this means that a fermion dark matter particle with a rest mass of ~16 keV that is born with half of the thermal energy of the photons and neutrinos (and which can't interact with other particles after decoupling) should be able to explain the small-scale crisis, the missing satellite problem, and the Lyman-Alpha forest. It should be noted that there are large error bars on the numbers listed above because of the uncertainty in the best fit for Lyman Alpha and the uncertainty in the best fit for a quantum particle to explain dwarf galaxies. So, the rest mass of dark matter is likely between 10-20 keV, with a birth temperature 0.4-0.6 times that of the surrounding media.
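The two-constraint solve above can be sketched in a few lines (numbers from the post; the variable x is my shorthand for the birth-to-thermal temperature ratio):

```python
# Solve the two constraints stated above:
#   degeneracy:  m * x**3 = 2 keV,  where x = T_birth / T_thermal
#   Lyman-alpha: m = 32 keV * x
# Substituting the second into the first gives 32 * x**4 = 2.
x = (2.0 / 32.0) ** 0.25      # birth temperature as a fraction of thermal
m = 32.0 * x                  # rest mass in keV
print(m, x)  # ≈ 16 keV at half the thermal temperature
```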

Sterile neutrinos generated by the Shi-Fuller mechanism often have a distribution function such that they are effectively colder than the surrounding photons/active-neutrinos by a factor of 0.6.

The main point here is that a non-thermal sterile neutrino can explain the small-scale crisis, the missing satellite problem, and the Lyman-Alpha forest all at once. What we are looking for is a non-thermal particle with a rest mass of ~10-20 keV.

## Sunday, June 7, 2015

### Constraints on Dark Matter & Dark Energy from the Hubble Expansion between z=0 and z~3

(Note: Updated on 6/19 to include Planck 2015 Best Fit for comparison)

Using data for the Hubble expansion as a function of time, this week I'm showing how data on the Hubble Expansion between z=0 and z~3 alone can provide some tight constraints on the amount of dark energy and dark matter in the universe.

First, I'd like to present the experimental data (without any fits to the data), and then I'll present the experimental data shown along with a "best fit equation" and with Planck's recent estimates. The z=0 to z=1.3 data shown below was found in Heavens *et al.* 2014 (which has references to where the original data was collected.) The data point at z=2.34 is from baryon acoustic oscillations (BAO) found in the Lyman-Alpha forest by the BOSS collaboration (Busca *et al.* 2013).

The figure above is a plot of the expansion of the Universe, H(z), as a function of time in the past. Here, I've plotted on the y-axis the Hubble expansion normalized by the Hubble Expansion Constant Today, and then squared. The x-axis is the inverse scale of the universe. A value of 4 means that linear dimensions in the universe would have been 4 times smaller. Also note that this is a log-log plot so that the data points near (1,1) are not scrunched together. One thing to note about the data is that there is a definite change in slope between the data near z=0 and the data at z>1.

Next, we'll discuss the theory behind why the Hubble expansion rate changes with time. We'll focus here on the case in which the total mass in the universe is equal to the critical density (Ω=1). In that case, we can use Equation 2.18 of the Physical Cosmology Class Notes by Tom Theuns to determine how the Hubble expansion rate changes with time. This equation is listed below:
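The referenced equation did not survive as an image, so here it is reconstructed from the standard Friedmann expansion law (the same form as Eq. 2.18 of Theuns' notes), with terms matching the discussion that follows:

```latex
\left(\frac{H}{H_0}\right)^2 = \Omega_\Lambda
  + \Omega_k \left(\frac{a_0}{a}\right)^2
  + \Omega_m \left(\frac{a_0}{a}\right)^3
  + \Omega_r \left(\frac{a_0}{a}\right)^4
```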

As seen above, if the only form of energy density in the universe were dark energy (Ω_{Λ}), then the Hubble expansion rate would be a constant. The data above is clearly not consistent with the case of only dark energy (i.e. a cosmological constant.) The other terms in the equation are: curvature (Ω_{k}), matter (Ω_{m}), and radiation (Ω_{r}). Radiation is defined as particles whose kinetic energy is greater than their rest mass energy. Matter is defined as particles whose kinetic energy is much less than their rest mass energy. Curvature is the curvature of the universe.

If we were in a universe with only radiation, then the Hubble constant would decrease as (a_{0}/a)^{4}, which is the same function as how the radiation energy density decreases as the universe expands. If we were in a universe with only matter, then the Hubble constant would decrease as (a_{0}/a)^{3}, which is the same function as how the matter energy density decreases as the universe expands.

Next, I want to show that the best fit through the experimental data is a universe with approximately 30% matter and 70% dark energy (today.) I wanted to see what would be the best fit through this data (ignoring all other data...which also points to ~30% matter and 70% dark energy.) So, in Excel, I created a quartic polynomial equation with 5 free variables (a + bx + cx^{2} + dx^{3} + ex^{4}), constrained the 5 free variables to sum to a value of 1 (i.e. to constrain the total mass to be equal to the critical density), and constrained the free variables to be non-negative. In this case, the best fit through the data was (0.717, 0, 0, 0.283, 0). These values are pretty close to the values determined using Planck+BAO data. Interestingly, there is no sign of energy that would scale linearly, quadratically, or quartically with (a_{0}/a). This means that the best fit through the data is a universe with only matter and dark energy.

## Thursday, May 28, 2015

### Update to Post on Neutrino Mixing: Visualizing CP violation

In this post, I'll be updating a graph I made last year in a post on the PMNS matrix.

As seen below, the eigenvalues fall very close to the unit circle (especially when including uncertainty... which is not shown in the figure below... the size of the markers does not correspond to the uncertainty in the value of the eigenvalues.) Using previous data, the eigenvalues fall nearly exactly on the unit circle, whereas using the T2K2015+PDG2014 data, the eigenvalues fall slightly off of the unit circle (though within 1-sigma uncertainty.) Interestingly, one of the eigenvalues is extremely close to 1+0i. The other two eigenvalues are close to the unit circle, but far away from 1+0i. The other thing to point out is that the two eigenvalues far from 1+0i are not mirror images of each other on the unit circle. The fact that they are not mirror images is a sign that there is CP violation in the PMNS matrix. If there were no CP violation, one of these eigenvalues would be the complex conjugate of the other: a +/- bi. The last thing to point out is that the value of the determinant using the new data is nearly entirely real-valued and slightly less than 1 (Det = 0.9505 + 0.001i). This is likely a sign that the values of the 4 parameters were chosen by T2K in a way that is not consistent, but there is also still the possibility that the non-unitary value of the determinant is due to another type of neutrino that mixes with the three main species. (This is just speculation because, as can be seen from the old wiki data, the eigenvalues fall very close to the unit circle when the parameters are chosen consistently.)

Old Wiki Data

The reason for the update is that there was a recent announcement by the T2K research group of a measurement of anti-muon-neutrinos converting into other anti-neutrino species. I'd like to first show a plot from their recent presentation in which they show the uncertainty in both the 2-3 mixing angle and the 2-3 mass difference.

As can be seen in their figure on Slide 48, the data is entirely consistent with the 2-3 mixing angle being the same for neutrinos as for anti-neutrinos. This is a good sign that the mixing angle for anti-neutrinos is the same as for neutrinos; but note that the data is also compatible with there being a CP-violating phase. In fact, the best-fit value using T2K data plus Particle Data Group 2014 data yields a value of the CP-violating phase that is close to -90 degrees. What's interesting with these new estimates for the 4 parameters of the PMNS matrix is that (a) the sum of all four angles (including δ_{CP}) is close to zero (within error) and (b) the sum of the three thetas is close to 90 degrees (within error.)

T2K 2015 summary

| Angle | (Radians) | (Degrees) | Sin^{2}(θ) | Sin^{2}(2θ) |
|---|---|---|---|---|
| θ_{12} | 0.584 | 33.5 | 0.304 | 0.846 |
| θ_{13} | 0.158 | 9.1 | 0.0248 | 0.097 |
| θ_{23} | 0.812 | 46.5 | 0.527 | 0.997 |
| δ_{CP} | -1.55 | -88.8 | | |
| SUM | 0.00 | 0.0 | | |
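As a cross-check of the unit-circle discussion above, here is a minimal numpy sketch (my own, not from T2K) that builds the PMNS matrix in the standard PDG parametrization from the angles in the table and inspects its eigenvalues and determinant:

```python
import numpy as np

# Mixing parameters from the T2K 2015 + PDG 2014 table above, in radians.
t12, t13, t23, dcp = 0.584, 0.158, 0.812, -1.55

s12, c12 = np.sin(t12), np.cos(t12)
s13, c13 = np.sin(t13), np.cos(t13)
s23, c23 = np.sin(t23), np.cos(t23)
e = np.exp(1j * dcp)  # CP phase factor e^{i*delta}

# Standard PDG parametrization of the PMNS matrix.
U = np.array([
    [c12 * c13,                         s12 * c13,                         s13 * np.conj(e)],
    [-s12 * c23 - c12 * s23 * s13 * e,  c12 * c23 - s12 * s23 * s13 * e,   s23 * c13],
    [ s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e,   c23 * c13],
])

eigvals = np.linalg.eigvals(U)
det = np.linalg.det(U)

# A matrix built this way is exactly unitary: every eigenvalue sits on the
# unit circle and det(U) = 1.  A determinant such as 0.9505 + 0.001i can
# therefore only arise from quoting the four parameters inconsistently
# (e.g. rounded sin^2 values mixed with rounded angles).
print(np.abs(eigvals))  # each magnitude ~ 1.0
print(det)              # ~ 1 + 0i
```

Note that this supports the speculation that a non-unitary determinant reflects an inconsistent choice of the quoted parameters rather than new physics, unless the matrix elements themselves are measured to be non-unitary.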

The PMNS matrix corresponding to these values can be visualized using Wolfram Alpha by clicking on the site below:

The 1-sigma uncertainty around the δ_{CP} term is approximately -90 +/- 90 degrees. This means that there is a good chance that the value of δ_{CP} is between -180 and 0 degrees, and it also means that there is a really good chance that exp(i·δ_{CP}) is non-real, which means that there can be CP violation due to neutrino mixing.

Old Wiki Data

| Angle | (Radians) |
|---|---|
| θ_{12} | 0.587 |
| θ_{13} | 0.156 |
| θ_{23} | 0.670 |
| δ_{CP} | -2.89 |
| SUM | -1.48 |

## Tuesday, May 12, 2015

### Summary of the Case of ~ 7keV Sterile Neutrinos as Dark Matter

Update 8/15/2017

I just wrote a new post. I still think that dark matter is sterile neutrinos. However, I'm leaning now towards their masses being on the order of 10^10 GeV (ten to the ten GeV), i.e.

For more information on the predictions of Type I Seesaw models, see this Jan 2017 arxiv manuscript by Stephen F. King. This model appears to be quite predictive, in that it predicts the angles for neutrino mixing as well as the CP-violation term, delta. The CP-violation term is very nearly equal to -90 degrees, which means near-maximal CP violation.

__Original Post__: A resonantly-produced sterile-neutrino minimal extension to the SM is entirely consistent with all known particle physics and astrophysics data sets. Such an SM extension means that only a small number of adjustments to the Standard Model of Particle Physics (SM) could be required for the SM to explain all astrophysics data sets, i.e. to explain dark matter, dark energy and inflation. What could such a RP-sterile-neutrino SM extension model look like?

(1) There are no new particle-classes to be discovered (other than nailing down the mass of the light, active neutrinos and the heavier, sterile neutrinos)

(2) Not counting spin degeneracy: There were likely 24 massless particles before the electro-weak transition. After this transition, some of the particles acquire mass, and the symmetry is broken into:

1 photon, 3 gauge bosons (W+ / W- / Z ) and 8 gluons (12 Integer spin bosons in total)

6 quarks, 6 leptons, i.e. electron / neutrinos (12 Non-integer spin fermions in total)

Note that 24 is the number of symmetry operators in the permutation symmetry group S(4), and the conjugacy classes within the S(4) symmetry group have sizes 1, 3, 8 (even permutations) and 6, 6 (odd permutations.) This is likely not a coincidence. Given that there are 4 (known) forces of nature and 4 (known) dimensions of spacetime, the S(4) symmetry group is likely to appear in nature. (See Hagedorn et al. 2006)

(3) Higgs scalar field is the inflaton field required to produce a universe with: (a) near zero curvature (i.e. flat after inflation), (b) Gaussian primordial fluctuations, (c) scalar tilt of ~0.965 for the fluctuations, (d) a near-zero running of the scalar tilt, (e) small, but near zero tensor fluctuations, and (f) no monopoles/knots.

(4) The sum of the squared rest masses of the SM bosons is equal to the sum of the squared rest masses of the SM fermions, and the sum of these two is equal to the rest-mass-squared equivalent energy of the Higgs field. In other words, during the electro-weak transition in which some particles acquire mass via the Higgs mechanism, half of the rest-mass-squared energy goes towards bosons (H, W, Z) and half goes to fermions (e, ve, u, d, etc...). If this is the case, then there are constraints on the mass of any sterile neutrinos. In order to not affect this "sum of rest mass squared" calculation, the rest mass of any sterile neutrinos must be less than ~10 GeV. A keV sterile neutrino would have no effect on this "sum of rest mass squared" calculation.
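The arithmetic in point (4) can be checked numerically. This is my own back-of-the-envelope sketch, with approximate PDG-style central values (an assumption, not numbers from the original post), counting each boson and fermion species once:

```python
# Approximate central values in GeV (assumption: PDG-style ~2015 numbers).
m_H, m_W, m_Z = 125.1, 80.4, 91.2               # massive SM bosons
m_t, m_b, m_tau, m_c = 173.2, 4.18, 1.78, 1.27  # heaviest SM fermions
v = 246.0                                       # Higgs vacuum expectation value

boson_sum = m_H**2 + m_W**2 + m_Z**2
fermion_sum = m_t**2 + m_b**2 + m_tau**2 + m_c**2  # lighter fermions negligible

# "Half of the rest-mass-squared energy" of the Higgs field, taken here
# as v^2 / 2 with v the Higgs vacuum expectation value.
higgs_half = v**2 / 2

print(boson_sum / higgs_half)    # ~ 1.006
print(fermion_sum / higgs_half)  # ~ 0.992
```

Both ratios come out within about 1% of unity for these input values, which is the near-equality the point relies on; a sterile neutrino much heavier than ~10 GeV would visibly spoil the fermion sum.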

So, it is entirely possible that there is no new physics (outside of the neutrino sector), provided that (a) the Higgs scalar field is the inflaton field, (b) sterile neutrinos are the dark matter particles, and (c) light active neutrinos are the cause of what we call dark energy. In the rest of this post, I summarize the case for ~7 keV resonantly-produced, sterile neutrinos as the main dark matter candidate. Note: some of these points, related to (b), can be found in the following papers by de Vega 2014 and Popa et al 2015.
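The S(4) counting in point (2) above can be verified by brute force. A short Python sketch (the helper `cycle_type` is my own, for illustration):

```python
from itertools import permutations
from collections import Counter

def cycle_type(p):
    """Return the sorted cycle lengths of a permutation p of (0, ..., n-1)."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j, length = p[j], length + 1
        lengths.append(length)
    return tuple(sorted(lengths))

# Group the 24 elements of S(4) into conjugacy classes (= cycle types).
counts = Counter(cycle_type(p) for p in permutations(range(4)))

# Even classes: identity (1,1,1,1): 1, double transpositions (2,2): 3,
#               3-cycles (1,3): 8                      -> 12 even elements
# Odd classes:  transpositions (1,1,2): 6, 4-cycles (4,): 6 -> 12 odd elements
print(sorted(counts.values()))  # -> [1, 3, 6, 6, 8]
```

This reproduces the class sizes 1, 3, 8 (even) and 6, 6 (odd) quoted in the text, with the even classes totalling 12 and the odd classes totalling 12, matching the boson/fermion split.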

## Tuesday, May 5, 2015

### Mysterious Cold Spot in the CMB: Still a mystery

*Summary: A research group has recently suggested that a supervoid can explain the Cold Spot in the CMB. The problem is that a supervoid (via the ISW effect) can't explain the actual Planck TT data.*

There has been a lot of attention over the last decade to a particularly large Cold Spot in the CMB, as seen both by WMAP and Planck (Image from this article.) Though, the Cold Spot is somewhat hard to see in the Planck data without a circle around it because there are so many "large-scale cold spots." The mystery behind the famed Cold Spot in the CMB is that the cold region is surrounded by a relatively hot region, with a difference of ~70 µK between the core of the cold spot and the surrounding region; typical variations between regions this close together are only ~18 µK.

The two images directly above are from the Planck 2015 results. The top of these two figures is the polarization data, and the bottom is the temperature data. Note that the scale goes from -300 µK to +300 µK.

While this new finding of a massive supervoid of galaxies in the region near the Cold Spot is interesting, it should be (and already has been) pointed out that such a supervoid can't explain the ∼ -100 µK cold spot in the CMB via the standard ISW effect. As stated in the article "Can a supervoid explain the Cold Spot?" by Nadathur et al., a supervoid is always disfavoured as an explanation compared with a random statistical fluctuation on the last scattering surface. There's just not enough of a void to explain the Cold Spot, because the temperature would only be ∼ 20 µK below the average temperature due to the late-time integrated Sachs-Wolfe (ISW) effect. Nadathur et al. state, "We have further shown that in order to produce ∆T ∼ −150 µK as seen at the Cold Spot location a void would need to be so large and so empty that within the standard ΛCDM framework the probability of its existence is essentially zero." The main argument against the supervoid-only explanation can be seen in the figure by Seth Nadathur in his blog post regarding the paper he first-authored on this topic.

## Wednesday, April 15, 2015

### Repulsive keV Dark Matter

The case for 2-10 keV mass dark matter has gotten a lot stronger in 2015.

First, as mentioned in the previous post, the Planck 2015 results significantly lowered the value of the optical depth for photon-electron scattering during reionization and significantly lowered the z-value at which reionization occurred. Effectively, this pushes back the time at which the first stars and galaxies formed, and therefore indirectly suggests that dark matter took longer to clump together than predicted by GeV cold dark matter theories. As can be seen in the last figure in that previous post, a lower value of optical depth is possible for thermal relics with rest masses of ~2-3 keV and is incompatible (at 1-2 sigma) with CDM theories.

Second, just today, it was announced that there is a good chance that dark matter is actually self-repulsive. (Of course, it's been known indirectly for a while that dark matter is self-repulsive, because there is a missing core of dark matter in the center of galaxies... which can be explained by fermion repulsion between identical particles.) The news today is that there appears to be repulsion between dark matter halos in a 'slow collision.' This should be contrasted with the lack of repulsion when two dark matter halos (such as in the Bullet Cluster) collide in a 'fast collision.'

So how do we reconcile all of this information? Actually, the answer is quite simple.

Dark matter halos are made of Fermi particles of keV rest mass that are quantum degenerate when their density is high and non-degenerate when their density is low.

When two Fermi-degenerate halos of uncharged particles collide with velocities much greater than their Fermi velocity, the clusters of particles pass right through each other. Pfenniger & Muccione calculated what would happen in collisions between Fermi particles (or we could imagine two degenerate halos; it doesn't matter, provided that we are talking about a degenerate halo of particles or a single particle.)

To quote Pfenniger & Muccione: "An interesting aspect developed for example by Huang (1964, chap. 10), is that the first quantum correction to a classical perfect gas... caused purely by the bosonic or fermionic nature of the particles is **mathematically equivalent** to a particle-particle interaction potential: φ(r) = −kT ∙ ln [1 ± exp (−2π r^{2} / λ_{dB}^{2})]."

When going from a two-particle collision to a collision between two halos, the main difference is that the de Broglie wavelength of the particle is replaced by the effective degeneracy radius of the halo.
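The short-range behavior of the quoted statistical potential can be seen in a dimensionless sketch, taking kT = 1 and λ_dB = 1 (my normalization, not the paper's units):

```python
import numpy as np

def phi(r, fermions=True):
    """Huang's statistical potential phi(r) = -kT ln[1 +/- exp(-2 pi r^2 / lambda^2)],
    in units where kT = 1 and lambda_dB = 1."""
    sign = -1.0 if fermions else +1.0  # '-' for fermions, '+' for bosons
    return -np.log(1.0 + sign * np.exp(-2.0 * np.pi * r**2))

# Fermions: phi > 0 at short range (effective repulsion, diverging as r -> 0).
# Bosons:   phi < 0 at short range (effective attraction).
# Both vanish for r >> lambda_dB, recovering the classical gas.
print(phi(0.1, fermions=True))   # positive
print(phi(0.1, fermions=False))  # negative
print(phi(5.0, fermions=True))   # ~ 0
```

This makes the qualitative point of the section explicit: identical fermions feel an effective repulsion at separations within a de Broglie wavelength (or, for halos, within the degeneracy radius), while the interaction switches off entirely at large separation or high relative velocity.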

When the directed velocity is large compared with the thermal velocity of the cluster, the Fermi clusters pass right through each other. The center of mass moves nearly the same as if these were classical particles. In other words, there would be no separation between the center of the dark matter mass and the center of the solar matter mass.

In the next case below, the directed velocity of the two particles (or clusters) is decreased 3-fold. Here, there is some slight repulsion between the clusters, so there would be a slight separation between the center of the dark matter mass and the center of the non-dark matter mass, because the solar matter will pass through unaffected by the DM collision (unless there were actually a solar-solar collision... however unlikely.)

Finally, in the last case, the directed velocity of the two particles (or clusters) is decreased a further 3-fold. In this case, the particles have the time to interact, and can actually gravitationally coalesce and entangle.

This means that we should expect the "cross section of interaction" to depend greatly on how quickly the dark matter clusters are colliding.

## Monday, March 23, 2015

### Concordance Cosmology? Not yet

The term "Concordance Cosmology" gets thrown a round a lot in the field of cosmology. So too does the term "Precision Cosmology."

However, I'm a little hesitant to use these terms when we don't know what 95% of the matter/energy in the universe is. Cosmologists use the term "Precision Cosmology" to describe the fact that they can use a number of data sets to constrain variables, such as the rest mass of neutrinos, the spacetime curvature of the universe, or the number of neutrino species. However, many of these constraints are only valid when assuming a certain, rather ad hoc model.

In many respects, this Standard Model of Cosmology, i.e. Lambda CDM, is a great starting point, and most people who use it as a starting point are fully aware of its weaknesses and eagerly await being able to find corrections to the model. The problem is that it's sometimes referred to as if it were one complete, consistent model (or referred to as a complete model once there's some small tweak over here or over there.) However, LCDM is not consistent and is rather ad hoc. The goal of this post is to poke holes in the idea that there is a "Standard Model of Cosmology" in the same sense that there's a "Standard Model of Particle Physics." (Note that the SM of particle physics is much closer to being a standard model... with the big exception being the lack of understanding of neutrino physics, i.e. how heavy are neutrinos, and is there CP violation in the neutrino sector?)

So, let's begin with the issues with the Standard Model of Cosmology, i.e. Lambda CDM:

(1) There is no mechanism for making more matter than anti-matter in the Standard Model of Cosmology. The LCDM model starts off with an initial difference between matter and anti-matter. The physics required to make more matter than anti-matter is not in the model, and this data set (i.e. the value of the baryon and lepton excess fractions) is excluded when doing "Precision Cosmology."

(2) Cold Dark Matter is thrown in ad hoc. The mass of the dark matter particle is not in the model; it's just assumed to be some >GeV rest-mass particle made between the electro-weak transition and neutrino decoupling from the charged particles. The mechanism for making the cold dark matter is not consistent with the Standard Model of Particle Physics. So, it's interesting that the "Standard Model of Cosmology" so easily throws out the much better-known "Standard Model of Particle Physics." This means that there is no "Standard Model of Cosmo-Particle Physics."

There's also the fact that Cold Dark Matter over-predicts the number of satellite galaxies and over-predicts the amount of dark matter in the center of galaxies. But once again, this data set is conveniently excluded when doing "Precision Cosmology" and, worse, the mass of the 'cold dark matter particle' is not even a free variable that Planck or other cosmology groups include in the "Standard Model of Cosmology." There are tens of free variables that Planck uses to fit their data, but unfortunately, the mass of the dark matter particle is not one of them.

(3) Dark Energy is a constant added to Einstein's General Theory of Relativity, and as such, it is completely ad hoc. The beauty of Einstein's General Theory of Relativity was its simplicity. Adding a constant to the theory destroys part of the simplicity of the theory.

It also appears that, if Dark Energy is not just a constant, then it's not thermodynamically stable (for most values of Wo/Wa.) (See the following article: http://arxiv.org/pdf/1501.03491v1.pdf)

So, this element of the "Standard Model of Cosmology" is an ad hoc constant added to GR. And while it's true that dark energy could just be the energy density of the vacuum of space-time, the particular value favored by LambdaCDM is completely ad hoc. The energy density of space-time appears to be on the order of (2 meV)^4. What's so special about 2 meV?

## Wednesday, March 11, 2015

### The Two-Step Method for Controlling Inflation and Maintaining Steady Growth Rates

Given that I've focused a lot of attention recently on Dark Matter & Dark Energy, I've decided to switch gears and get back to topics of Energy&Currency.

The recent crisis in Russia has demonstrated the problems with basing a currency on any one commodity. In the case of Russia, approximately 26.5% of its GDP comes from the sale of petroleum products. The contribution of oil/gas taxes to the Russian government is roughly 50% of its total revenue, which means that Russia's currency is strongly impacted by changes in the price of oil/natural gas.

But not all oil/gas-producing countries are feeling the shock of low gas prices. The key to avoiding the shocks is to make sure that a large portion of the revenue from oil/gas is invested into stocks/bonds of companies/governments that will benefit from lower oil/gas prices. So, it's fine for an oil/gas-producing country to be specialized in one area of production, provided that its revenue goes into investments that will make money when oil/gas prices drop.

So, this leads me to the question I've been trying to answer for years: how can a country maintain constant inflation rates while also maintaining steady growth rates?

There are some bad options available: (a) a gold-based currency, (b) a fiat currency without rules, (c) any currency backed by only one commodity, such as PetroDollars.

Second, I'd like to discuss the problem with leaving the control of the currency to just a Federal Board of Bankers. For example, there is a famous economist at Stanford named John Taylor. (You can check out his blog here.) He is credited with inventing the Taylor Rule, which determines how the Federal Reserve should change the interest rate as a function of the inflation rate and the growth rate of the economy.

While I'm a proponent of making the Federal Reserve rule-based, there is a clear flaw in John Taylor's Rule for controlling inflation and GDP:

There are two measured, independent variables (inflation and real GDP growth), but only one controlled, dependent variable (the federal funds rate.)

As such, the Taylor Rule is doomed to fail. You can't control the fate of two independent variables by changing only one dependent variable. In order to control both the inflation rate and the real GDP growth rate, you need two free variables. The focus of the rest of this post is on how to use 2 input variables to control the 2 output variables (inflation and real GDP growth rate.)
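The "one instrument, two targets" problem above is a rank argument, and it can be illustrated with a toy linear model (all coefficients here are hypothetical, chosen only for illustration):

```python
import numpy as np

# Toy linear policy model (hypothetical coefficients): the outputs
# y = (inflation, real GDP growth) respond linearly to the instruments u.
A1 = np.array([[0.5], [0.3]])             # one instrument: the funds rate
A2 = np.array([[0.5, 0.2], [0.3, -0.4]])  # two independent instruments

target = np.array([2.0, 3.0])  # desired (inflation %, growth %)

# One instrument: the best least-squares setting generally misses both targets.
u1, res1, *_ = np.linalg.lstsq(A1, target, rcond=None)
# Two instruments (A2 invertible): both targets are hit exactly.
u2 = np.linalg.solve(A2, target)

print(res1)              # nonzero residual: one lever can't pin down two outputs
print(A2 @ u2 - target)  # ~ [0, 0]
```

Unless the target vector happens to lie along the single instrument's response direction, one lever cannot reach an arbitrary pair of targets; with two independent levers the 2x2 system is invertible and any target pair is reachable.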

You need two keys, and the keys should be held by different people.

The recent crisis in Russia has demonstrated the problems with basing a currency on any one commodity. In the case of Russia, approximately 26.5% of its GDP comes from the sale of petroleum products. The contribution of oil/tax taxes to the Russian government is roughly 50% of the total revenue for the government, which means that Russia's currency is strongly impacted by changes in the price of oil/natural gas.

But not all oil/gas producing countries are feeling the shock of low gas prices. The key to avoiding the shocks is to make sure that a large portion of the revenue from oil/gas needs is invested into stocks/bonds of companies/governments that will benefit from lower oil/gas prices. So, it's fine for a oil/gas-producing country to be specialized in one area production, provided that its revenue goes into investments that will make money when oil/gas prices drop.

So, this leads me to the question I've been trying to answer for years: how can a country maintain constant inflation rates while also maintain steady growth rates?

There are some bad options available: (a) Gold-based currency (b) Fiat currency without rules (c) Any currency based by only one commodity...such a PertroDollars

Second, I'd like to discuss the problem with leaving the control of the currency to just a Federal Board of Bankers. For example, there is a famous economist at Stanford named John Taylor. (You can check out his blog here.) He is credited with inventing the Taylor Rule to determine how Federal Reserves should change the interest rate as a function of the inflation rate and the growth rate of the economy.

While I'm a proponent of making the Federal Reserve rule-based, there is a clear flaw in John Taylor's Rule for controlling inflation and GDP:

There are two measured, independent variables (inflation and real GDP growth), but only one controlled, dependent variable (the federal funds rate.)
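For reference, here's the textbook 1993 form of Taylor's rule sketched in Python (this uses Taylor's original 0.5/0.5 coefficients, a 2% equilibrium real rate, and a 2% inflation target; central banks vary these in practice):

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Taylor's original 1993 rule (all quantities in percent):
    nominal funds rate = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap)."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# One instrument: the rule maps any (inflation, output_gap) pair to a
# single rate, so two very different economic states can demand very
# different rates that a single dial must somehow deliver.
print(taylor_rule(2.0, 0.0))   # 4.0 (economy at target)
print(taylor_rule(4.0, -2.0))  # 6.0 (high inflation, shrinking output)
```

Note the second case: inflation and the output gap pull the rate in opposite directions, and the single funds rate has to split the difference.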

As such, the Taylor rule is doomed to fail. You can't control the fate of two independent variables by changing only one dependent variable. In order to control both the inflation rate and the real GDP growth rate, you need two free variables. The focus of the rest of this blog is on how to use two input variables to control the two output variables (inflation and the real GDP growth rate.)

You need two keys, and the keys should be held by different people.

## Tuesday, March 10, 2015

### Review of "A World Without Time" and an Argument against Time Travel (Neutrino Drag)

Over the holidays, I read a book written back in 2005 titled "A World Without Time."

It's a good holiday read because the first half of the book consists largely of biographical chapters on Albert Einstein and Kurt Gödel, and the second half is a step-by-step presentation of Gödel's argument that, if the General Theory of Relativity is true, then our perceived "flow of time" is not real.

This is an interesting argument, so I'd like to discuss it further in this post. It's actually quite similar to the argument that Julian Barbour has been making for the last couple of decades. (In this prior post, I discuss Dr. Barbour's latest addition to his long-standing argument that there is no such thing as time because 'time' in General Relativity is nothing more than another spatial dimension.)

So, let's look at Kurt Gödel's argument in more detail:

What Kurt Gödel did was to build a hypothetical universe that was consistent with GR. (While this universe was nothing like our own, it was entirely consistent with the laws of GR.) In this hypothetical universe, there were closed space-time paths, much as the Earth traces an 'essentially' closed spatial path around the Sun. On such a closed space-time path, you would wind up right back where you started, meaning that you could revisit the past and it would be exactly the same as before.

Kurt Gödel then argued that in this hypothetical universe there can be no such thing as a 'flow of time,' because you could go back in time or forward in time just as easily as you can go left or right at a T-intersection.

Kurt Gödel then argued that, since the 'flow of time' does not exist in this hypothetical universe, and since this universe is entirely consistent with General Relativity, the 'flow of time' does not exist in our universe either, because our universe is governed by the laws of General Relativity.

So, I think that this is a valid argument, except for the last step. The problem with the last step is that there are four fundamental forces in our universe (gravity, E&M, weak nuclear, and strong nuclear.) I would agree with Kurt Gödel if the only force of nature had been gravity, but the weak nuclear force just doesn't cooperate so easily.

It's well known that the weak nuclear force violates both CP and T symmetry. In other words, there is an arrow of time associated with the weak nuclear force, and this arrow of time does not exist in the other forces of nature. So, what keeps us from ever being able to make a closed space-time path is the weak nuclear force, because we are ultimately surrounded by particles that interact via it, i.e. neutrinos (and perhaps dark matter particles also interact via the weak nuclear force.) While the interaction of space-ships with neutrinos is negligible at normal velocities, the interaction becomes extreme when the space-ship starts moving at relativistic velocities (i.e. having a directed energy per nucleon on the ship of around ~10 GeV.) For example, I calculated that, when traveling at a speed where your kinetic energy equals your rest mass energy, the protons in your body start converting into neutrons at a rate of roughly 1 proton every 2 milliseconds. (While this isn't particularly fast given the number of protons in your body, you can hopefully see that there's no way for you to travel anywhere near the speed of light without neutrinos significantly destroying the structure of your body.)
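For the curious, the speed in question follows directly from the relativistic kinetic energy formula KE = (gamma - 1)mc^2. A quick back-of-the-envelope sketch (my own arithmetic, not from the book):

```python
import math

M_NUCLEON_GEV = 0.938  # nucleon rest-mass energy in GeV

def speed_for_kinetic_energy(ke_gev, m_gev=M_NUCLEON_GEV):
    """Speed (as a fraction of c) at which a particle of rest mass m
    has relativistic kinetic energy KE = (gamma - 1) * m * c^2."""
    gamma = 1.0 + ke_gev / m_gev
    return math.sqrt(1.0 - 1.0 / gamma**2)

# KE equal to rest-mass energy => gamma = 2, so v = sqrt(3)/2 ~ 0.866 c
print(speed_for_kinetic_energy(M_NUCLEON_GEV))  # ~0.866
# The ~10 GeV/nucleon regime mentioned above is already v ~ 0.996 c
print(speed_for_kinetic_energy(10.0))
```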

So, there's no way to get a space-ship up to the required velocity/energy to create a closed space-time loop without bumping into neutrinos, which create a drag force on the space-ship as they collide with its electrons. The irreversibility of this drag (due to collisions with particles that interact via the weak nuclear force) is what prevents us from creating a closed space-time path.

But what if there were no neutrinos for us to run into? Would time travel be possible? (i.e. would a closed space-time path be possible?)

I would answer that time travel would be possible in a world without the weak nuclear force. However, we live in a world with the weak nuclear force, and there is no way to get around it. In fact, the real question is: are the background neutrinos a requirement of our time-asymmetric world?

I don't think that it's coincidental that we are surrounded by neutrinos (the very particles that prevent us from traveling back in time.) It's the neutrinos and dark matter particles that carry most of the entropy in the universe. It's the high-entropy, diffuse nature of neutrinos that pushes back against attempts to create closed space-time loops (just as it's difficult to create vortices in extremely viscous liquids like honey.)

So, in summary, I think that Kurt Gödel has a valid point that there is no flow of time in General Relativity (alone.) But when you combine GR with the weak nuclear force, then travel along a closed space-time path is not possible.


## Wednesday, November 19, 2014

### Dark Matter Decaying into Dark Energy

Update Aug 14 2015: (I found a paper written in April 2015 that models dark matter decaying into relativistic matter...such as light neutrinos. There are some tight constraints on this model.)

I was quite excited to see today that IOP PhysicsWorld had an article on Dark Matter decaying into Dark Energy. The article discusses a recently accepted paper by Salvatelli et al. in Physical Review Letters.

The gist of this recent PRL paper by Salvatelli et al. is the following: the tension between Planck's CMB data (under a LambdaCDM model) and many other data sources, such as measurements of Ho (the Hubble constant at z=0) by...you guessed it...the Hubble Space Telescope, can be resolved in a model in which dark matter decays into dark energy (but only if this interaction occurs after a redshift of 0.9.) There has been a major problem reconciling the low value of Ho estimated from Planck's CMB data (Ho = 67.3 +/- 1.2) with the much higher value measured by the Hubble Space Telescope (Ho = 73.8 +/- 2.4.)

However, when using a model in which dark matter can decay into dark energy, and when using redshift-space distortion (RSD) data on the fluctuations of matter density (as a function of redshift, z), the Planck estimate of the Hubble constant at z=0 becomes Ho = 68.0 +/- 2.3. This new model eases the tension between the Planck data and the Hubble Space Telescope measurement of Ho.
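As a quick back-of-the-envelope check (my own arithmetic, combining the quoted 1-sigma errors in quadrature):

```python
import math

def tension_sigma(a, sig_a, b, sig_b):
    """Discrepancy between two independent measurements, in units of
    their combined (quadrature-summed) standard error."""
    return abs(a - b) / math.sqrt(sig_a**2 + sig_b**2)

# Plain LambdaCDM Planck value vs. the HST value: ~2.4 sigma apart
print(tension_sigma(67.3, 1.2, 73.8, 2.4))
# With the interacting dark-matter/dark-energy fit: down to ~1.7 sigma
print(tension_sigma(68.0, 2.3, 73.8, 2.4))
```

So the interacting model doesn't remove the discrepancy entirely; it shifts the central value up and inflates the error bar enough that the two measurements are no longer in serious conflict.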

So, let's go into the details of the model:

(1) Dark matter can decay into dark energy (the reverse is also possible in the model)

(2) The interaction between dark matter and dark energy is labeled 'q' in their model. When 'q' is negative, dark matter decays into dark energy. When 'q' is positive, dark energy decays into dark matter. And when 'q' is zero, there is no interaction.

(3) The group binned 'q' into a constant value over each of four periods of time:

Bin#1 is 2.5 < z < primordial epoch (in other words, from the Big Bang until roughly 2.6 billion years after the Big Bang)

Bin#2 is 0.9 < z < 2.5 (in other words, from roughly 2.6 billion to roughly 6.3 billion years after the Big Bang)

Bin#3 is 0.3 < z < 0.9

Bin#4 is 0.0 < z < 0.3 (i.e. most recent history)

The best-fit values of these parameters are the following (see Table I and Fig. 1 of their paper for the actual values):

q1 = -0.1 +/- 0.4 (in other words, q1 is well within 1 sigma of zero)

q2 = -0.3 +0.25/-0.1 (in other words, q2 is only roughly 1 sigma away from zero)

q3 = -0.5 +0.3/-0.16 (in other words, q3 is roughly 2 sigma away from zero)

q4 = -0.9 +0.5/-0.3 (in other words, q4 is roughly 2 sigma away from zero)

There is a trend that q(z) becomes more negative as z gets closer to its value today of z=0.
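Here's a quick sketch (my own arithmetic, using the error bar on the side toward zero, i.e. the '+' error since the best-fit values are negative) reproducing the rough significance levels quoted above:

```python
# Best-fit q values and the error bar on the side toward zero,
# transcribed from the values quoted above.
q_bins = {
    "q1": (-0.1, 0.4),
    "q2": (-0.3, 0.25),
    "q3": (-0.5, 0.3),
    "q4": (-0.9, 0.5),
}

# Distance of each best-fit value from q = 0, in units of that error bar.
for name, (q, err_toward_zero) in q_bins.items():
    print(name, round(abs(q) / err_toward_zero, 2), "sigma")
```

These come out in the 0.25 to 1.8 sigma range, so each bin individually is only mildly significant; it's the consistent trend across bins that makes the result interesting.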

