Wednesday, April 16, 2014

Cold Dark Matter is an Oxymoron

(Note that this is a continuation of a previous post in which I pointed out that Heavy Dark Matter is an Oxymoron.)

Is anybody else tired of the science media jumping on every piece of evidence for Cold Dark Matter and turning it into possible evidence for string theory, supersymmetry, and the multiverse? I wish that the science journalists at Scientific American and New Scientist thought critically about the physics news they report. How can a particle be heavier than a proton, yet have no electric charge or strong nuclear 'charge'?
Cold Dark Matter is an oxymoron because the rest mass of a particle is related to its capability to interact with other particles and/or fields (especially the Higgs field.) Heavier particles have more interactions with other particles, whereas lighter particles have fewer interactions. Mass is proportional to the number and strength of a particle's interactions with other particles. Saying the words "Cold dark matter" is like saying the words "Skinny fat people." It just doesn't make sense, because a particle can't be both heavy in mass and light in interactions.  (Note that this is also why I think that supersymmetry and any supersymmetric string theories are silly...if your theory invents new particles that are really heavy but hardly interact with anything, such as gravitinos or neutralinos, then please throw your theory away and start from scratch. You are missing the whole point: mass is related to the capability to interact.)

But let's step back for a second, and ask the question: what are the implications of GeV dark matter?

In order to have GeV dark matter, you need to explain the following:
(1) Why is there no evidence for GeV dark matter particles in any of the particle collider experiments? Why haven't we seen any of these particles when we collide matter/anti-matter pairs at TeV energies?
(2) Why doesn't the GeV dark matter just clump together at the center of galaxies?  The reason that we invented the concept of dark matter was to explain the higher-than-expected velocity of stars at the outer edge of galaxies (and the higher-than-expected velocity of entire galaxies rotating about each other.)
GeV cold dark matter would just clump together because there's nothing (except Fermi-Dirac statistics and perhaps the weak force) to keep the particles from collapsing into an extremely tight ball. The fact that the recent "evidence" for GeV dark matter comes from GeV gamma-ray emission only at the center of the galaxy is a tell-tale sign that it's not coming from dark matter, but rather from objects with extreme temperatures.
(3) According to astrophysical observations, there's no spike in the density of dark matter at the center of galaxies. Dark matter is actually quite diffuse in galaxies, and it even extends out past where there are no more stars. So why would there be a spike in the GeV emission at the center of galaxies?  (It's not due to dark matter collisions, or else the emission would be diffuse throughout the galaxy.)

Saturday, March 1, 2014

What is the cause of the Arrow of Time?

This is a dialogue between a Sophist and a Platonist. The topic of the dialogue is: What is the cause of the arrow of time?

The participants of this dialogue are: Socrates and Sean Carroll

Location:  This dialogue takes place in a coffee shop near the ocean in California

Socrates:  Sean, you seem to be saying that the laws of physics are all time reversible, but that the motion of particles can still be time asymmetric. If I understand your argument, then you are saying that we can tell past from future, at least right now, because the future will have higher entropy than the past. You seem to state that this is due to the fact that it is more probable for a system to be in a state of high entropy rather than low entropy.

Sean Carroll: That's right. You have stated my position correctly. The universe started in a state of low entropy, and gradually the entropy is increasing. The most probable state of the universe in the future is a state of higher entropy than in the past. Though, if in the future the universe reaches complete equilibrium, then we will see small fluctuations about this maximum value of entropy. Well, that is of course if there is such a thing as maximum entropy, and there is also the caveat that there might not be a 'we' to measure the entropy that far in the future.

Socrates: You are saying that time will continue to increase even after we reach equilibrium. I think that I understand your position. Let me rephrase what I think you're saying:  If the state of the universe were probabilistic, and if you were to look at the state of the universe, then most of the time it should be in a state associated with the highest entropy. Though, if the state of the universe were probabilistic, then it might be possible for the universe to be in a state far from this maximum entropy. But tell me, Sean, why is the universe in a state so very far from this maximum-entropy state?

Sean Carroll: That's because the universe started with a very-low-entropy Big Bang, and the universe is still in the process of increasing its entropy. We are headed toward a state of maximum entropy, but that is still some time in the future, and perhaps, if the universe continues to expand, it might never happen. The entropy might just continue to increase as the universe expands.

Tuesday, February 18, 2014

7 keV sterile dark matter?

It's a good day when you wake up and see the U.S. medal in your favorite Winter Olympic sport (SBX), and you see a blog post at Résonaances with a good discussion about a topic of interest: dark matter.
The Résonaances blog post discusses a manuscript by Bulbul et al. recently uploaded to arXiv about X-ray emission lines at ~3.5 keV that can't be attributed to known atomic spectra. The authors of the manuscript attribute the emission to sterile dark matter particles with a mass of ~7 keV. Though, it should be noted that there are other, less likely, explanations for the emission at 3.5 keV; the manuscript discusses some of these other possible explanations. As seen below in the graph from the Résonaances website, the emission line at 3.5 keV is consistent with other experiments, and it lies in a region of parameter space that has yet to be ruled out.

(Image from http://resonaances.blogspot.com/)
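For intuition on why a ~3.5 keV line points to a ~7 keV particle: in the usual interpretation, a sterile neutrino decays into a much lighter active neutrino plus a photon, and in a two-body decay of a particle that is essentially at rest, the photon carries away roughly half the rest energy. Here's a minimal sketch of that arithmetic (the 7 keV mass is just the assumed input):

```python
# Minimal sketch: a sterile neutrino decaying (essentially at rest) into an almost
# massless active neutrino plus a photon splits its rest energy roughly in half,
# so the photon line sits at about half the sterile-neutrino mass.
m_sterile_keV = 7.0                        # assumed sterile-neutrino rest mass
E_line_keV = m_sterile_keV / 2.0           # photon energy in the two-body decay
print(f"Expected X-ray line: ~{E_line_keV} keV")   # -> ~3.5 keV
```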


What I'd like to add to the discussion is that this value of the dark matter mass is very close to the 95% confidence windows from the computer simulations by Horiuchi et al., which were 6-10 keV in one set of data and 8-13 keV in a second set of data (shown below).

While there's still a large amount of uncertainty about what dark matter actually is, it appears that there is starting to be some convergence between experiments and computational simulations. And I hope that the recently submitted manuscript by Bulbul et al. will convince NASA to fund more research into analyzing X-rays in the ~0.5 keV to ~5 keV range as possible signals of sterile neutrinos decaying into fertile neutrinos. Of course, the terms sterile and fertile neutrino are misnomers: sterile neutrinos aren't completely sterile (w.r.t. the weak nuclear force), or else they wouldn't be able to decay to normal neutrinos, and it should be pointed out that normal neutrinos, electrons, and quarks are not always fertile (w.r.t. the weak nuclear force) because, as they zig and zag, they go between being fertile and sterile.

I also want to point out that it does seem intuitively strange that "mostly" sterile neutrinos are heavier than the "mostly" fertile, normal neutrinos. This seems to violate the trend that the fundamental particles with more mass also have more forces with which they can interact. Therefore, it's important to point out that there is still a lot of fundamental physics that we don't understand, even if it turns out that dark matter is ~7 keV sterile neutrinos.

Update: Here's a link to a paper by a separate group that also found a 3.5 keV signal in the X-ray spectra from two galaxies.

Wednesday, February 12, 2014

Evidence for Massive Neutrinos, which also Interact with the Earth

Just want to highlight the following research paper by physicists in the UK.

Massive neutrinos solve a cosmological conundrum


They estimate that the sum of the masses of neutrinos is 0.32 eV +/- 0.081 eV.
If you have access to APS journals, you can find their paper here.
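For context on the size of that number, it can be compared with the minimum total mass implied by the neutrino oscillation mass-squared splittings. A rough back-of-the-envelope sketch, using approximate splitting values and assuming (for the minimum) that the lightest neutrino is nearly massless:

```python
import math

# Rough sketch: the minimum sum of the three active-neutrino masses allowed by
# oscillation data, assuming a normal mass ordering and a nearly massless lightest state.
dm21_sq = 7.5e-5    # eV^2, approximate "solar" mass-squared splitting
dm31_sq = 2.4e-3    # eV^2, approximate "atmospheric" mass-squared splitting
m1 = 0.0
m2 = math.sqrt(dm21_sq)     # ~0.009 eV
m3 = math.sqrt(dm31_sq)     # ~0.049 eV
print(f"Minimum sum of masses ~ {m1 + m2 + m3:.3f} eV, vs. the quoted 0.32 +/- 0.081 eV")
```

So the quoted cosmological value sits well above the floor set by the oscillation experiments.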

It's unclear to me what the connection is between this group's findings and the 2-10 keV particle that seems to explain dark matter, so I welcome feedback in the comments section.

I'd also like to highlight some recent research from Japan showing that solar neutrinos interact with the Earth. In other words, as solar neutrinos pass through the Earth, they can convert from one type of neutrino into another faster than if they were travelling through a vacuum.
Pretty cool that, once again, predictions using the Standard Model were confirmed experimentally!

Wednesday, February 5, 2014

Recent Experimental Measurements of the Weak Nuclear Force: Implications for the Arrow of Time

I wanted to highlight some recent experiments conducted at the Jefferson Lab in Virginia. The group measured the interactions of electrons with quarks, and was able to measure the weak nuclear interaction between these particles with greater precision than any previous experiments. (I'll link to the journal article as soon as it is published.)

They quantified the breaking of the mirror (P) symmetry of the weak nuclear force. Though, it should be pointed out that this type of measurement is not new. It has been known for a long time that the weak nuclear force violates P symmetry, as well as T & CP symmetry.
My main point in highlighting this research is that this measurement was much more precise than previous measurements and that it is in agreement with the Standard Model of physics. (i.e. more data in favor of the Standard Model, and more data that reduces the likelihood that there is Beyond Standard Model physics at the <10 TeV scale.)
My secondary goal is to point out that the weak nuclear force is present in collisions between electrons and quarks, which means that it's present any time molecules collide with sufficient velocity. This in turn means that the weak nuclear force is most likely the cause of the arrow of time.

Notice that we never see an arrow of time when there are only bosons, or when fermions are interacting only via gravity, E&M, or the strong nuclear force.
(Try determining which way a movie is running for the following phenomena: superconductivity, superfluid helium, photons travelling in the vacuum of space, or planets orbiting a star.)
The arrow of time only exists when there are fermions interacting via the weak nuclear force.

As such, it's important for us to recognize that Boltzmann's assumption of molecular chaos is not required in order to obtain time-asymmetric equations of motion. You just need to include the weak nuclear force (which comes into play when fermions collide with sufficient energy.)

I also wanted to let readers know that I'm working on a Socratic dialogue between a defender of Boltzmann's molecular chaos assumption and a defender of the theory that the weak nuclear force is the cause of the arrow of time. I'm hoping that, after reading this dialogue, one will be able to see the problems with assuming that the reason for the arrow of time is molecular chaos (i.e. the randomization of velocities after collisions.) This assumption is quite useful for most problems of engineering interest; however, it doesn't actually teach us what the real cause of the arrow of time is. (And therefore it needs to be scrapped and replaced.)

The real cause of the (one and only) arrow of time is a time-asymmetric term that shows up in the weak nuclear force. This means that the real way to determine rate-based coefficients (such as diffusivity, thermal conductivity, and electrical conductivity) is to include the weak nuclear force in computer simulations of molecular models. Assuming molecular chaos gets us pretty close to the right answer, but it's likely that there are some cases where we can do a better job of predicting transport coefficients from first principles than by making Boltzmann's assumption of molecular chaos.
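As a point of comparison, here is what the conventional first-principles route to a rate coefficient looks like: a Green-Kubo estimate of self-diffusivity from a simulated velocity trajectory. This is a standard method sketched in Python with placeholder data; it does not include the weak-force term proposed above, it's just the kind of simulation that such a term would have to be added to.

```python
import numpy as np

# Standard Green-Kubo sketch: self-diffusivity from the velocity autocorrelation
# function, D = (1/3) * integral_0^inf <v(0) . v(t)> dt.
# The velocity array and timestep below are placeholders standing in for the
# output of an actual molecular simulation.
def self_diffusivity(velocities: np.ndarray, dt: float) -> float:
    """velocities: array of shape (n_steps, 3) with one particle's velocity samples."""
    n = len(velocities)
    max_lag = n // 2
    vacf = np.array([
        np.mean(np.sum(velocities[:n - lag] * velocities[lag:], axis=1))
        for lag in range(max_lag)
    ])                                   # velocity autocorrelation vs. lag time
    return np.trapz(vacf, dx=dt) / 3.0   # time integral, divided by 3 dimensions

# Toy usage with synthetic velocities (illustration only, not real molecular data)
rng = np.random.default_rng(0)
v = rng.normal(size=(10_000, 3))
print(self_diffusivity(v, dt=1e-3))
```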

Friday, December 13, 2013

Partial Deregulation in Mexico's Energy Sector

I want to spread awareness of some breaking news today in Mexico.
The lower House of Representatives has passed a bill that allows foreign companies to own up to 50% of energy companies in Mexico. This includes oil, natural gas and electricity companies.
The bill still needs to pass in the Upper House of Representatives.
If you haven't read the news already, check out the following story by the LA Times.

I think that this bill is a step in the right direction. Government monopolies over the energy sector are never as effective as private companies, so I'm glad to see this partial deregulation of the oil/NG/electricity sector. However, it's only a partial step, and it doesn't do what's ultimately required for real positive change.

(To my knowledge) The bill doesn't give landowners back their mineral rights.
This has been one of the major problems in Mexico. The mineral rights are owned (and still will be owned) by the government.

The Current Law: Per the Federal Mexican Constitution, the Federal Mexican Government owns and holds all the mineral and petroleum resources located under the surface of the ground (In other words, the owner of land in Mexico only owns the surface thereof and any non-restricted treasure therein).

So, until the Mexican government gives the mineral rights back to the landowners, I'm slightly skeptical that we'll see huge increases in production of oil&gas in Mexico.
The new law is a good first step, but it's only the beginning towards a free market.



(Side note:  I'm completely in favor of capping CO2 emissions from combusting fossil fuels. The reason I want to see free energy markets is that I think that we'll be drilling for oil&gas long after we stop emitting CO2 into the atmosphere because we still need oil&gas for making plastics. Also, we can capture and store any CO2 generated at power plants. So, you can be pro-oil&gas development and pro-capping-CO2 emissions. The two are not mutually exclusive.)

Sunday, November 24, 2013

Energy Currency vs. Bitcoin Currency vs. Fiat Currency vs. Gold Currency vs. Google Currency

Currency is a means to an end. The end is growth, happiness, knowledge, complexity, diversity, etc...
Currency is a means to those ends because it allows people to collectively trade goods, i.e. through the use of currency I don't need to trade my engineering services directly with farmers in order to eat breakfast, lunch and dinner every day. I can trade my engineering services with companies, who pay me in $dollars, and somewhere down the line, farmers trade their food for $dollars.
This is the most important purpose of a currency: a respected Medium of Exchange.

The other major purpose of a currency is not so much a function of the currency as it is a function of the investor. The question is: can the currency be invested into companies and/or banks so that one's investment, in terms of real goods, increases with time? In other words, the money invested into companies and banks should not have rapid fluctuations and it should increase with time, i.e. the currency should be a stable Storage of Wealth. This means that all sorts of growing companies need to accept payment in the currency. This also means that the currency must be safe from theft and that you can purchase stocks/bonds/homes using the currency with near-zero transaction fees.

As of the end of 2013, Bitcoin appears to satisfy only one of the two purposes of a currency: Medium of Exchange. (For more details on Bitcoin, check out the following YouTube video. It's the best summary of Bitcoin I've seen so far.) As far as a stable Storage of Wealth goes, Bitcoin has failed miserably...due to theft, price fluctuations, and arbitrary rules for increasing the number of Bitcoins in circulation. But this is not a fatal problem, as long as you realize that you shouldn't be holding onto Bitcoins, but rather should be investing your savings into projects with real, positive rates of return on investment, such as stocks and bonds. Where are the stable and growing Bitcoin-friendly banks, stocks, and bonds?

I think that there are some novel aspects to Bitcoin as a currency, such as the innovative way of having all of the currency transactions recorded by the public and without a central organizing agency. I would like to see a currency like Bitcoin take off and become a global medium of exchange with near-zero friction (i.e. with near zero transaction fees.)

However, people who purchase Bitcoins should realize that there are some underlying problems with Bitcoin:
(1) There is currently an arbitrary limit to the number of Bitcoins; this limit is expected to be around 21 million Bitcoins. (See the graph below, and the sketch after this list.) The problem is that it's not clear what the incentive will be to secure the transactions (i.e. to mine Bitcoins) once there are no more Bitcoins available to be generated.


(2) To continue along point 1, the amount of currency should track the growth in the capability to generate useful work. New currency should be generated when self-replicating power plants are built, not due to some arbitrary limit and not when gold/silver are mined out of the ground. I'd like to see an alternative to Bitcoin in which new currency is generated only when new power plants are built and only when people democratically vote to allow more currency (i.e. not just when Ben Bernanke says so.)

(3) Right now, new Bitcoins are instead generated as blocks of transactions are processed (i.e. mined), but there is no connection between the volume of transactions and the growth rate in useful work.
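As a side note on where the ~21 million figure in point (1) comes from: the block reward started at 50 BTC and halves every 210,000 blocks, so the eventual total is just a geometric series. A quick sketch (the actual protocol works in integer satoshis, so the true cap is a hair under 21 million):

```python
# Sketch of the ~21 million cap: the block subsidy started at 50 BTC and halves
# every 210,000 blocks, so total issuance is (approximately) a geometric series.
subsidy_btc = 50.0
blocks_per_halving = 210_000
total_btc = 0.0
while subsidy_btc >= 1e-8:          # 1 satoshi; smaller rewards round to zero
    total_btc += subsidy_btc * blocks_per_halving
    subsidy_btc /= 2.0
print(f"Approximate supply cap: {total_btc:,.0f} BTC")   # ~21,000,000
```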

Wednesday, October 23, 2013

Highlights from the Last Few Weeks in Particle and Astro Physics News

It's been a roller-coaster month for many scientists. In the US, there was a government shutdown, and in the wider community there have been a number of news articles on some interesting, but inconclusive, experimental findings. The goal of this post is to highlight the findings and give people links to the articles by Scientific American and New Scientist.

So, a list of some recent experimental findings:

(1) Dark Matter particles likely have rest mass between 8 keV and 14 keV
Horiuchi et al. recently published a paper comparing experimental measurements with dark matter theory; it suggests that the rest mass of dark matter particles is somewhere between 8 keV and 14 keV. Only in this range of rest mass values can the experimentally measured subhalo counts be reproduced. (See Figure 2 and Table II from their paper below.) This appears to be strong experimental evidence against dark matter with rest mass values in the MeV or GeV range. I look forward to seeing more data collection and analysis along these lines.



Monday, October 14, 2013

The Hype this week from the National Ignition Facility

A number of blogs have been critical of this week's hype on BBC News claiming that nuclear fusion researchers at the National Ignition Facility reached a "milestone."

The NIF is an exciting research facility because it allows us to understand nature better and because it helps us understand the D-T fusion process used for military applications. However, I'm afraid that what we've learned this week is that the science media will (once again) go after whatever can generate hype.

So, let's be clear with what happened at NIF last month.

192 lasers generated photons that had 1.8 MJ of energy. (It should be pointed out that the NIF site consumed well more than 1.8 MJ of electricity to create the 1.8 MJ of photons. If this had been a semi-continuous event, the lasers would likely need at least 5 MJ to create the 1.8 MJ in photons.)

Of the 1.8 MJ in the photons, less than 14 kJ of energy reached the inside of the target as high-energy X-rays, and 14 kJ of neutrons were generated from the reaction. Assuming that the neutrons are used to run a Rankine-cycle power plant (the same as for nuclear fission), roughly 5 kJ of electricity could be generated from the neutrons.

This means that the site spent ~5 MJ of electricity to perhaps obtain 5 kJ of electricity.

This means that NIF is three orders of magnitude away from "breakeven," and four orders of magnitude away from being "thermodynamically viable." This is far from a "milestone," and it is far from what has already been achieved by the magnetically-confined fusion plasmas at JET.
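To make those orders of magnitude explicit, here is the arithmetic as a quick sketch. The 5 MJ wall-plug number is my rough estimate from above (not an official figure), and the ~1/3 conversion efficiency is a typical Rankine-cycle assumption:

```python
# Rough energy accounting for the reported NIF shot, using the estimates above.
electricity_in_J  = 5e6     # assumed wall-plug electricity needed to fire the lasers
laser_energy_J    = 1.8e6   # photon energy delivered by the 192 beams
neutron_energy_J  = 14e3    # fusion neutron energy produced
electricity_out_J = neutron_energy_J / 3.0   # assumed ~1/3 Rankine-cycle efficiency

print(f"Laser energy in / fusion energy out: ~{laser_energy_J / neutron_energy_J:.0f}x")
print(f"Electricity in / potential electricity out: ~{electricity_in_J / electricity_out_J:.0f}x")
# -> ~129x and ~1071x: roughly three orders of magnitude short of electricity breakeven.
```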

I think that it's silly that NIF is trying to sell itself as an energy source of the future.

With that having been said, I want to point out that the idea of nuclear fusion is not a complete pipe dream. There is a possibly viable route to electricity production via magnetically-confined fusion plasmas, such as the still-being-built ITER experiment in Cadarache, France.

While this experiment is really expensive and there's still a chance that there's another plasma instability that will keep the system from reaching the real "breakeven" milestone (i.e. generating more potential electricity from the neutrons than the electricity consumed to heat the plasma), I am proud that this facility is getting funding from world governments, including the US. The research at ITER is ground-breaking, and magnetically-confined fusion plasma is a potential energy source for the future if we can figure out how to control a few more of the instabilities that have appeared over the last ~60 years of research in this field.

I'd like to end this post by detailing some of the main engineering breakthroughs required before magnetically-confined fusion plasma can become "engineering" viable.

List of engineering breakthroughs required for magnetically-confined fusion plasma
(Also see slide 4 of the following presentation. The required engineering 'feats' or breakthroughs are well known. The required feats are all likely achievable...just really damn hard and require lots of upfront capital to do the research.)
(1) Controlling any instabilities that occur through alpha-heating  (i.e. there are likely to be instabilities due to the fact that the alpha particles emerge with energies on the order of 4 MeV, but the core temperature of the plasma may only be 100's of keV.) The ability to control potential instabilities in nuclear-fusion-powered plasmas will be tested at ITER.
(2) Not steady-state: Tokamak plasmas have a toroidal electric field that must be applied by a time-varying magnetic field. This means that the process is inherently not steady-state, because you eventually need to change the direction of the electric field as you reach the maximum magnetic field that can be generated. This means that the plasma needs to be turned off (likely on a weekly/monthly basis), and then the current needs to be restarted in the opposite direction. An engineering 'feat' is required here to design a system that doesn't break during these scheduled start-ups/shut-downs (or a breakthrough is required in steady-state plasmas) and that isn't cost prohibitive. So far, steady-state stellarator designs have been cost-prohibitive.
(3) Wall materials that can withstand a high flux of ions, electrons, photons, and neutrons still need to be tested and proven to work. Also, the process for generating tritium from lithium needs to be demonstrated on a continuous basis.  (Note: there are plans to do this testing. I'm just pointing out that it has yet to be demonstrated.)
(4) There are also a number of challenges associated with making cheap, super-conducting, high field magnets, with fueling the plasma, with removing heat, and with designing wall materials to withstand instabilities that release large amounts of energy to the wall while not releasing material from the wall that can end up cooling off the core of the plasma.

My overall conclusion (i.e. educated guess) is that magnetically-confined fusion plasma may be engineering-feasible sometime in the next 50 years, but it may not be economically competitive in the next 100 years. There's just too much uncertainty to know whether magnetically-confined fusion will ever be economically viable against other sources of energy.

What can be stated with 99% certainty is that inertially-confined, laser-driven fusion is nowhere close to being engineering-viable or economically viable. As a taxpayer in the US, I'd like to be able to vote for where my taxes go. I would be willing to vote for magnetically-confined fusion plasma research, but I would not vote for my tax dollars to go to inertially-confined fusion research.

Sunday, October 6, 2013

A summary of why we need to globally reduce the emission of carbon dioxide into the atmosphere

What do coral reefs off the coast of Australia, computer chip factories in Thailand, ski & snowboarding resorts on the US east coast, and islands in the South Pacific all have in common? The answer is that all of these places are already feeling the negative impact of human-induced increases in the concentration of CO2 in the atmosphere.
The goal of this post is to explain the science behind the effects of higher CO2 levels in the atmosphere, such as global warming, ocean acidification, and sea level rise. My hope is to explain the effects of higher CO2 concentrations in a somewhat less technical manner than the recent IPCC publication. There's nothing wrong with how the IPCC presents this information; it's just that I think it's helpful for the information to be presented through the eyes of somebody who has no connection to the people who wrote the report or the papers cited in it.
Unfortunately, the topic of CO2 emissions has become so politicized that the actual facts are easily swept under the rug of political ideology. Part of the problem is that environmental groups rarely discuss the actual science (and are quick to bash people who aren't alarmists), and the other part of the problem is that there are people who refuse to accept that humans can affect the global climate, the ocean pH, or the sea level.  I consider myself a fairly moderate person, and my goal here is to tell it as it is, regardless of how difficult it may or may not be to solve the problem of preventing major changes to Earth's climate, to Earth's average sea/ocean level, and to the average pH of Earth's seas and oceans.
So, before I get into the science, I'd like to state simply what the actual problem is that we face:
The problem:  Our global society is on pace to cause the temperature in the Arctic and Antarctic to rise to the point at which we will likely see at least a 3 meter increase in sea levels. In addition, the higher concentration of CO2 in the atmosphere will cause lower pH levels in the ocean, which is harmful to major shell-forming species, such as the corals that build reefs. These are the straightforward and indisputable effects of higher concentrations of CO2 in the atmosphere. There are also a number of other effects, of varying levels of certainty.

The Solution: The only realistic way to prevent major climate change, sea level change, and pH change is to globally limit the emission of CO2 into the atmosphere. We can't "geo-engineer" our way out of this problem by throwing particulates into the atmosphere to scatter sunlight before it hits the surface, because this "solution" doesn't change the fact that the pH of the ocean will continue to decrease if we continue to emit large amounts of CO2 into the atmosphere.

Sunday, September 29, 2013

Thoughts on "The Road to Reality"

It's been nearly a decade since Roger Penrose wrote "The Road to Reality." This weekend, I finally finished the book. (I had read individual chapters here and there, but I finally found the time to sit down and read the whole book.) The reason I finally forced myself to read the whole book is that I wanted to see how many of his speculations from 2004 are still valid today. Also, Roger Penrose has some very interesting ways of describing mathematical theories; he recognizes the ad hoc and incomplete nature of the current "Standard Model," and he doesn't shy away from stating his negative opinions about supersymmetry and string theory.

The book is a breathtaking overview of fundamental physics and geometry from the perspective of a Platonist. What's refreshing about the book is that it's a history of physics and mathematics from the viewpoint of a Platonist (i.e. somebody who believes that mathematics...and perhaps beauty and morality...are eternal, unchanging, and exist external to the material and mental worlds.)


Three Worlds, Three Mysteries (pp. 20 & 1029 of "The Road to Reality")


What makes the book so refreshing to read is that, in the decade since it was published, the "physics media" (i.e. Sean Carroll, Lawrence Krauss, Brian Greene, Martin Rees, Leonard Susskind, and others) have attempted to dismantle neo-Platonism and a belief in an unchanging, external world of absolutes. Post-modernism infected most of the social sciences in the 50s-70s, but physics and mathematics were still holding strong against post-modernism and relativism until the 2000s, at which point the "physics media" began hyping string theory, supersymmetry, multiverses, universes from nothing, randomness, inflation, time-symmetric laws of physics, and quantum randomness. Luckily, as "natural" string theories and supersymmetries have faced a timely demise due to falsification by high-energy particle collider experiments, it's easier to see that the emperor has no clothes.

Sunday, September 22, 2013

Road map for the Libertarian Party in the US

I'm writing this post because I'm in a state of disbelief over recent political changes taking place in Australia. What I mean by recent political changes is that the new prime minister of Australia has removed the science and environment ministers from his cabinet and has ended funding for an apolitical climate change working group. While I am a firm believer in limited government, I find the recent moves in Australia (along with some of the anti-science rhetoric in the tea party in the US) to be counter-productive to the cause of freedom. The goal of this post is to explain my beliefs on limited government (i.e. what should and should not be funded by governments) and to state my hope for the future of the Libertarian Party in the US.

My belief is that governments should fund public goods and should refrain from funding non-public goods. The strict economic definition of a public good is a good that is "non-excludable" and "non-rivalrous." The classic example of a public good is the military. National defense is "non-excludable" because there is no way to limit the benefits of a strong national defense only to those people who pay for the service. National defense is also "non-rivalrous" because it does not get consumed (in the way that hamburgers are consumed.)

Another example of a public good is basic scientific knowledge.  Basic scientific knowledge can't be consumed and is not "less true" because somebody else learns it. It is also non-excludable because the knowledge can be transmitted on the internet at near-zero cost to anybody who is interested. It is virtually impossible for the scientists doing the research to keep the knowledge secret because, once they share the information, it can easily be put onto the internet and will spread like wildfire. (Though, it should be pointed out that many forms of applied knowledge are not public goods. For example, knowledge of the amount of oil&gas in the ground at a specific location can be "consumed" and can become "less true" when somebody else learns it, because this knowledge is not constant with time.)

Saturday, September 21, 2013

Experimental updates on the Weak Nuclear Force (and predictions for time irreversible dynamics)

This week, the Q-weak experiment at the Jefferson Lab published some initial results from an experiment in which they scattered electrons off of protons in the form of liquid hydrogen. The experiment involved sending in electrons of one spin, measuring the scattering angles, and then sending in electrons of the opposite spin to measure the difference in the scattering angles due to the different spin of the electrons. The weak nuclear force causes differences in the scattering of electrons off of protons depending on the spin of the electrons because the weak nuclear force is parity asymmetric. From this difference in scattering, the researchers were able to measure the weak nuclear coupling constant for electrons and protons. This was the first time that researchers have isolated the weak charge of the proton at low collision energies. The value of this coupling constant is in good agreement with the Standard Model of physics. The figure below (Figure 2 from their paper) shows the asymmetry of scattering (due to the weak nuclear force) as a function of the scattering energy squared. Notice in this figure that the value of the asymmetry at zero energy (i.e. near room-temperature energies) is not zero. This means that the weak nuclear force has a non-zero effect at room temperature for electron-proton scattering.


Sunday, August 11, 2013

Is There a Correlation between Real Growth Rates and Inflation? Yes & No

Is there a correlation between real growth rates and inflation rates?
From the data analyzed and presented below, there is virtually no correlation between quarterly real growth rates and quarterly inflation rates when the inflation rate is between -5%/yr and +10%/yr. However, there is a strong negative correlation between decade-average real growth rates and the standard deviation of inflation over that decade. In other words, what matters for growth is not the actual inflation rate (provided that it is low). What matters is having a low standard deviation in the monthly inflation rate for an entire decade (or more). The goal of this post is to show how these results were calculated.

Consumer Price Index
I've analyzed the monthly inflation in the consumer price index from 1913 to 2012 and graphed the inflation rate for each month. There are 1200 data points in the graph below. In addition to the actual data, I've plotted black lines that represent the average inflation rate over each decade...the average monthly inflation rate in units of [per month]. The grey lines represent the standard deviation about the average inflation rate. The brown lines represent the average yearly inflation rate over a given decade, such as 1913-1922, ..., 2003-2012. Roughly (not exactly), the brown lines are 12 times larger in value than the black lines.
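For anyone who wants to reproduce the decade statistics behind those lines, the calculation is simple. Here's a minimal sketch, with a synthetic series standing in for the actual 1913-2012 monthly inflation data (the function name and the toy data are mine, not tied to any particular source):

```python
import numpy as np

def decade_stats(monthly_inflation: np.ndarray, months_per_block: int = 120) -> None:
    """Print the mean and standard deviation of monthly inflation for each decade-long block."""
    n_blocks = len(monthly_inflation) // months_per_block
    for i in range(n_blocks):
        block = monthly_inflation[i * months_per_block:(i + 1) * months_per_block]
        yearly_avg = (1 + block.mean()) ** 12 - 1       # rough per-month -> per-year conversion
        print(f"decade {i + 1}: mean {block.mean():+.3%}/month "
              f"(~{yearly_avg:+.1%}/yr), std dev {block.std():.3%}/month")

# Toy usage: 1200 synthetic monthly inflation values in place of the real CPI-derived series
rng = np.random.default_rng(1)
decade_stats(rng.normal(loc=0.0025, scale=0.005, size=1200))
```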


Saturday, July 27, 2013

LLNL Energy Flow Charts: Energy Services vs. Useful Work

Each year the Lawrence Livermore National Laboratory publishes data on the flow of energy through the economy. Below is the latest graph. In general, it's a pretty useful graph and it conveys a lot of information. However, the goal of this post is to highlight the underlying problem with LLNL's concept of "Energy Services."

https://www.llnl.gov/news/newsreleases/2013/Jul/images/28228_flowcharthighres.png

Sunday, July 21, 2013

Jigsaw pieces are falling into place: Neutrino Minimal Standard Model

It seems that the jigsaw pieces are really starting to fall into place, as far as proving the Neutrino Minimal Standard Model and as far as disproving supersymmetry and string theory.

For example, some data presented at the EPS-HEP Conference in Stockholm this week is lending more evidence towards the Neutrino Minimal Standard Model and against supersymmetry (and hence against string theory as well.)

Discovery at LHC leaves less room for new particles by Symmetry Magazine
(This article discusses how the Standard Model passed an experimental test with flying colors. For more information, you can go directly to the LHCb website. There are other results on the b→sγ transition on their website, which they recently presented at the same conference; these haven't received as much media attention, but they could potentially be extremely important in constraining modifications to the Standard Model.)

T2K experiment catches neutrinos in the act by Symmetry Magazine
(This article discusses the publication of an improved data set showing muon neutrinos converting into electron neutrinos. This data now completes the experiments required to calculate the values inside the PMNS matrix...i.e. the neutrino mixing matrix.)

Why is neutrino mixing important?  I'll answer this question throughout the rest of this post.

Tuesday, July 16, 2013

Can you build a self-replicating coal power plant that doesn't emit CO2?

The question posed in the title of this post is as yet unanswered. We still do not know whether we can build a fossil fuel power plant whose emission of greenhouse gases is zero across its entire life cycle (i.e. construction, operation, and deconstruction.)
But with that said, we still don't know if we can build a solar PV panel or a wind turbine with zero life cycle greenhouse gas emissions. Nobody has yet proven that a solar PV panel or a wind turbine can self-replicate without relying on the existing fossil fuel based economy. (For example, today's solar PV technologies and wind turbines rely on relatively cheap gasoline and fossil fuel based electricity during the construction phase.)
This is a problem because we need to limit the concentration of carbon dioxide in the atmosphere. (While there is still uncertainty about what the maximum allowable concentration should be, the likely maximum is probably somewhere in the range of 400-800 ppm.) Given that we are currently increasing the concentration of CO2 at a yearly rate of ~3 ppm, and given that we have yet to demonstrate a single 'modern' technology with zero lifecycle emissions of greenhouse gases, we need to get our act together and start demonstrating technologies that can self-replicate without relying on an economy that emits greenhouse gases. This means that solar PV projects can't count as truly self-replicating and truly zero-GHG-emitting until the electricity from the solar panels, and only from the solar panels, is used to power and to build the factory that makes the solar panels.
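To put rough numbers on the headroom implied by those figures: with the concentration around 400 ppm (an approximate 2013 value) and rising ~3 ppm per year, even the upper end of the 400-800 ppm range quoted above is only on the order of a century away. A quick sketch, with a few illustrative ceilings picked from within that range:

```python
# Rough headroom estimate: years until a given CO2 ceiling is reached at the current rate.
current_ppm = 400.0       # approximate atmospheric CO2 concentration, circa 2013
rate_ppm_per_yr = 3.0     # approximate rate of increase quoted above

for ceiling_ppm in (450, 600, 800):               # illustrative ceilings within 400-800 ppm
    years = max(ceiling_ppm - current_ppm, 0.0) / rate_ppm_per_yr
    print(f"{ceiling_ppm} ppm reached in ~{years:.0f} years at the current rate")
```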

So, let me now get to the main point of this post: I think there are ways to design self-replicating coal power plants that don't emit greenhouse gases and that have positive growth rates. While there are a lot of people researching carbon capture and sequestration at coal power plants, most people in this field stop at the power plant and don't address the question of how to limit CO2 emissions from the factories and steel mills that make the products used to construct the power plant. In other words, who cares if you can design a coal power plant that doesn't emit CO2 during operation if there are significant CO2 emissions during the construction and deconstruction phases of the lifecycle? You can't determine whether the system can self-replicate without CO2 emissions until you design and model the whole system (not just the power plant). This means including the cars that the workers drive in your model of the system.

Therefore, we must start thinking about designing self-replicating power plant cycles that don't emit CO2. As such, I think that there is one type of coal power plant that is more likely to be able to self-replicate without CO2 emissions than the other types. This type of coal power plant goes by the acronym IGCC-CCS, which stands for Integrated Gasification Combined Cycle with Carbon dioxide Capture and Sequestration. (Note: Sequestration here refers to storing CO2 underground. It should also be noted that there are a number of natural CO2 reservoirs in the U.S., and there are plenty of locations to safely store CO2 in the U.S.)

Sunday, July 14, 2013

Is Dark Matter 2 keV Sterile Neutrinos?

In various previous posts, I've been summarizing the recent evidence for dark matter particles in the keV range. Last month, there was a conference in Paris (Chalonge Meudon Workshop 2013) devoted specifically to the question of whether "warm" dark matter could explain all of the current data collected by astronomers on the distribution of dark matter in the universe. The presentations from the conference can be found online here.

Here's my summary of the presentations from the conference:
(1) The evidence for dark matter particles with a rest mass of ~2 keV is getting stronger by the day. The leading candidate particle matching this rest mass would be a right-handed (sterile) neutrino. However, there are still many unanswered questions, such as how a right-handed, sterile neutrino could have a rest mass heavier than the left-handed neutrinos, and why there are so many sterile neutrinos.

Sunday, July 7, 2013

Socrates vs. JayZ: Are we awake or just sleeping?


JayZ's Answer:  Hustlers, we don't sleep, we rest one eye up

Socrates's Answer:  I, Socrates, don't have the answer, but I will gladly help you ask the question and help you determine whether your answers make sense logically. But when we ask the question, we should make sure to use a language of absolutes (such as mathematics) rather than a natural language (like English, in which it is easy for Sophists to confuse us.)

I've been re-reading the Theaetetus by Plato (one of my favorite Socratic dialogues). In this dialogue, Socrates is helping a young geometry student (Theaetetus) answer the question "What is knowledge?" and is helping him see the problem with his first answer "Knowledge is perception." To help see the problem with this answer, Socrates asks a series of questions, one of which is the question of how we know we are awake or dreaming: 

Soc. But then, Theaetetus, how can any one contend that knowledge is perception, or that to every man what appears is? 

Friday, June 14, 2013

The Wealth of Nations 2013

Yes, it's that time of year again. It's summer time, and it's also the time of year when BP releases its updated data on world production and consumption of coal, oil, gas, and electricity. There are a few surprises this year...well, surprises to me at least. But in general the overall trends are the same:  Brazil, India, and China are growing while everybody else is stagnant. I'll start with the surprises, and then I'll present some graphs using data from the 2013 BP Statistical Review of World Energy workbook.


(1) The US economy (as measured in [TW-hrs] of electrical and mechanical work produced) shrank slightly in 2012 compared with 2011. This means that we are not putting the work we generate to good use. Countries like Brazil, India, and China are putting the work they generate to good use; the US (like a lot of other countries) is getting ~0% return on work invested. The reason this was a surprise to me is that I was projecting the recent feeling of growth (since Nov 2012) backwards, and forgetting that in 2012 the US was focused on the election rather than on growing its economy.
(2) Japan's economy grew by 3%. This was a surprise to me because they shut down all of their nuclear power plants; I was expecting to see their economy shrink significantly in size. So you can understand why I was shocked to see Japan's rate of growth exceed the US's rate of growth. (To put it sarcastically, an election in the US can cause more economic damage than a tsunami that causes a country to shut down all of its nuclear power plants.)
(3) China's economy continues to catch up with the U.S. economy. China's rate of growth was 5%/yr compared with -1%/yr for the US. If China maintains a rate of growth that is 6%/yr larger than the U.S.'s, then in roughly a year and a half China will be able to generate as much electrical and mechanical work as the U.S. (see the short calculation after this list). This means that when I do this calculation again using the 2015 BP Statistical Review of World Energy, there is a good chance that China will have the world's largest economy in terms of economic output as measured in [TW-hrs].
(4) The purchasing-power-parity GDP (i.e. PPP GDP) is a pretty good reflection of the wealth of a country, i.e. its capability to do mechanical and electrical work. However, the calculation of GDP appears to be biased against a few countries, especially Canada and Russia. I can understand why it would be biased against Russia (i.e. black markets and collective farming), but I have no clue why the IMF and other world organizations consistently underestimate the size of Canada's economy.  (Any ideas?)
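The catch-up time in point (3) follows from simple compound growth: if the US currently produces some ratio more useful work than China and China grows 6%/yr faster, the time to parity is log(ratio)/log(1.06). Here's a sketch, with the starting ratio as an illustrative placeholder consistent with the "roughly a year and a half" estimate (the actual ratio comes from the BP workbook and isn't reproduced here):

```python
import math

# Catch-up time under compound growth.  The starting ratio is an illustrative
# placeholder, not the actual figure from the BP Statistical Review workbook.
us_over_china_ratio = 1.09    # assumed: US currently produces ~9% more work [TW-hrs]
growth_advantage = 0.06       # China's growth rate minus the US's, per year

years_to_parity = math.log(us_over_china_ratio) / math.log(1.0 + growth_advantage)
print(f"~{years_to_parity:.1f} years to parity with these inputs")   # ~1.5 years
```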

So, now I'm going to present the data in graphical form. If you are interested in seeing these graphs for prior years (as well as a discussion of the methodology), check out the graphs from previous posts in 2011 and in 2012 on the Wealth of Nations.