Monday, October 11, 2010

The meaning of life...Increasing the entropy of the universe

Okay, here's my train of logic for the meaning of life. It's quite long and rambles at times, but I think that the end result is valid from the starting assumptions. I've broken it down into "Conclusions", "Assumptions", and "Line of Reasoning."
Let me know what you think.

Conclusion: Life is a means of increasing the entropy of the universe. Life is a result of the fact that the equations of dynamics are non-linear, allow for self-replicating structures, and that the starting conditions of the universe are non-equilibrium. The goal of life is to bring the universe to equilibrium at a faster rate than if the equations of dynamics did not allow for life.
Therefore, we as living beings should be trying to increase the entropy of the universe. This means converting as much exergy (such as sunlight) into low-grade energy as possible. There are other gradients of exergy that we can take advantage of as well (such as gradients in thermal energy, chemical potential and nuclear potential.) The means to do so is to store "information" (i.e. available electrical/mechanical work) so as to build devices that generate even more entropy. As biologist Stuart Kauffman stated in "Reinventing the Sacred":

Cells do some combination of mechanical, chemical, electrochemical and other work and work cycles in a web of propagating organization of processes that often link spontaneous and non-spontaneous processes…Given boundary conditions, physicists state the initial conditions, particles, and forces, and solve the equations for the subsequent dynamics—here, the motion of the piston. But in the real universe we can ask, "Where do the constraints themselves come from?" It takes real work to construct the cylinder and the piston, place one inside the other, and then inject the gas…It takes work to constrain the release of energy, which, when released, constitutes work…This is part of what cells do when they propagate organization of process. They have evolved to do work to construct constraints on the release of energy that in turn does further work, including the construction of many things such as microtubules, but also construction of more constraints…Indeed, cells build a richly interwoven web of boundary conditions that further constrains the release of energy so as to build yet more boundary conditions.

There is a balance between using and storing available work (electrical or mechanical). Unfortunately, there is no way to determine the optimal balance between storing and using work that will bring the universe to equilibrium at the fastest rate. (That is, there is no way to predict the fastest route to equilibrium, because we cannot calculate far enough into the future to determine which route is fastest.) So, how does life determine which route to take?

It uses neural nets (along with some information about past attempts) to estimate which route will bring the system to equilibrium the fastest. But there's no guarantee that it's the best route, just as there's no guarantee that a neural net's answer to the traveling salesman problem is the optimal solution.
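To make the "good answer, no guarantee of the best answer" point concrete, here is a minimal sketch (in Python; the function names and random city coordinates are mine, chosen for illustration) of the simplest heuristic for the traveling salesman problem: greedily visit the nearest unvisited city. Like the neural nets above, it always produces a valid route quickly, but nothing guarantees that route is optimal.

```python
import math
import random

def nearest_neighbor_tour(cities):
    """Greedy TSP heuristic: always visit the closest unvisited city.
    Fast, but carries no guarantee of finding the optimal tour."""
    unvisited = set(range(1, len(cities)))
    tour = [0]  # start at city 0
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    """Total length of the closed tour, returning to the start."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(30)]
tour = nearest_neighbor_tour(cities)
print(f"heuristic tour length: {tour_length(cities, tour):.3f}")
```

The heuristic's answer is usable immediately, and a better tour may well exist; the only way to be sure would be to search all routes, which is exactly the kind of look-ahead the post argues is unavailable.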

Restated: Life is a means of increasing the entropy of the universe and bringing the universe to a state of equilibrium at a faster rate than without life.

Assumptions:
1) Entropy increases due to collisions between particles because the forces of nature are not time reversible.
2) The universe started in a state of non-equilibrium.
3) The future cannot be predicted because of the extreme non-linearity of the governing equations.
4) The dynamic equations of systems are highly non-linear and allow for self-replicating structures.
5) The self-replicating attractors found in the dynamic equations have a two-fold effect: a) inability to predict the future, and b) ability to store both work and "information." (This self-replicating nature only occurs for systems far from equilibrium.)

Line of Reasoning:
Entropy is the number of microstates available to a given macrostate. Entropy defined this way is only meaningful for large numbers of particles, because as N becomes large (greater than 100,000, say), the macrostate with the most microstates ends up being essentially the only macrostate with any appreciable probability of occurring. Another way of stating this is to ask: what is the N-volume of the last thin shell of thickness dx of an N-dimensional sphere? As N grows beyond 100,000 or so, almost all of the volume of the N-D sphere is located at its edge.
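This shell-volume claim is easy to check numerically. The fraction of an N-ball's volume lying in the outer shell of relative thickness eps is 1 - (1 - eps)^N, so a quick sketch (the function name and the choice of a 0.1% shell are mine):

```python
def outer_shell_fraction(n_dim, eps):
    """Fraction of an N-dimensional ball's volume lying in the thin
    outer shell of relative thickness eps: 1 - (1 - eps)**n_dim."""
    return 1.0 - (1.0 - eps) ** n_dim

for n in (3, 100, 100_000):
    f = outer_shell_fraction(n, 0.001)
    print(f"N = {n:>7}: fraction of volume in the outer 0.1% shell = {f:.6f}")
```

For N = 3 the outer 0.1% shell holds a negligible sliver of the volume; by N = 100,000 it holds essentially all of it, which is the sense in which the most-probable macrostate becomes the only macrostate that matters.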

For the universe, the macrostate is defined by the total energy and total momentum, which are conserved over time.
Assuming that this universe started with a Big Bang (i.e. all of the energy localized in one location), then this represents a state of low entropy. Even though the temperature would have been very high...and I mean almost unimaginably high, the energy would have been confined to a small region of space. There would not have been many microstates available compared with the microstates available today.

There existed a large gradient in energy at the start of the universe, between the location of energy and the rest of the open space in the universe. Diffusion of energy from a region of high energy to low energy would have started immediately.

It can be shown that entropy is defined both for systems in equilibrium and for systems not in equilibrium. (See pg 71 eq 6.4 of Grandy's "Entropy and the Time Evolution of Macroscopic Systems.") Since entropy is defined as the number of microstates for the given macrostate with the most microstates, it is a unitless variable. (Note that you can add dimensions to entropy by multiplying by k or R. Its unitless definition is convenient because it means that it's relativistically invariant.)

The universe will always be in the given macrostate with the highest entropy because the number of microstates in the given macrostate is so large compared with neighboring macrostates.

The question is then: how does the universe evolve with time? How does it evolve into macrostates with even larger numbers of microstates? When we look around us, we see that there is always an increase in entropy, but most of us have a hard time understanding why.

At the beginning of the universe, the energy was confined to a small region and the probability of finding a "particle" in a certain region or a "field" with a given quanta of energy was larger than it is today. The probability of guessing the actual microstate of the universe near the Big Bang is a lot larger than the probability of guessing the microstate of the universe right now. This loss of information is a loss in the ability to predict the given microstate of the universe. If the number of microstates increases (i.e. entropy increases), then our ability to guess the actual microstate decreases. If we start a confined system in a given microstate, over time we lose information about the actual microstate.

For example, suppose we start a system with 1,000 particles on the left side of a box and then remove the barrier constraining the particles to that side. We lose information about the actual microstate as the particles collide. Over time, our ability to predict the given microstate decreases, but at the same time, the symmetry of the system is increasing. Over time, the system will be in the macrostate with the largest number of microstates. This turns out to be the case in which there is left/right symmetry about the half-way point in the box.
The symmetry of the universe has increased, and this is part of a general trend that "the symmetry of the effect is greater than the symmetry of the cause" (i.e. the Rosen-Curie Principle). (Note that this principle is not violated by nonlinear phenomena, such as Rayleigh-Benard convection cells...it's the total symmetry of the universe that increases because of the increased heat conduction rate.)
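The box example above can be sketched with a toy simulation. This assumes a simple random-walk model for the particle motion (the step size, particle count, and number of steps are arbitrary choices of mine, not anything from the physics): starting with every particle on the left, the left-side fraction drifts from 1 toward the symmetric value of 1/2.

```python
import random

random.seed(1)
N = 1000
# Start all particles on the left half of a unit box.
positions = [random.uniform(0.0, 0.5) for _ in range(N)]

def step(positions, dx=0.05):
    """One diffusion step: each particle takes a small random step,
    reflecting off the walls at 0 and 1."""
    out = []
    for x in positions:
        x += random.uniform(-dx, dx)
        if x < 0.0:    # reflect off the left wall
            x = -x
        elif x > 1.0:  # reflect off the right wall
            x = 2.0 - x
        out.append(x)
    return out

for t in range(2001):
    if t % 500 == 0:
        left = sum(x < 0.5 for x in positions) / N
        print(f"t = {t:>4}: fraction of particles on the left = {left:.3f}")
    positions = step(positions)
```

The exact microstate (the list of positions) becomes unpredictable almost immediately, but the macrostate settles into the left/right-symmetric one, as the post argues.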

The Rosen-Curie Principle is another way of stating the 2nd Law of Thermodynamics, i.e. the number of microstates of the macrostate with the most microstates is increasing with time. [Note that this means that time cannot be reversed. And since there is no symmetry with respect to time reversal, there is no conservation of entropy.]

Note: The idea of increasing symmetry is almost exactly opposite of what's taught to undergraduates in freshman-level physics. They are taught that an increase in entropy is equal to an increase in "disorder." This is an incorrect statement and, worse, it's unquantifiable. How do you quantify "disorder"? You can't. Instead, you can quantify the "number of microstates for the macrostate with the largest number of microstates." It's dimensionless and it's relativistically invariant. You can also quantify the number of symmetries.

So, now that we've seen that entropy increases, we can see where the universe is heading. It's heading towards a state of complete symmetry. The final resting state of the universe would be a homogeneous state of constant temperature, pressure, chemical potential and nuclear potential. Depending on the size of the universe and questions regarding inflation and proton stability, this could be a state of dispersed iron (which is the state of lowest nuclear Gibbs free energy.) The actual values of the pressure, temperature, and chemical potential depend greatly on whether the universe will continue to just expand into the vastness of space.

If it continues to expand, there may never be a final state of equilibrium, but what we can say is that it will be more symmetric than it is today.

So, going back to the question of life, we have to ask: how does life fit into the picture? Where did life come from and what's the purpose?

Here is my round-about answer to that question. I'm going to address this question by going through the different levels of complexity as one moves from systems in equilibrium to systems far-from-equilibrium. My understanding is that systems far-from-equilibrium are trying to reach equilibrium at the fastest possible rate that is allowed by the given constraints in the system.
I see the following levels of complexity:

1) Equilibrium (complete homogeneity) There is symmetry in time and symmetry in space (i.e. the pressure, temperature, electrochemical potential, etc. are constants and not varying with space or time)

2) Linear Non-equilibrium (Gradient in temperature, pressure, electrochemical potential, etc.) But the gradient is small enough that the non-linear equations become linear. This is best seen with Ohm's law, in which the directed velocity of electrons due to the gradient of electrical potential is small compared to their thermal speed.
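To put rough numbers on that claim (the wire geometry and current here are illustrative choices of mine, and the material constants are standard textbook values for copper): the drift speed of electrons carrying 1 A through a 1 mm² copper wire is roughly ten orders of magnitude below their thermal (Fermi) speed, which is why the linear approximation behind Ohm's law works so well.

```python
# Drift speed from I = n * e * A * v_drift, solved for v_drift.
e = 1.602e-19   # electron charge, C
n = 8.5e28      # conduction-electron density of copper, m^-3
A = 1.0e-6      # wire cross-section (1 mm^2), m^2
I = 1.0         # current, A

v_drift = I / (n * e * A)   # directed velocity due to the potential gradient
v_fermi = 1.57e6            # typical Fermi speed of electrons in copper, m/s

print(f"drift speed ~ {v_drift:.2e} m/s")
print(f"Fermi speed ~ {v_fermi:.2e} m/s")
print(f"ratio       ~ {v_drift / v_fermi:.1e}")
```

The directed motion is a vanishingly small perturbation on top of the random thermal motion, which is the sense in which this regime is "linear non-equilibrium."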

3) Non-linear Non-equilibrium of degree one: The non-linear equations allow for structures to appear that are time independent, or time dependent structures that are space-independent. (Such as time-independent Rayleigh-Benard convection cells.) This requires a non-linear term in the dynamical equations of the system.

4) Non-linear Non-equilibrium of degree two: The non-linear equations allow for structures to appear that are time dependent. (Such as time-dependent Rayleigh-Benard convection cells.) This requires that the system be even further from equilibrium and that there be non-linear terms.

5) Non-linear Non-equilibrium of degree three/four: Chaotic motion of the system. (This also requires a gradient in a potential, and it requires that there be a term of cubic power in the equations of motion.) (An example would be a Rayleigh-Benard cell driven with an extreme temperature gradient such that the cells fluctuate chaotically, i.e. with a broad spectrum of frequencies.) (We'll use degree three to represent structures with one positive eigenvalue and degree four to represent systems with multiple positive eigenvalues.)

6) Non-linear Non-equilibrium of degree five: Self-referential equations of motion for the system. For systems far-from-equilibrium, there is the possibility that equations can refer back to themselves. Structures form that can replicate. These structures require a source of exergy (such as a gradient in pressure, chemical potential, etc...) to replicate, but they don't immediately disappear when the source of exergy is turned off. This is due to the fact that exergy is stored within the structure itself. (By exergy, I mean the available work in moving a non-equilibrium system to equilibrium with its environment.) If there is no new source of exergy, then the structure will eventually stop moving and will eventually disappear, like a Rayleigh-Benard cell disappears after the temperature gradient is removed. At equilibrium, all such structures will disappear.

These structures are capable of storing "information" (i.e. storing gradients in exergy that can be used to generate work), which can be used to generate more "information." "Information return on information investment" Though, the final goal is not more information...the final goal is equilibrium. The "information" is used to speed up the process of reaching equilibrium.

Summary:
Equilibrium: no eigenvalues (i.e. no stable structures)
Linear Non-Equilibrium: negative eigenvalues (perturbations decay; no stable structures, such as convection cells, can form)
Non-linear Non-Equilibrium of order one and two: complex negative eigenvalues (convection cells can form)
Non-linear Non-Equilibrium of order three: one positive eigenvalue (a strange attractor has a combination of positive and negative eigenvalues) (time-varying structures can form)
Non-linear Non-Equilibrium of order four: at least two positive eigenvalues (complex, time-varying structures can form)
Non-linear Non-Equilibrium of order five: the structure is not solvable, i.e. the group describing the eigenvalues is at least as complicated as the group A5 (the first nonabelian simple group). (i.e. living structures can appear when you are this far away from equilibrium.) The structure is more complicated than the "structure" of a Rayleigh-Benard cell because it has a level of self-reference that allows for replication. The group A5 is the most basic building block for higher-order groups. One could say that life is about the building of nonabelian, simple structures that survive off a gradient in exergy in order to increase the rate at which the universe reaches equilibrium (at which point, the structures disappear.)
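The eigenvalue bookkeeping in the lower orders of this summary can be sketched numerically. This toy example (the matrices are arbitrary choices of mine) classifies a linearized 2-D system by counting eigenvalues with positive real part; it illustrates only the linear-stability part of the summary, not the group-theoretic speculation about A5.

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic polynomial
    lambda^2 - (a + d)*lambda + (a*d - b*c) = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def classify(eigs):
    """Count eigenvalues with positive real part, in the spirit of the
    'orders' above: 0 means all perturbations decay, 1+ means growing modes."""
    positives = sum(ev.real > 0 for ev in eigs)
    if positives == 0:
        return "all modes decay (no persistent structure)"
    return f"{positives} growing mode(s) (structure can grow)"

# Damped linearized system: both eigenvalues negative, perturbations die out.
print(classify(eigenvalues_2x2(-1.0, 0.5, 0.0, -2.0)))
# Saddle point: one positive eigenvalue, one growing direction.
print(classify(eigenvalues_2x2(1.0, 0.0, 0.0, -1.0)))
```

The first matrix corresponds to the linear non-equilibrium rows (everything relaxes back), the second to order three (one unstable direction along which a structure can grow).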

When the equations of motion allow for "attractors" (i.e. dissipative structures) with symmetries that form a nonabelian, simple group, the structure is capable of replicating. For some higher level of symmetry, there must be the ability for the structure to store information, and it's unclear to me right now what level of group theory is required to allow for storage of exergy. What is clear is that some structures can store "available work" for later use.

The stored "available chemical or mechanical work" is used to overcome the activation energy of chemical reactions. At any given moment in time, the entropy of the universe must increase, so the storage of "available work" must itself generate enough entropy that at no point in time does the entropy of the universe decrease. With life, we can see that at each step in converting sunlight into stored chemical energy (such as using sunlight to convert ADP to ATP, which can then be used to generate complex carbohydrates), the entropy of the universe increases. There is then a large increase in entropy when the complex carbohydrates are oxidized. In that process of oxidation, a large amount of work can be generated. It can either be used to store more "work" (such as moving against a gravitational field) or be used to move to a location with a larger gradient in chemical exergy.

There is a balance between using and storing work (electrical or mechanical). Unfortunately, there is no way to determine the optimal balance between storing and using work that will bring the universe to equilibrium at the fastest rate. This is due to the fact that there is no way to predict the fastest route to equilibrium, because we cannot calculate far enough into the future to determine which route is fastest. So, how does life determine which route to take?

Basic life forms always follow the location of the largest gradient in chemical exergy. More advanced life forms have neural nets that store information about the past to predict the future. The predictions are not always correct, but over time, the structures build larger and larger neural nets to better predict the future. Since there is no way to predict the future, there is no right answer. But it appears that the best answer is to generate the largest rate of return on work invested (and note that this doesn't always mean the fastest-replicating structure.) Over time, though, we can see that there is a general trend towards more self-reference, and larger neural networks to predict the future. This involves greater storage of exergy to unleash even more available work. But as I said before, there is no right answer. There is no optimization of the fastest route to equilibrium, so bigger, more complex structures may not necessarily be the best route to increase the entropy of the universe. Though, one clear way to increase the entropy of the universe is to develop self-replicating solar robots on other planets so as to increase the entropy of the entire universe.

Restated: life is a means of increasing the entropy of the universe and bringing it to a state of equilibrium at a faster rate than without life. Life only occurs when there is a source of exergy (such as gradients in temperature, pressure or chemical potential with respect to the environment) and when the dynamical equations allow for dissipative structures (i.e. attractors) with symmetries at least as complex as A5 (the first nonabelian, simple group.)

1. I have arrived at a similar conclusion: the 'purpose' of life is to increase entropy, and to do it as fast as possible. Activity directed toward that end is what has driven the evolution of increasing complexity, as you suggest. But, as you also note, there is no way of knowing how 'best' to go about it, so we ought to proceed with humility and respect for life.

I am not a physicist, so I have a hard time with some of the terminology in your explication of increasing complexity (for example, I don't know what 'nonabelian' means; I had to look it up, and I'm still not sure). Also, I don't think that the increase in complexity can be adequately described in terms of dynamical equations. But that certainly captures part of it.

Good stuff!

2. JC,
Thanks for the comment.
I'm still trying to work out the theory behind systems far-from-equilibrium. So, there's a reason why you are having difficulty following my logic (because it's still incomplete.)
If it helps any, the definition of non-abelian is that A*B is not equal to B*A. My guess is that the underlying symmetry operators (such as symmetry in time, space or velocity) must not commute, i.e. they must be viewed as complicated matrices whose product depends on whether you multiply A*B or B*A. In fact, there might have to be so many symmetry operators that the mathematics is able to be self-referential, as Gödel showed for certain algebras.

I've been trying my best to write these posts so that almost anybody can follow the posts, but occasionally I have trouble explaining things without technical words. This is normally when I don't really understand what's going on.
In this case, I'm trying to figure out how the laws of far-from-equilibrium thermodynamics can predict the occurrence of self-replicating structures like bacteria. We can predict the formation of non-self-replicating structures like Benard convection cells, but not self-replicating bacteria.

Hopefully, you can follow most of the other posts...this is one of the more complicated ones because it covers a topic that is still uncertain.

Let me know if you have other questions I can answer.

3. I've made some updates to this post.

I removed the section discussing Ilya Prigogine's theories of irreversibility because I'm no longer convinced that irreversibility is due to 'resonances' between 3 or more particles.

It appears to me now that the "arrow of time" arises from the lack of time symmetry in the weak nuclear force. But regardless of the source of irreversibility, I think that the arguments in this post are still valid.

i.e. the goal of life is to increase the entropy of the universe, and that the best way to increase the entropy of the universe is to obtain the largest rate of return on work invested. (where work is defined as the mechanical or electrochemical work from power plants or animals.)

4. I'm not sure I'd agree that the purpose of life is to increase entropy, but rather that the purpose of life is to increase complexity, and that increasing entropy is an inevitable side effect of this process. I agree with the rest of your reasoning up to this point though.

As an example, you would increase entropy in the universe by a much greater amount by exploding your house than you would by building an extension on it or by doing nothing. However, you would increase the complexity of your house much more by building an extension on it than you would by exploding it or by doing nothing. Clearly, both exploding and building an extension increase entropy more than doing nothing. Which option would you most enjoy? (Enjoyment being something we have evolved in order to increase our chances of survival/reproduction.)

That said, when people are angry they would rather destroy than do nothing, so perhaps there is something to this increasing entropy as purpose. In Freudian terms, two of our life instincts are Eros and Thanatos. If Eros drives us to reproduce (have sex and babies, to increase the world's complexity, and entropy as a side product), then Thanatos is what drives us to kill (and start wars, also increasing entropy even more).

1. The thing is Tom, I think Eddie would tell you that you would increase entropy by a lot more building that extension than exploding the house. Consider what you need in each case. Entropy generation would only come from the explosion in the first instance, but think of the ecosystem of work that is required to generate the materials for building the extension--not to mention supporting the labour. Each of those steps contains minutiae of other steps, all of which generate entropy, and which sum to far more than anything the explosion could achieve.

That said, I agree with the sentiment that the purpose of life is to increase complexity. It is just that the second law has chainbound us to the proposition that we can never break even, such that we accelerate the production of total entropy with increasing technological advancement. It is fighting a long defeat. Contrary to Eddie I believe that a debilitating entropic cost is already upon us: climate change. Nathan Lewis (Caltech) gets it: he is fully cognizant of the fact that so-called renewables are hardly economic (negative net return on work invested), but the effects of unprecedented CO2 levels in the atmosphere are too glaring to ignore.

2. Thanks for the comment. You're right about me not wanting to blow up the house. But Tom makes a good point about the Eros vs. Thanatos instincts in humans. The question is: Is the instinct towards death part of a larger instinct towards growing life, or does it disprove my statement that the goal of life is to grow?

If the only choice was between building an extension to a house or blowing it up, I'd choose building the extension, because then I could rent it out and possibly see a positive rate of return on work invested, which allows me to spend the energy/work someplace else.
As for your other comments, I see no evidence that the goal of life is to increase the complexity of life. Bacteria are still doing quite well, and they aren't very complex.
Instead, it appears that the goal is for life to grow. If it turns out that complex life forms can outgrow primitive life forms, well so be it, but the goal is to grow first and foremost.

As for your worries about climate change, I suggest that you read a paper by Tol (2009) in which he summarizes 14 studies of the economic impact of global warming. The data suggest initial benefits from temperature increases until about 2 deg C, and then only slight negative effects on GDP after ~2.5 deg C.
We still need to collect a lot of data on the potential economic and environmental impact of increased CO2 levels, but the initial research suggests that the effect will be slightly positive. While there will certainly be many people affected by climate change, the question is: what should be the price of CO2 in order to grow life the fastest?
Everything I've seen leads me to believe that the price of CO2 that grows life the fastest is a small value (~$10 to $30/ton CO2). If it's too large, then the negative effect of the price of CO2 emissions on the economy will outweigh any long-term benefits of the reduced CO2 emissions.

Thanks again for the comment. And let me know why you think that the purpose of life is to increase complexity. Do you have any evidence to support this?

5. Hey,
I agree with your principles here; they make sense. Except I think life is actually the lowest entropy in the universe. When you talk about increasing the order and pent-up entropy and reducing the randomness and free energy of the universe, you're discussing negative entropy. Decreased entropy is more order, and increased entropy is more randomness. Life, by definition, is a set of highly regulated, self-perpetuating, intelligent and relatively non-random processes. The process of metabolism is to create more and more complicated, structured molecules using free energy, so life must be negative entropy trying to balance all the chaotic positive entropy of the universe as a whole.