I'm shocked that we still rely on landfills for 55% of the roughly 4.4 lbs of garbage the average person in the US generates each day. Roughly 33% is recycled, and only 12% is combusted. Of that 12%, only some of the heat released during combustion is captured at the 87 waste-to-energy plants in the US, which together generate only about 2.7 gigawatts of electricity. And this is only 0.4% of the total average power generation rate in the US.
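As a quick sanity check, here's the per-person arithmetic behind these figures (the 4.4 lbs/day number and the percentage splits are the ones quoted above; the conversion to tons per year is mine):

```python
# Per-person breakdown of US municipal solid waste, using the figures
# quoted above: 4.4 lbs/person/day; 55% landfilled, 33% recycled,
# 12% combusted.
LBS_PER_DAY = 4.4
LBS_PER_TON = 2000.0

annual_tons = LBS_PER_DAY * 365 / LBS_PER_TON  # ~0.80 tons/person/year
shares = {"landfilled": 0.55, "recycled": 0.33, "combusted": 0.12}

for fate, share in shares.items():
    print(f"{fate}: {share * annual_tons:.2f} tons/person/year")
```

So each of us sends nearly half a ton of garbage to a landfill every year.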

So what's keeping us back from generating more electricity from municipal solid wastes?

We need electricity, and we will need even more of it in the future (to drive cars, to power robots, and to stream HD everything.)

There is understandably some concern with incinerating trash: hazardous byproducts can form whenever you have partial combustion of hydrocarbons with chlorine species present. The chlorine comes from PVC wastes, such as some clothing, pipes, and portable electronics.

When the waste is combusted at too low a temperature, there is a chance of forming dioxins. Dioxins can cause significant harm to the body, affecting both physical appearance and the neurological system. Dioxins can also bio-accumulate, so unfortunately we can ingest them via the food we eat.

So, how can we generate electricity from waste without generating dioxins (or other forms of air pollution, such as particulates, heavy metal vapors, NOx, and SOx)?

First, we could combust the waste and then inject the gases underground, as is proposed for coal power plants. Or we could figure out how to remove all of the pollution from the gas stream. Luckily, most of the pollutant gases are acidic, and so they can be captured by flowing the flue gas through a basic solution, such as a mixture of water and limestone/sodium hydroxide.
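As a concrete illustration of the acid-base capture described above (this is standard neutralization chemistry, not the flowsheet of any particular plant), the scrubbing reactions look like:

```latex
2\,\mathrm{HCl} + \mathrm{CaCO_3} \rightarrow \mathrm{CaCl_2} + \mathrm{H_2O} + \mathrm{CO_2}
\mathrm{SO_2} + 2\,\mathrm{NaOH} \rightarrow \mathrm{Na_2SO_3} + \mathrm{H_2O}
```

The acidic HCl and SO2 in the flue gas are neutralized by the basic limestone or sodium hydroxide and end up as salts in solution rather than going up the stack.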

There are also ways of minimizing the generation of certain pollutants. For example, if the waste is combusted in an environment of pure oxygen (rather than air), then the production of NOx can be eliminated. Combustion in a pure oxygen stream also makes it easier to reach temperatures high enough that dioxins will not form: at temperatures above 1200 °C and in the presence of significant amounts of oxygen, the production of dioxins is negligible.

Another option is to add limestone (or sodium hydroxide) directly into the combustion environment, in which case the calcium or sodium can capture the chlorine, keeping it from entering the flue gas stream in the first place.

So, why is waste incineration not taking off in the US?

Right now, I think that there is a fear of building power plants that could cause any environmental damage. It's not completely a question of economics.

For example, NYC is paying on the order of $90 per ton of waste generated in the Big Apple. At this disposal price, and at an electricity price over $0.15/kWh, any of the processes described above makes sense economically, even sequestering the flue gas miles underground.
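Here's a back-of-the-envelope sketch of that claim. The $90/ton tipping fee and $0.15/kWh price come from the paragraph above; the heating value (~10 GJ/ton) and ~20% net electrical efficiency are my own assumed ballpark figures for a mass-burn waste-to-energy plant, not numbers from any specific facility:

```python
# Rough revenue per ton of MSW at a waste-to-energy plant.
# Tipping fee and electricity price are from the text; heating value
# and efficiency are assumed ballpark values.
TIPPING_FEE = 90.0        # $/ton, what NYC pays for disposal
ELEC_PRICE = 0.15         # $/kWh
HEATING_VALUE_GJ = 10.0   # GJ/ton of MSW (assumed; varies widely with composition)
NET_EFFICIENCY = 0.20     # fraction of heat converted to electricity (assumed)
KWH_PER_GJ = 277.8        # 1 GJ = 277.8 kWh

kwh_per_ton = HEATING_VALUE_GJ * KWH_PER_GJ * NET_EFFICIENCY
revenue_per_ton = TIPPING_FEE + kwh_per_ton * ELEC_PRICE

print(f"electricity: {kwh_per_ton:.0f} kWh/ton")
print(f"gross revenue: ${revenue_per_ton:.0f}/ton")
```

Roughly $170/ton of gross revenue leaves a lot of room for capital costs and gas cleanup.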

We need to get over the reflex of fearing every new plant just because of our history of environmental damage. We have to compare the possibility of environmental damage due to incineration with the very real environmental damage due to landfilling, such as contamination of groundwater and aquifers by leakage, or the harbouring of diseases.

We need to overcome this fear really quickly because there are some major advantages to incinerating waste, such as recovering the precious metals left over in the ash. The price of precious metals is increasing, and will continue to increase as demand grows and supply shrinks (especially if China decreases its precious metal exports.)

So, I'll continue in another blog to go over many of the different routes and the estimates of the economic viability of each process.

## Saturday, October 30, 2010

## Sunday, October 24, 2010

### On Recycling

I've been a huge fan of recycling 'garbage' since I was a kid, making money recycling newspapers and aluminum cans. I still recycle cans, bottles, and newspapers today, even though I don't get paid any longer.

Where I currently live doesn't even have a collection site, so I have to take them to the local university that recycles.

From a traditional economic point of view, what I'm doing right now cannot be justified, but it goes along with my last blog regarding humans as "altruistic punishers." We don't like people who are gaming the system.

Many of us hate the idea of throwing away items that could be recycled because we think in terms of cycles (one person's waste is another person's gold.)

For living creatures, one creature's decaying carcass is another's food source.

But on the other hand, the goal is to build wealth (so as to increase the entropy of the universe, i.e. bring the universe to equilibrium at a faster rate.)

If recycling a bottle consumes more exergy (wealth) and creates more pollution than making the bottle from scratch, then there is a good reason not to recycle the bottle.

We have to use our heads and really determine when it makes sense to recycle and when it doesn't make sense.

For me, I'm more apt to recycle, if solely for the fact that while I'm recycling I'm thinking about ways to build recycling power plants for converting plastic bottles and other not-easily-recyclable products into electricity, in ways that don't do more damage to the environment than the landfill they would otherwise go into.

Some plastics (such as PVC) contain chlorine. In partially oxidizing environments, chlorine-containing carcinogens can be formed. We should be looking into ways of combusting or gasifying municipal solid waste with basic components (such as sodium hydroxide) that can grab the chlorine and keep it from getting into the gas phase. I think that companies like Wheelabrator use limestone as a base to react with the acidic chlorine species. (Though, I'm not positive about this.)

Whether the power plant uses combustion or partial oxidation (i.e. gasification), I'm in favor of expanding the use of waste-to-electricity in the US as long as there is a positive return on investment.

### The Origin of Wealth

I've been reading the book "The Origin of Wealth" by Eric D. Beinhocker.

In general, I find it to be a great read.

I'm really excited that there are economists out there who are trying to actually understand how humans build economies. The author makes a strong case that dropping assumptions of perfect rationality is a must in any economic theory.

I also like seeing that economists are now using computer games (simulations) to predict general economic trends. While the computer games must eventually be replaced with mathematical formulas, I think that the "Sugarscape" simulations do a great job of helping us see the world anew, and without that "seeing the world anew" we don't know where to start to be able to come up with equations that closely approximate human behavior.

I love his chapter on behavioral economics, in which he reminds us that Adam Smith wrote "The Theory of Moral Sentiments" before "The Wealth of Nations" and that Adam Smith did realize the complexity of human behaviors. We aren't just selfish. We are also social creatures. We sometimes are "altruistic punishers", i.e. we will go out of our way to punish those who we think are gaming the system or free-riding, even if it's not in our economic self-interest.

I also love the chapter on game theory. There's a graph on page 232 that shows which "game theory strategy" dominates versus time if the "game theory strategy" can evolve.

The strategies start simple (such as "always trust the opponent" or "always distrust the opponent"), but they evolve to higher levels (such as "start off by trusting, but if the opponent screws you, then screw them back.")

Eventually, they evolve even further, to "start off by trusting, but if the opponent screws you, then screw them back, but then watch them for 'N' moves to see if they go back to being nice."

This is how the simulations worked out; there was no design for this to happen, it just happened, provided the equations allow for differentiation, selection, and amplification. I think that we should learn from this example, but we should also remember that there is no optimal solution to the game theory problem. Ultimately, there's no way to know what the optimal strategy is because the rules of the game are continuously changing; the Earth is not stagnant. There is no right political philosophy. One philosophy may be better (on average) during certain times, but others may be better (on average) during other times throughout history. But since there's no way to test all political philosophies at the same time, there's no way to argue that one philosophy is better than another. All we can say is that right now, a particular philosophy has more adherents than another philosophy.
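The strategy evolution described above is usually studied with the iterated prisoner's dilemma. Here is a minimal sketch (my own toy version, not Beinhocker's actual simulation) pitting "always distrust" against tit-for-tat, the "start off by trusting, retaliate if screwed" strategy, using the standard Axelrod payoffs:

```python
# Minimal iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Payoffs: mutual cooperation -> 3 each; mutual defection -> 1 each;
# lone defector -> 5, the sucker -> 0.
PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
          ("C", "D"): (0, 5), ("D", "C"): (5, 0)}

def tit_for_tat(opponent_moves):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []   # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): burned once, then retaliates
print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation throughout
```

Tit-for-tat loses a head-to-head match against a pure defector, yet in a population it prospers because pairs of cooperators rack up far more points than pairs of defectors, which is exactly the differentiation-selection-amplification dynamic above.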

I think that Eric Beinhocker gets most things right, but he's a little off with his idea of "Fit Order." He states on pg 316, "All wealth is created by thermodynamically irreversible, entropy-lowering processes. The act of creating wealth is an act of creating order, but not all order is wealth creating."

He seems to miss the fact that the point of life is to increase the entropy of the universe. The meaning of "order" is confusing, and therefore I personally try to stay away from it. However, entropy is a well-defined term (both for systems in equilibrium and out of equilibrium.)

The structure we see in life comes from the structure & symmetry inside the equations of nonlinear, non-equilibrium thermodynamics.

Wealth, as I understand it, is related to the capability to do work (measured in kJ) and to the fact that life has the ability to store available work (such as chemical exergy) to overcome activation barriers that would take too long to overcome without the stored work, so that in the end, the entropy of the universe increases faster than it would have without the life structure. The structure (or fit order as Eric calls it) is the means to the end, not an end in itself.

When you define wealth in terms of exergy, you can avoid the problems in Eric Beinhocker's definition of wealth, and you can avoid this idea of fit order (neither of which is measurable.)

So, in conclusion, I'm a big fan of the book, but I have a problem with his definition of wealth as "fit order."

## Monday, October 18, 2010

### Fermion vs. Boson & Irreversibility

Ever wonder why superfluids can move without friction?

Ever wonder why electrons can move in superconductors without resistance?

How can bosons (like superfluid helium or electron pairs) seemingly avoid generating an increase in entropy?

I'm not saying that there's a violation of the 2nd Law here (it's not like entropy is decreasing.) It's more like it's staying flat rather than increasing.

Does non-equilibrium thermodynamics not apply to Bose-Einstein condensates? And if so, does this somehow imply that there would be no increase in entropy if all the particles in the universe were B-E condensate?

Irreversibility clearly happens in Fermi-Dirac "condensates" such as metals and in Boltzmann gases, so I'm not sure why B-E condensates are special.

Is it just that irreversible processes happen much more slowly? Or do irreversible processes just not happen at all?

Clearly, I don't understand something. So, if you have any answers to the question (how can a B-E condensate move without generating an increase in entropy?), please let me know in the comments.

Thanks

Eddie

(post note: check out the following post where I argue that the difference between bosons and fermions in their ability to generate entropy may be related to the fact that the weak nuclear force is not time-symmetric and that it couples to left-handed particles (or right-handed anti-particles) only. This suggests that electrons that form a pair might not interact via the weak nuclear force, and hence might not generate entropy.)

### Life inside the Sun, cont.

I've thought a little bit more about my blog on "Life inside the Sun."

While the inside of the Sun is far from equilibrium, I'm not so certain that this is enough for the possibility of life.

As mentioned in the blog "The meaning of life...", I now believe that life involves certain symmetries of nonlinear differential equations that are at least as complicated as the group A5.

The cells in our bodies rely on electrochemical reactions to convert sugars into work. This means that there is an actual gradient in the chemical potential of species like protons (or other ions that can be transported through membranes.)

Where does this physical gradient (with respect to a space dimension like x) come from?

While there's certainly a gradient in the radial direction, how could a life form convert the gradient in "nuclear potential" into work? How could it store work for later use?

I mentioned that there are catalysts inside the Sun (such as carbon), but I'm not sure that catalysts and a non-equilibrium source of exergy are enough for life to form. I think that it needs something more, and that something may be symmetries that can replicate.

I think that the key to understanding life will be to understand how symmetries of differential equations can replicate. I'm not totally sure what it even means, but as I search through the literature I will continue to write about what I find.

## Sunday, October 17, 2010

### Disproving a Maximum or Minimum Production Rate of Entropy

I recently read a paper from 1965 in which the authors proved that there is no general variational principle for maximizing or minimizing the production rate of entropy. (Gage et al. 1965, "The Non-Existence of a General Thermokinetic Variational Principle.")

I think that this is an important statement.

Around the same time in the 1960s, there were "proofs" for both maximization and minimization of the entropy production rate, dS(irr)/dt. While either of these cases can be true near equilibrium given certain constraints, these "principles" are not true in general.

So, I just want to follow up on the previous blog by stating that, while the entropy of the universe is increasing, it is not necessarily following the fastest possible route. There is no way to predict the future, so we can never know whether a local increase in the entropy production rate might ultimately cause a large slowdown in the entropy production rate globally.

What if a large explosion were set off locally? This would cause a rapid increase in the entropy production rate. However, if the explosion were due to a nuclear weapon, then it could cause a global problem for all life, which would cause the entropy production rate to decrease.

And here's the ultimate problem with "principles of max/min entropy production rate": the principle is not valid far from equilibrium under non-steady-state conditions, but that's exactly where we live. We live on a planet that is far from equilibrium (if equilibrium is taken to be the ~3 kelvin vacuum that is most of space, and could ultimately be all of space.)

So, in conclusion, be careful if you ever run into a paper that "proves" that the entropy of the universe is increasing at its maximum (or minimum) possible rate.

All we can say is that the entropy production rate on the Earth is faster than it would be if life were not here. We cannot prove that the actions we are taking this very moment are in fact causing the fastest possible rate of entropy production.

## Monday, October 11, 2010

### The meaning of life...Increasing the entropy of the universe

Okay, here's my train of logic for the meaning of life. It's quite long and rambles at times, but I think that the end result is valid from the starting assumptions. I've broken it down into "Conclusions", "Assumptions", and "Line of Reasoning."

Let me know what you think.

Conclusion: Life is a means of increasing the entropy of the universe. Life is a result of the fact that the equations of dynamics are non-linear and allow for self-replicating structures, and that the starting conditions of the universe were non-equilibrium. The goal of life is to bring the universe to equilibrium at a faster rate than if the equations of dynamics did not allow for life.

Therefore, we as living beings should be trying to increase the entropy of the universe. This means converting as much exergy (such as sunlight) into low-grade energy as possible. There are other gradients of exergy that we can take advantage of as well (such as gradients in thermal energy, chemical potential, and nuclear potential.) The means to do so is storing "information" (i.e. available electrical/mechanical work) so as to build devices that generate even more entropy, a theme biologist Stuart Kauffman explores in "Reinventing the Sacred."

There is a balance between using and storing available work (electrical or mechanical). Unfortunately, there is no way to determine the optimal balance between storing and using work that will bring the universe to equilibrium at the fastest rate (i.e. there is no way to predict the fastest route to equilibrium, because we cannot calculate far enough into the future to determine which route is fastest.) So, how does life determine which route to take?

It uses neural nets (with some information from past attempts) to estimate which route will bring the system to equilibrium the fastest. But there's no guarantee that it's the best route, just as there's no guarantee that a neural net's answer to the traveling salesman problem is the optimal solution.

Restated: Life is a means of increasing the entropy of the universe and bringing the universe to a state of equilibrium at a faster rate than without life.

Assumptions: 1) Entropy increases due to collisions between particles because the forces of nature are not time-reversible. 2) The universe started in a state of non-equilibrium. 3) The future cannot be predicted because of the extreme non-linearity of the governing equations. 4) The dynamic equations of systems are highly non-linear and allow for self-replicating structures. 5) The self-replicating attractors found in the dynamic equations have a two-fold effect: a) inability to predict the future, and b) ability to store both work and "information." (This self-replicating nature only occurs for systems far from equilibrium.)

Line of Reasoning:

Entropy is the number of microstates available to a given macrostate. Entropy defined this way is only valid for large numbers of particles, because as N becomes large (greater than ~100,000), the macrostate with the most microstates ends up being essentially the only macrostate with any appreciable probability of occurring. Another way of stating this is to ask: what fraction of the volume of an N-dimensional sphere lies within the last dx of its radius? As N becomes greater than ~100,000, almost all of the volume of the N-D sphere is located at its edge.
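This N-dimensional-sphere claim is easy to check numerically. Since the volume of an N-ball scales as R^N, the fraction of the volume lying within a thin outer shell of relative thickness eps is 1 - (1 - eps)^N, which approaches 1 as N grows:

```python
# Fraction of an N-dimensional ball's volume lying in the outer shell
# of relative thickness eps (volume scales as R**N).
def outer_shell_fraction(n, eps=0.01):
    return 1.0 - (1.0 - eps) ** n

for n in [3, 100, 10_000, 100_000]:
    print(n, outer_shell_fraction(n))
```

In 3 dimensions the outer 1% shell holds about 3% of the volume; by N = 100,000 it holds essentially all of it.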

For the universe, the macrostate is defined by the total energy and total momentum, which are conserved over time.

Assuming that this universe started with a Big Bang (i.e. all of the energy localized in one location), then this represents a state of low entropy. Even though the temperature would have been very high...and I mean almost unimaginably high, the energy would have been confined to a small region of space. There would not have been many microstates available compared with the microstates available today.

There existed a large gradient in energy at the start of the universe, between the location of energy and the rest of the open space in the universe. Diffusion of energy from a region of high energy to low energy would have started immediately.

It can be shown that entropy is defined both for systems in equilibrium and for systems not in equilibrium. (See pg 71, eq 6.4 of Grandy's "Entropy and the Time Evolution of Macroscopic Systems.") Since entropy is defined as the number of microstates of the macrostate with the most microstates, it is a unitless variable. (Note that you can add dimensions to entropy by multiplying by k or R. Its unitless definition is convenient because it means that it's relativistically invariant.)

The universe will always be in the given macrostate with the highest entropy because the number of microstates in the given macrostate is so large compared with neighboring macrostates.

The question is then: how does the universe evolve with time? How does it evolve into macrostates with even larger numbers of microstates? When we look around us, we see that there is always an increase in entropy, but most of us have a hard time understanding why.

At the beginning of the universe, the energy was confined to a small region, and the probability of finding a "particle" in a certain region, or a "field" with a given quantum of energy, was larger than it is today. The probability of guessing the actual microstate of the universe near the Big Bang is a lot larger than the probability of guessing the microstate of the universe right now. This loss of information is a loss in the ability to predict the given microstate of the universe. If the number of microstates increases (i.e. entropy increases), then our ability to guess the actual microstate decreases. If we start a confined system in a given microstate, over time we lose information about the actual microstate.

For example, suppose we start a system with 1,000 particles on the left side of a box and then remove the barrier confining the particles to that side. We lose information about the actual microstate as the particles collide. Over time, our ability to predict the microstate decreases, but at the same time, the symmetry of the system increases. Eventually the system will be in the macrostate with the largest number of microstates, which turns out to be the one with left/right symmetry about the half-way point of the box.
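The dominance of the balanced macrostate is easy to check numerically. Here is a small Python sketch (my own illustration, not part of the original argument) that counts microstates with the binomial coefficient, labeling each macrostate by the number of particles in the left half:

```python
from math import comb

N = 1000  # particles in the box

# Macrostate: n particles in the left half. Its multiplicity
# (number of microstates) is the binomial coefficient C(N, n).
multiplicity = {n: comb(N, n) for n in range(N + 1)}
total = 2 ** N  # total number of microstates across all macrostates

# The balanced (left/right symmetric) macrostate has the most microstates:
peak = max(multiplicity, key=multiplicity.get)
print(peak)  # 500

# Fraction of ALL microstates lying within 5% of the balanced split:
near_balanced = sum(multiplicity[n] for n in range(450, 551)) / total
print(near_balanced > 0.99)  # True: the near-symmetric macrostates dominate
```

Even at N = 1,000, over 99% of all microstates sit within 5% of the even split; at Avogadro-scale N the concentration is incomparably sharper.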

The symmetry of the universe has increased, and this is part of a general trend that "the symmetry of the effect is greater than the symmetry of the cause" (i.e. the Rosen-Curie Principle). (Note that this principle is not violated by nonlinear phenomena such as Rayleigh-Benard convection cells: it is the total symmetry of the universe that increases, because of the increased heat conduction rate.)

The Rosen-Curie Principle is another way of stating the 2nd Law of Thermodynamics, i.e. the number of microstates of the macrostate with the most microstates increases with time. (Note that this means time cannot be reversed. And since there is no symmetry with respect to time reversal, there is no conservation of entropy.)

Note: The idea of increasing symmetry is almost exactly opposite of what's taught to undergraduates in freshman-level physics. They are taught that an increase in entropy is equal to an increase in "disorder." This is an incorrect statement and, worse, it's unquantifiable. How do you quantify "disorder"? You can't. Instead, you can quantify the number of microstates of the macrostate with the largest number of microstates. It's dimensionless and relativistically invariant. You can also quantify the number of symmetries.

So, now that we've seen that entropy increases, we can see where the universe is heading: towards a state of complete symmetry. The final resting state of the universe would be a homogeneous state of constant temperature, pressure, chemical potential, and nuclear potential. Depending on the size of the universe and on questions regarding inflation and proton stability, this could be a state of dispersed iron (the state of lowest nuclear Gibbs free energy). The actual values of the pressure, temperature, and chemical potential depend greatly on whether the universe will continue to expand into the vastness of space.

If it continues to expand, there may never be a final state of equilibrium, but what we can say is that it will be more symmetric than it is today.

So, going back to the question of life, we have to ask: how does life fit into the picture? Where did life come from and what's the purpose?

Here is my round-about answer to that question. I'm going to address it by going through the different levels of complexity as one moves from systems in equilibrium to systems far from equilibrium. My understanding is that systems far from equilibrium are trying to reach equilibrium at the fastest possible rate allowed by the constraints on the system.

I see the following levels of complexity:

1) Equilibrium (complete homogeneity): There is symmetry in time and symmetry in space (i.e. the pressure, temperature, electrochemical potential, etc. are constant, varying with neither space nor time).

2) Linear non-equilibrium: There is a gradient in temperature, pressure, electrochemical potential, etc., but the gradient is small enough that the non-linear equations become linear. This is best seen with Ohm's law, in which the drift velocity of electrons due to the gradient in electrical potential is small compared to their thermal speed.

3) Non-linear non-equilibrium of degree one: The non-linear equations allow structures to appear that are time-independent, or time-dependent structures that are space-independent (such as time-independent Rayleigh-Benard convection cells). This requires a non-linear term in the dynamical equations of the system.

4) Non-linear non-equilibrium of degree two: The non-linear equations allow structures to appear that are time-dependent (such as time-dependent Rayleigh-Benard convection cells). This requires non-linear terms and that the system be even further from equilibrium.

5) Non-linear non-equilibrium of degree three/four: Chaotic motion of the system. (This also requires a gradient in a potential, and it requires a term of cubic power in the equations of motion.) An example would be a Rayleigh-Benard cell driven with an extreme temperature gradient such that the cells fluctuate chaotically, i.e. with a broad spectrum of frequencies. (We'll use degree three to represent systems with one positive eigenvalue and degree four to represent systems with multiple positive eigenvalues.)

6) Non-linear non-equilibrium of degree five: Self-referential equations of motion. For systems far from equilibrium, there is the possibility that the equations can refer back to themselves; the structures that form can replicate. These structures require a source of exergy (such as a gradient in pressure, chemical potential, etc.) to replicate, but they don't immediately disappear when the source of exergy is turned off, because exergy is stored within the structure itself. (By exergy, I mean the work available in moving a non-equilibrium system to equilibrium with its environment.) If there is no new source of exergy, then the structure will eventually stop moving and disappear, just as a Rayleigh-Benard cell disappears after the temperature gradient is removed. At equilibrium, all such structures will disappear.
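The progression through these levels (steady structure, then oscillating structure, then chaos) can be illustrated with a toy model. The logistic map is my stand-in here, not a system discussed above, but driving it harder produces the same qualitative stages:

```python
def attractor_size(r, settle=2000, sample=200):
    """Iterate the logistic map x -> r*x*(1-x), discard the transient,
    and count how many distinct values the long-run motion visits."""
    x = 0.5
    for _ in range(settle):      # let the transient die out
        x = r * x * (1 - x)
    values = set()
    for _ in range(sample):      # sample the long-run behavior
        x = r * x * (1 - x)
        values.add(round(x, 6))
    return len(values)

print(attractor_size(2.5))       # 1: a steady, time-independent state
print(attractor_size(3.2))       # 2: a period-2 oscillation (time-dependent structure)
print(attractor_size(3.9) > 10)  # True: chaotic, a broad spectrum of values
```

As the drive parameter r increases (the analogue of pushing a system further from equilibrium), the attractor changes from a fixed point to an oscillation to chaos.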

These structures are capable of storing "information" (i.e. storing gradients in exergy that can be used to generate work), which can be used to generate more "information": an "information return on information investment." The final goal, though, is not more information; the final goal is equilibrium. The "information" is used to speed up the process of reaching equilibrium.

Summary:

Equilibrium: no eigenvalues (i.e. no stable structures)

Linear Non-Equilibrium: negative eigenvalues (i.e. perturbations decay, so no stable structures such as convection cells form)

Non-linear Non-Equilibrium of order one and two: complex eigenvalues with negative real parts (convection cells can form)

Non-linear Non-Equilibrium of order three: one positive eigenvalue (a strange attractor has a combination of positive and negative eigenvalues); time-varying structures can form

Non-linear Non-Equilibrium of order four: at least two positive eigenvalues (complex, time-varying structures can form)

Non-linear Non-Equilibrium of order five: the structure is not solvable, i.e. the group describing the eigenvalues is at least as complicated as the group A5 (the first nonabelian simple group). Living structures can appear when you are this far from equilibrium. The structure is more complicated than the "structure" of a Rayleigh-Benard cell because it has a level of self-reference that allows for replication. The group A5 is the most basic building block for higher-order groups. One could say that life is about building nonabelian, simple structures that survive off a gradient in exergy in order to increase the rate at which the universe reaches equilibrium (at which point, the structures disappear).
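The group-theoretic claim about A5 can at least be checked directly: A5 is nonabelian and simple. The brute-force Python sketch below (my own verification, not part of the original argument) builds the 60 even permutations, computes the conjugacy class sizes, and checks that no union of classes containing the identity has a size dividing 60 other than 1 and 60, which rules out any proper nontrivial normal subgroup:

```python
from itertools import permutations, combinations

def compose(p, q):
    """Composition (p after q) of permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_even(p):
    """Even permutations have an even number of inversions."""
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2 == 0

# A5 = the 60 even permutations of 5 symbols
A5 = [p for p in permutations(range(5)) if is_even(p)]

# Nonabelian: some pair of elements fails to commute
nonabelian = any(compose(a, b) != compose(b, a) for a in A5 for b in A5)
print(nonabelian)  # True

# Conjugacy class sizes: orbits under x -> g x g^-1 for g in A5
class_sizes, seen = [], set()
for x in A5:
    if x in seen:
        continue
    orbit = {compose(compose(g, x), inverse(g)) for g in A5}
    seen |= orbit
    class_sizes.append(len(orbit))
print(sorted(class_sizes))  # [1, 12, 12, 15, 20]

# A normal subgroup is a union of conjugacy classes containing the
# identity, with total size dividing |A5| = 60. Check every union:
nontrivial = [c for c in class_sizes if c > 1]
orders = {1 + sum(c) for r in range(len(nontrivial) + 1)
          for c in combinations(nontrivial, r) if 60 % (1 + sum(c)) == 0}
print(sorted(orders))  # [1, 60] -> only trivial normal subgroups: A5 is simple
```

This is the standard counting proof of A5's simplicity, carried out by exhaustive enumeration.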

When the equations of motion allow for "attractors" (i.e. dissipative structures) with symmetries that form a nonabelian, simple group, the structure is capable of replicating. For some higher level of symmetry, there must be the ability for the structure to store information; it's unclear to me right now what level of group theory is required to allow for storage of exergy. What is clear is that some structures can store "available work" for later use.

The stored "available chemical or mechanical work" is used to overcome the activation energy of chemical reactions. At any given moment, the entropy of the universe must increase, so the storage of "available work" must itself generate enough entropy that at no point does the entropy of the universe decrease. With life, we can see that at each step in converting sunlight into stored chemical energy (such as using sunlight to convert ADP to ATP, which can then be used to generate complex carbohydrates), the entropy of the universe increases. There is then a large increase in entropy when the complex carbohydrates are oxidized. In that process of oxidation, a large amount of work can be generated, which can either be used to store more "work" (such as by moving against a gravitational field) or to move to a location with a larger gradient in chemical exergy.

There is a balance between using and storing work (electrical or mechanical). Unfortunately, there is no way to determine the optimal balance between storing and using work that will bring the universe to equilibrium at the fastest rate, because we cannot calculate far enough into the future to determine which route to equilibrium is fastest. So, how does life determine which route to take?

Basic life forms always follow the largest gradient in chemical exergy. More advanced life forms have neural nets that store information about the past in order to predict the future. The predictions are not always correct, but over time, the structures build larger and larger neural nets to better predict the future. Since there is no way to predict the future, there is no right answer. But it appears that the best answer is to generate the largest rate of return on work invested (and note that this doesn't always mean the fastest-replicating structure). Over time, though, we can see a general trend towards more self-reference and larger neural networks for predicting the future. This involves greater storage of exergy to unleash even more available work. But as I said before, there is no right answer. There is no optimization of the fastest route to equilibrium, so bigger, more complex structures may not necessarily be the best route to increasing the entropy of the universe. Though, one clear way to increase the entropy of the universe is to develop self-replicating solar robots on other planets, so as to increase the entropy of the entire universe.

Restated: life is a means of increasing the entropy of the universe and bringing it to a state of equilibrium at a faster rate than without life. Life only occurs when there is a source of exergy (such as gradients in temperature, pressure or chemical potential with respect to the environment) and when the dynamical equations allow for dissipative structures (i.e. attractors) with symmetries at least as complex as A5 (the first nonabelian, simple group.)

Let me know what you think.

Conclusion: Life is a means of increasing the entropy of the universe. Life is a result of the facts that the equations of dynamics are non-linear and allow for self-replicating structures, and that the starting conditions of the universe were non-equilibrium. The goal of life is to bring the universe to equilibrium at a faster rate than if the equations of dynamics did not allow for life.

Therefore, we as living beings should be trying to increase the entropy of the universe. This means converting as much exergy (such as sunlight) into low-grade energy as possible. There are other gradients of exergy that we can take advantage of as well (such as gradients in thermal energy, chemical potential, and nuclear potential). The means to do so is to store "information" (i.e. available electrical/mechanical work) so as to build devices that generate even more entropy. As biologist Stuart Kauffman stated in "Reinventing the Sacred":

*Cells do some combination of mechanical, chemical, electrochemical and other work and work cycles in a web of propagating organization of processes that often link spontaneous and non-spontaneous processes…Given boundary conditions, physicists state the initial conditions, particles, and forces, and solve the equations for the subsequent dynamics—here, the motion of the piston. But in the real universe we can ask, "Where do the constraints themselves come from?" It takes real work to construct the cylinder and the piston, place one inside the other, and then inject the gas…It takes work to constrain the release of energy, which, when released, constitutes work…This is part of what cells do when they propagate organization of process. They have evolved to do work to construct constraints on the release of energy that in turn does further work, including the construction of many things such as microtubules, but also construction of more constraints…Indeed, cells build a richly interwoven web of boundary conditions that further constrains the release of energy so as to build yet more boundary conditions.*

There is a balance between using and storing available work (electrical or mechanical). Unfortunately, there is no way to determine the optimal balance between storing and using work that will bring the universe to equilibrium at the fastest rate (i.e. there is no way to predict the fastest route to equilibrium, because we cannot calculate far enough into the future to determine which route is fastest). So, how does life determine which route to take?

It uses neural nets (with some information from past attempts) to estimate which route will bring the system to equilibrium the fastest. But there's no guarantee that it's the best route, just as there's no guarantee that a neural net's answer to the traveling salesman problem is the optimal solution.

Restated: Life is a means of increasing the entropy of the universe and bringing the universe to a state of equilibrium at a faster rate than without life.

Assumptions:

1) Entropy increases due to collisions between particles because the forces of nature are not time-reversible.

2) The universe started in a state of non-equilibrium.

3) The future cannot be predicted because of the extreme non-linearity of the governing equations.

4) The dynamical equations of systems are highly non-linear and allow for self-replicating structures.

5) The self-replicating attractors found in the dynamical equations have a two-fold effect: a) inability to predict the future, and b) ability to store both work and "information." (This self-replicating nature only occurs for systems far from equilibrium.)

Line of Reasoning:

Entropy is the number of microstates available to a given macrostate. Entropy defined this way is only valid for large numbers of particles, because as N becomes large (say, greater than 100,000), the macrostate with the most microstates ends up being essentially the only macrostate with any probability of occurring. Another way of seeing this is to ask: what is the volume of the outermost shell of thickness dx of an N-dimensional sphere? As N grows beyond roughly 100,000, almost all of the volume of the N-dimensional sphere is located at its edge.
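The shell claim follows directly from the scaling of volume with radius: an N-dimensional ball's volume goes as R^N, so the fraction of volume in an outer shell of relative thickness eps is 1 - (1 - eps)^N. A quick sketch (my own illustration):

```python
def outer_shell_fraction(n_dim, eps):
    """Fraction of an n_dim-dimensional ball's volume lying in the
    outer shell of relative thickness eps (volume scales as R**n_dim)."""
    return 1 - (1 - eps) ** n_dim

print(outer_shell_fraction(3, 0.01))        # ~0.03: in 3-D, a thin shell holds little volume
print(outer_shell_fraction(100_000, 0.01))  # ~1.0: essentially all the volume is at the edge
```

At N = 100,000, the interior fraction is (0.99)^100000 ≈ e^-1005, which is zero for all practical purposes.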

For the universe, the macrostate is defined by the total energy and total momentum, which are conserved over time.

