We stand at a precipice of human civilization. The gap between our scientific knowledge and our religions is enormous.

On the one hand, we live in a world of monotheism (dominant in the 'West') and pantheism (dominant in the 'East').

On the other hand, interspersed throughout the world is a growing body of scientifically literate people who peer into the depths of the unknown, struggling to reconcile the knowledge of physics, chemistry, biology and sociology with the underlying traditions of the culture that they came from.

Their knowledge of physics, chemistry, biology and sociology gives no meaning to their existence, and at times suggests a meaning opposite to what their society and their bodies are telling them to do.

We need to create a new religion that can help us cultivate consciousness during the prime of life, and help us release that consciousness at the end of life. The tenets of this religion cannot be at odds with the growing body of scientific knowledge, and cannot be at odds with new knowledge that we have yet to invent/discover.

The underlying symbols of the old traditions may help in this goal, but some of them may hinder the goal mentioned above (cultivate ego, and then dissolve ego). We will have to pick and choose wisely. But decide we must because we need to address the widening gulf between our religion and our science.

For example, science seems to be telling us the following:

That life is a means of bringing the universe to its final state as soon as possible.

Let me rephrase this a few times so as to clarify.

Life is a means of increasing the entropy of the universe. For example, the production of entropy on Earth is greater than if there had been no life on Earth. But by increasing the entropy of the universe, we are speeding up the eventual fate of the universe, which appears to be a uniform state of equilibrium. At equilibrium, there is no entropy production...there is no life. There is just equilibrium (now, of course, this depends on whether there is enough mass in the universe to cause the universe to contract backwards...but either way, the end fate of the universe appears to be one of equilibrium, and of no life.)

The purpose of life is to bring the universe to equilibrium as fast as possible.

While there are a few religions with which this 'purpose of life' may be compatible, it is quite incompatible with the Muslim-Jewish-Christian religions of the 'West'. In these religions, there is a heaven and everlasting life for the individual. Science does not give us a 'heaven' and suggests not only that life for the individual must end, but also that all life must eventually end.

How do we reconcile life as a means to heaven and life as a means of bringing life to an end? (i.e. life as a means of ending all life and eliminating all gradients across the universe.)

I do not know how to reconcile this problem. My suggestion, however, is not to ignore this problem. It seems like the religion of the future will need to be some combination of Star Wars and Hinduism.

The Star Wars element is required in order for us to understand that our goal is to populate the universe with life. We must expand to other planets so that we can create life on other planets. We must explore other planets and learn to co-exist with other lifeforms so that together we can increase the entropy production of the universe.

We need the Hindu element in order to understand that universes, life and our notions of gods (even Brahman itself) come and go just like a lotus flower dies and comes back again each year. At equilibrium, there is no male and female. There is no good and evil once we reach equilibrium because all forms and structures dissolve away. But perhaps, the universe will start over again. Perhaps, there are multiple universes. Perhaps, metaphorically, the sleeping Brahman will awake again on a new lotus flower and open his eyes.

What we need is a new Luke Skywalker in the form of Bill Gates and Chuck Bartowski (from the TV show Chuck.) We need a new Leia Organa Solo in the form of Meg Whitman and The Bride (Uma Thurman in Kill Bill). Our heroes are action figures like Chuck or The Bride. But we also need business heroes: we need heroes that build businesses like Bill Gates and Meg Whitman in our TV shows and movies.

We all need to fight, but the question is what are we fighting for. To defeat the Galactic Empire, to avenge an assassination attempt, to defeat crime organizations and protect the US government?

No, what we are fighting for is the capability of growing life on other planets so that the entropy of the universe increases even faster, so that life expands. So that life evolves even more. So that life becomes more complex, more dynamic, and more self-aware. More feedback, more mirrors back on itself, higher levels of consciousness, and larger societies on more planets with more entropy production.

Businesses, nations, and families are routes to speed up the production of entropy, but there is no ultimate right business, nation, or family. There is no one right meme.

There is no perfect solution to the question: how to bring the universe to equilibrium the fastest. Some memes are better at this than others, but there is no way to prove that one meme is better than another.

So how do we choose between differing memes? How do we choose between Christianity and some new religion? (perhaps something like what I described above.) What if this new religion incorporated into it the idea that there is no right religion? Why should we believe something that states that it's not correct?

Ultimately, this new religion must be capable of growing faster than all other religions and perhaps must also be flexible enough to incorporate old religions into it smoothly.

What we need is a reconciling of Luke Skywalker, The Bride, Leopold Bloom, Molly Bloom, Bill Gates, Yoda, Meg Whitman, Jeff Bezos, Leia Organa Solo, and The Dude.

We need to avoid the creation of billions of George Costanzas: fearful of death and the ending of the individual ego, afraid of his parents, incapable of owning a business, never to marry and bring new life into the world. George Costanza represents what goes wrong when religion and science are divorced from each other. He lives in a world in which he understands neither science nor religion and can gain nothing from either. He lives in a world in which he can't make decisions on whom to date or what to eat, because each action carries too much significance; he doesn't see the purpose of life.

The purpose of life is to increase the entropy of the universe. The question is: are we capable of living up to that purpose, or do we run and hide from it? Are we afraid to give up this individual ego that has sprouted up from the elements when the time arises? Are we willing to live with the individual ego during the peak of our life in order to take advantage of its capability to solve complex problems? (i.e. are we willing to not become Zen Masters until we get close to the ego's end? Are we willing to avoid the pitfalls of the twenty-year-old 'Zen Master' and the pitfalls of the seventy-year-old George Costanza?)

## Friday, December 31, 2010

## Sunday, December 12, 2010

### Self-Replicating Solar Robots

I've wanted to write about self-replicating solar robots for a while. In particular, I've been interested in the idea of sending them to the Moon, and just recently I read an article on this topic by professor Klaus Lackner, who calls his self-replicating robots 'auxons.'

While the idea has been around for a while, it looks like Klaus Lackner and his co-authors (Darryl Butt and Christopher Wendt) have done the best job of thinking through all of the chemical reactions that must occur to derive the materials needed to produce self-replicating robots.

A self-replicating robot has to collect enough electricity from sunlight in order to be able to build a duplicate of itself. I've defined the work return on investment as the ratio of the total net work (electrical work in this case) generated over the lifetime of the machine divided by the total exergy (also electricity in this case) to build the machine. A self-replicating solar robot requires a 'work' return on investment greater than one, and for the robot colony to grow, the return on investment must be much greater than one. Also important is the rate of return on investment, which is related to the net electricity generated per unit time divided by the upfront electricity consumed in building the solar auxon. (The "net" in the numerator means the gross electricity produced minus recurring electricity expenses, associated perhaps with labor, maintenance, or fuel. For the solar auxons, the recurring work is the electricity required to move, repair, and clean.) The rate of return is typically given in units of %/yr. You can invert the rate of return to approximately calculate the payback time, and then double that value to find the time required to double the population of the solar robots.
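As a quick illustration of these definitions, here is a sketch relating the rate of return to the payback and doubling times. The 5 %/yr figure is a made-up example value, not a number from Lackner's paper; the ln(2) formula assumes continuous reinvestment, while doubling the payback time gives the more conservative rule of thumb.

```python
import math

# Assumed example value, NOT from Lackner's paper:
rate_of_return = 0.05  # net work out per year / upfront work invested (5 %/yr)

payback_time = 1.0 / rate_of_return               # years to repay the upfront work
doubling_time_upper = 2.0 * payback_time          # conservative rule-of-thumb estimate
doubling_time_exp = math.log(2) / rate_of_return  # assuming continuous reinvestment

print(payback_time)         # 20.0 years
print(doubling_time_upper)  # 40.0 years
print(doubling_time_exp)    # ~13.9 years
```

The spread between the last two numbers shows why the doubling-the-payback rule is only approximate: a colony that reinvests continuously doubles faster.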

Lackner and his co-authors found that the solar colony could double in size every few months. Their numbers seem rather optimistic, though, because the payback time for solar PV panels is on the order of 10 years, and that's using a solar cell chemistry that probably consumes less electricity to manufacture than the chemistry that would have to be used by the self-replicating robots. On the other hand, the doubling time for blue-green algae is on the order of 20 hours. I have yet to check their numbers, so I am just speculating right now. (I'd like to do a full analysis, perhaps as a class project in a future course I teach.)

I'm particularly interested in the self-replicating solar robots because they seem to be the best way of populating the Moon. (Unlike Mars, it seems unlikely that water-carbon-based lifeforms, such as algae, could survive on the Moon. We could probably populate Mars with the introduction of strong greenhouse gases [to melt the ice caps] and some algae from the Earth.) If the Moon could be covered with solar robots, we could possibly use the Moon as a staging ground for further exploration of the solar system.

The chemical composition of the Moon (depending on location) is roughly 45% silica (SiO2), 20% alumina (Al2O3) and 10% iron oxide (FeO). The metals in these three materials would make up the main components of the self-replicating robots (Si, Al, & Fe). As Lackner found, one would have to develop innovative chemical processing techniques in order to make the Si, Al and Fe from the oxides.

While it's possible to produce silicon from some electro-chemical reactions, the robots could also use electricity to run a high-temperature plasma arc that heats the silicon dioxide to 4000 K, at which point the oxygen is released into the Moon's atmosphere. Unfortunately, the oxygen won't stay up there very long, because the Moon can't hold on to its gases as well as the Earth can. (I calculated that the probability of an oxygen molecule escaping from the Moon's gravity is 424 million times larger than the probability of an oxygen molecule escaping from the Earth's gravity.) Still, it may be possible to build up a sizable pressure of O2 in the atmosphere once the self-replicating robots start to cover the entire Moon.
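The flavor of that escape-probability comparison can be sketched with the Boltzmann factor for thermal (Jeans) escape, exp(-m·v_esc²/2kT). The temperatures below are my own rough guesses, and the ratio is extremely sensitive to them, so this sketch will not reproduce the 424-million figure without knowing the assumptions behind it; it only shows that escape from the Moon is vastly more probable.

```python
import math

k = 1.380649e-23        # Boltzmann constant, J/K
m_O2 = 32 * 1.6605e-27  # mass of an O2 molecule, kg

def escape_exponent(v_esc, T):
    """m*v_esc^2 / (2*k*T): escape energy measured in units of kT."""
    return m_O2 * v_esc**2 / (2 * k * T)

# Escape velocities are standard values; the temperatures are assumptions:
x_moon = escape_exponent(2380.0, 390.0)     # lunar dayside, assumed ~390 K
x_earth = escape_exponent(11200.0, 1000.0)  # terrestrial exosphere, assumed ~1000 K

# Ratio of Boltzmann factors: how much more probable escape from the Moon is
ratio = math.exp(x_earth - x_moon)
print(ratio)
```

With these assumed temperatures the ratio is astronomically larger than 424 million, which just underlines how strongly the answer depends on the temperatures chosen.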

Once the solar robots populate the Moon, the electricity they generate could be used to produce hydrogen from water so that we could fuel hydrogen rockets for further travels into the solar system.

My question is: would we consider self-replicating solar robot colonies to be living organisms? I think that self-replicating solar robots fit the definition of life, but I admit that it'd be a primitive form of life unless they could modify their computer program, i.e. their "DNA".

What would be interesting would be to compare the size of the computer program required to be encoded into each robot vs. the size of the DNA of blue-green algae. My guess is that the two sizes would be comparable.
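For scale on the DNA side, here is a rough estimate. The genome size is my own approximate assumption (a cyanobacterium such as Synechocystis carries roughly 3.6 million base pairs), not a figure from the post; at 2 bits per base, that works out to under a megabyte.

```python
# Assumed approximate genome size for a cyanobacterium (illustrative):
genome_bp = 3.6e6    # base pairs
bits_per_base = 2    # log2 of 4 possible bases (A, T, G, C)

genome_bytes = genome_bp * bits_per_base / 8
print(genome_bytes)  # 900000.0 bytes, i.e. ~0.9 MB
```

So the comparison amounts to asking whether the robot's control program could fit in roughly a megabyte.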

In the future, I plan to calculate the return on investment and the rate of return of the self-replicating solar robots in order to estimate how quickly they could cover the entire surface of the Moon.

A back-of-the-envelope calculation of the return on investment can be made from: 1) the exergy requirement for producing Si, Al and Fe; 2) the collection efficiency of each robot; 3) the lifetime of each robot; and 4) the amount of Si, Al and Fe required for each robot.
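The four inputs above can be wired together as follows. Every number in this sketch is a placeholder I made up for illustration; none comes from Lackner's paper, and a real analysis would have to justify each one.

```python
# All values below are assumed placeholders, not results from the literature.
embodied_exergy = 50e6 * 100      # J: assumed ~50 MJ/kg to win Si/Al/Fe from
                                  # their oxides, times an assumed 100 kg robot
collector_area = 10.0             # m^2 of solar collector per robot (assumed)
solar_flux = 1361.0               # W/m^2, solar constant (no lunar atmosphere)
efficiency = 0.10                 # assumed net collection efficiency
lifetime_s = 10 * 365.25 * 86400  # assumed 10-year robot lifetime, in seconds

lifetime_output = collector_area * solar_flux * efficiency * lifetime_s  # J
work_roi = lifetime_output / embodied_exergy

print(work_roi)  # > 1 means the colony can grow
```

With these made-up inputs the 'work' return on investment comes out well above one, but the point of the exercise is the structure of the calculation, not the particular answer.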

## Sunday, December 5, 2010

### Pi In The Sky

A Review of "Where Mathematics Comes From"

I found this book quite interesting because it attempts to answer the question: where does our knowledge of abstract concepts in mathematics come from? Their answer, quite unique, is that abstract concepts in mathematics (i.e. a priori knowledge) cannot be learned without a significant amount of sense data (i.e. a posteriori knowledge). I suggest reading the book whenever you have the chance; it's well worth your time.

Summary of the Book: Eminent psychologists challenge the long-held conviction that mathematics exists on a transcendent plane above humans

If a tree falls in the woods and nobody is there, does it make a sound? Did the number pi exist before humans studied circles? The question of whether mathematicians discover or create is still a heated topic of debate. Mathematics, in all its transcendent beauty, is assumed to have an objective existence external to human beings. This isn’t the case, according to cognitive psychologists George Lakoff and Rafael Núñez. They believe that mathematics, like all human endeavors, “must be biologically based.” In the book, Where Mathematics Comes From, the authors have the goal of convincing both lay readers and mathematicians alike that cognitive psychology will explain the ways in which we learn mathematics. For them, mathematics exists only as a tool for the human mind to use when perceiving the world.

As simple as this sounds, we should consider ourselves lucky to have such a gift. Their research suggests that only a few animals have the ability to count. Our mind formulates integrals, parabolas and rational numbers by expanding on our innate ability to count. Lakoff and Núñez argue that mathematics is fundamentally a human enterprise, arising from basic human activities. We can only comprehend mathematical concepts, such as imaginary numbers, through metaphors that compare these concepts to activities in real life.

Lakoff and Núñez take us on a journey through mathematics’ past and cognitive psychology’s future in an attempt to bring mathematics down to earth. We start in a laboratory watching rats and children in counting experiments, and we eventually work our way through the minds of great mathematicians, such as George Boole and Leonhard Euler.

In their study of the history of mathematics, the authors observed that each layer of mathematics builds upon the ones below. Like the iron and steel that hold together modern skyscrapers, a series of grounding and linking metaphors holds mathematics together. The authors uncovered a few grounding metaphors without which we would not be able to visualize or understand concepts like addition or multiplication. These metaphors are the concrete foundation of mathematics. Linking metaphors connect the different fields of mathematics, such as arithmetic, algebra, geometry, trigonometry, calculus, and complex analysis, just as elevators connect the different stories of a skyscraper.

As cognitive scientists go, Lakoff and Núñez are as different as they get. George Lakoff has been studying linguistics and the neural theory of language since the 60s. Rafael Núñez is just beginning a promising career in the field, having devoted the last decade to studying the origin of mathematical concepts. What they share is a Berkeley psychology department and an interest in learning where mathematics originates in our minds.

Here is an example of the beauty of mathematics, which happens to be the climax of their book.

e to the i times pi is equal to negative one

e^(i*pi) = -1

The Euler equation has remained to this day mysterious, and even somewhat mystical, because it is by no means intuitive that raising an irrational number to an imaginary, irrational power could possibly yield negative one! One would expect the result to be imaginary, or at least beyond our comprehension. Lakoff and Núñez argue that pi, i, and e are more than just numbers. They represent ideas. These ideas can only be grasped by a human mind that understands concepts such as periodicity and rotation.
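As a quick numerical sanity check (my illustration, not the authors'), Python's complex-math module confirms the identity:

```python
import cmath

# e^(i*pi): should equal -1, up to floating-point rounding
z = cmath.exp(1j * cmath.pi)
print(z)  # ~ -1 + 1.2e-16j: the tiny imaginary part is rounding error
```

The leftover imaginary part of order 1e-16 is just the floating-point representation of sin(pi), which is exactly zero in the mathematics.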

The authors devote such a large section of their book to the Euler equation because this equation combines many branches of mathematics and therefore is a good place to locate linking metaphors between these branches. To give a flavor for the Euler equation, a few of the metaphors used to understand the numbers i and pi will be presented. The beautiful proof of the equation is left for the authors.

The concept of pi from trigonometry is not the same as the concept of pi that comes from geometry. Pi is no longer just the ratio of a circle’s circumference to its diameter, nor does it just take on the value 3.14159…. It is also a measure of periodicity for recurrent phenomena. To understand pi’s importance to periodicity, imagine yourself on a circle. Any point will do. Imagine traveling around the circle. How far do you travel before you return to the same point? It depends on the radius of the circle, so let’s just say it’s a circle of radius one unit. You must travel a distance of two pi before reaching your starting point. In the same way, one must travel an octave to reach the same note on a musical scale. To reach the point opposite you on the circle, you must travel a distance of pi. Outside of the realm of mathematics, we are left with a language that describes recurrence through the circular metaphor, as can be seen in the line, “I can’t wait for the holiday season to come around again.”
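That periodicity can be demonstrated directly (my illustration, not from the book): represent a point on the unit circle as e^(i·theta), travel 2·pi to return to the start, and pi to reach the opposite point.

```python
import cmath
import math

start = cmath.exp(1j * 0.7)  # an arbitrary point on the unit circle
after_full_lap = cmath.exp(1j * (0.7 + 2 * math.pi))  # travel 2*pi farther
opposite = cmath.exp(1j * (0.7 + math.pi))            # travel pi farther

print(abs(after_full_lap - start))  # ~0: a lap of 2*pi returns to the start
print(abs(opposite + start))        # ~0: a lap of pi reaches the antipode
```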

The number i is altogether a mysterious quantity. It is by definition the square root of negative one. If you think about this, i can’t be a positive number. A positive number times another positive number is still positive. However, i times i is negative. By the same logic, i can’t be negative because a negative number times a negative number is positive. So, if it can’t be positive or negative, and it definitely isn’t zero, then it can’t be a real number.

Instead of focusing on the number i itself, it is more important to understand multiplication by i. We begin by visualizing the number line. This requires having learned the grounding metaphor that numbers can be represented as points on a line. A real number is one that can be visualized on a number line. From there, we need to add a second dimension, which can be done by drawing an axis perpendicular to the first. For real numbers, we have the intuitive understanding that multiplication by negative one means finding the number symmetric with respect to the origin, which is the same as saying that multiplication by negative one equals a rotation by 180 degrees in this 2-D plane. Multiplying by i twice is the same as multiplying by negative one, and two rotations of 90 degrees equal 180 degrees. So, multiplication by i is the same as rotation by 90 degrees. This means that real numbers on the first number line move to the second axis when they are multiplied by i.
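The rotation metaphor is easy to see with Python's built-in complex numbers (my illustration, not the authors'): multiplying by i moves a real number onto the perpendicular axis, and doing it twice is the same as multiplying by negative one.

```python
z = 3 + 0j         # a real number, a point on the first (real) axis
once = z * 1j      # one 90-degree rotation: lands on the imaginary axis
twice = once * 1j  # a second 90 degrees: 180 total, same as multiplying by -1

print(once)   # 3j
print(twice)  # (-3+0j)
```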

This explanation of i and pi provides enough background to understand Lakoff and Núñez’s proof of the Euler equation using the metaphors found earlier in their book. What they show is that Euler, and all mathematicians since who have proved this equation, could only have done so by learning important metaphors relating mathematics to physical concepts such as lines, rotation and periodicity.

It is fascinating to wonder whether every theorem and proof in mathematics is written down in one Book, drawn from by a few inspired mathematicians. This seems unlikely, though, because it would mean that the metaphors we use are universal. A species living on a distant planet probably does not visualize numbers as points on a line. Perhaps for them, numbers are represented as stars in the sky. It could even be possible for them to have a high level of mathematical sophistication without the use of pi.

A Review of "Where Mathematics Comes From"

I found this book to be quite interesting because it attempts to answer the question: where does our knowledge of abstract concepts in mathematics come from? Their quite unique answer is that abstract concepts in mathematics (i.e. a priori knowledge) can not be learned without a significant amount of sense data (i.e. a posteriori knowledge). I suggest reading the book whenever you have the time; it's well worth your time.

Summary of the Book: Eminent psychologists challenge the long-held conviction that mathematics exists on a transcendent plane above humans

If a tree falls in the woods and nobody is there, does it make a sound? Did the number pi exist before humans studied circles? The question of whether mathematicians discover or create is still a heated topic of debate. Mathematics, in all its transcendent beauty, is assumed to have an objective existence external to human beings. This isn’t the case, according to cognitive psychologists George Lakoff and Rafael Núñez. They believe that mathematics, like all human endeavors, “must be biologically based.” In the book, Where Mathematics Comes From, the authors have the goal of convincing both lay readers and mathematicians alike that cognitive psychology will explain the ways in which we learn mathematics. For them, mathematics exists only as a tool for the human mind to use when perceiving the world.

As simple as this sounds, we should consider ourselves lucky to have such a gift. Their research suggests that only a few animals have the ability to count. Our mind formulates integrals, parabolas and rational numbers by expanding on our innate ability to count. Lakeoff and Núñez argue that mathematics is fundamentally a human enterprise, arising from basic human activities. We can only comprehend mathematical concepts, such as imaginary numbers, through metaphors that compare these concepts to activities in real life.

Lakeoff and Núñez take us on a journey through mathematics’ past and cognitive psychology’s future in an attempt to bring mathematics down to earth. We start in a laboratory watching rats and children in counting experiments and we eventually work our way through the minds of great mathematicians, such as George Boole and Leonard Euler.

In their study of the history of mathematics, the authors observed that each layer of mathematics built upon the ones below. Like the iron and steel that holds together modern skyscrapers, a series of grounding and linking metaphors holds mathematics together. The authors uncovered a few grounding metaphors from which we would not be able to visualize or understand concepts like addition or multiplication. These metaphors are the concrete foundation for mathematics. Linking metaphors connect the different fields of mathematics, such as arithmetic, algebra, geometry, trigonometry, calculus, and complex analysis, just as elevators connect the different stories of a skyscraper.

As cognitive scientists go, Lakeoff and Núñez are as different as they get. George Lakeoff has been studying linguistics and the neural theory of language since the 60s. Rafael Núñez is just beginning a promising career in the field, having devoted the last decade studying the origin of mathematical concepts. What they share in common is a Berkeley Psychology department and an interest in learning where mathematics originates in our mind.

Here is an example of the beauty of mathematics, which happens to be the climax of their book.

e to the i time pi is equal to negative one

e^i*pi = -1

The Euler Equation has remained to this date mysterious, and even somewhat mystical, because it is by no means instinctive to believe that raising an irrational number to an imaginary, irrational power could possible yield negative one! One would expect the result to be imaginary, or at least beyond our comprehension. Lakeoff and Núñez argue that pi, i, and e are more than just numbers. They represent ideas. These ideas can only be grasped by a human mind that understands concepts such as periodicity and rotation.

The authors devote such a large section of their book to the Euler equation because this equation combines many branches of mathematics and therefore is a good place to locate linking metaphors between these branches. To give a flavor for the Euler equation, a few of the metaphors used to understand the numbers i and pi will be presented. The beautiful proof of the equation is left for the authors.

The concept of pi from trigonometry is not the same as the concept of pi that comes from geometry. Pi is no longer just the ratio of the circumference to diameter on a circle nor does it just take on the value of 3.14.159…. It is also a measure of periodicity for recurrent phenomena. To understand pi’s importance to periodicity, image yourself on a circle. Any point will do. Image traveling around the circle. How far do you travel before you return to the same point on the circle? It depends on the radius of the circle, so let’s just say that it’s a circle of length of one unit. You must travel through a distance of two pi before reaching one’s starting point on any circle. In the same way, one must travel an octave to reach the same note on a musical scale. To reach the point opposite from you on the circle, you must travel a distance of pi. Outside of the realm of mathematics, we are left with a language that describes recurrence through the circular metaphor, as can be seen by the line, “I can’t wait for the holiday season to come around again.”

The number i is altogether a mysterious quantity. It is by definition the square root of negative one. If you think about this, i can’t be a positive number. A positive number times another positive number is still positive. However, i times i is negative. By the same logic, i can’t be negative because a negative number times a negative number is positive. So, if it can’t be positive or negative, and it definitely isn’t zero, then it can’t be a real number.

Instead of focusing on the number i, it will be more important to understand multiplication by i. We begin by visualizing the number line. This requires having learned the grounding metaphor that numbers can be represented as points on a line. A real number is one that can be visualized on a number line. From there, we need to add a second dimension, which can be done by drawing an axis perpendicular to the first. For real numbers we have the intuitive understanding that multiplication by negative one means finding a number symmetric with the respect to the origin, which is the same as saying multiplication by negative one equals a rotation by 180 degrees in these 2-D plane. Multiplication by i times i is the same as multiplying by negative one. Two rotations of 90 degrees equal to 180 degrees. So, multiplication by i is the same as rotation by 90 degrees. This means that real number on the first number line move to the second axis when they are multiplied by i.

This explanation of i and pi provides enough background to understand Lakoff and Núñez's proof of the Euler equation using the metaphors developed earlier in their book. What they show is that Euler, and all mathematicians since who have proved this equation, could only have done so by learning important metaphors relating mathematics to physical concepts such as lines, rotation and periodicity.
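The identity itself is easy to verify numerically with Python's standard `cmath` library:

```python
import cmath
import math

# e^(i*pi) + 1 should be 0, up to floating-point rounding.
residual = cmath.exp(1j * math.pi) + 1
print(abs(residual))   # on the order of 1e-16

# More generally, e^(i*t) = cos(t) + i*sin(t) ties together
# rotation (i) and periodicity (pi) at any angle t:
t = 0.7
assert cmath.isclose(cmath.exp(1j * t), complex(math.cos(t), math.sin(t)))
```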

It is fascinating to wonder whether every theorem and proof in mathematics is written down in one Book, drawn from by a few inspired mathematicians. This seems unlikely, though, because it would mean that the metaphors we use are universal. A species living on a different planet probably does not visualize numbers as points on a line. Perhaps for them numbers are represented as stars in the sky. It might even be possible for them to reach a high level of mathematical sophistication without the use of pi.

## Saturday, November 20, 2010

### The Next Force of Nature: SU(4) ?

The goal of this post is to discuss whether there is another force of nature. In particular, this post will discuss whether there is a force of nature associated with the symmetry SU(4). I will discuss the reasons both for and against a force described by the Lie algebra SU(4) or U(4). Such a force would have 15 or 16 exchange particles. This force of nature might tell us that electrons, neutrinos, and quarks aren't elementary particles; in other words, that these particles are composed of smaller, more fundamental particles. We might need this force to tell us the masses and charges of what we are today calling elementary particles.

Before going into this theory, I want to give a summary of my understanding of the four forces of nature found so far. At the end, I'll discuss why I think that there are likely only four forces of nature (3 of which are time symmetric and 1 of which is time asymmetric), and why this is likely due to the fact that there are only 4 normed division algebras.

There are four known forces: gravity (G), electromagnetism (E&M), weak nuclear (WN) & strong nuclear (SN). There are many similarities between the forces, and some interesting differences between them, which appear when they separate out at low enough temperatures.

Here is a summary before going into the details: (the mathematical terms below can be searched for in Wikipedia. I'll add links to them soon.)

Gravity: Probably no exchange particle, only positive mass (i.e. no negative mass bodies), associated division algebra is addition and multiplication by the real numbers (commutativity, associativity, and the vectors have length.) Superposition always holds. Associated Lie Algebra is SO(1)...the unit point. Parity is conserved and most likely time is reversible for all interactions involving gravity.

E&M: One exchange particle (the photon); positive or negative charge; associated division algebra is addition and multiplication of the complex numbers (commutativity, associativity, and the vectors have length). Superposition always holds. The algebra here is abelian. Associated Lie Algebra is U(1)...the unit circle. Parity is conserved and most likely time is reversible for all interactions involving E&M.

Weak Nuclear: Three exchange particles, W+, W−, & Z. The associated division algebra is addition and multiplication of the quaternions (x+iy+jz+kw) (no commutativity, but associativity, and the vectors have length). Superposition does not always hold. The associated Lie Algebra is SU(2)...similar to the surface of a sphere. Non-abelian algebra. Parity is not conserved, and there is no time reflection symmetry. This means that interactions involving the weak nuclear force can be irreversible: x → -x and t → -t are not symmetry operators for this force. Other interesting facts about the weak nuclear force are that it can convert one type of quark into another type of quark (i.e. strangeness is not conserved in weak interactions) and that it can distinguish between left- and right-handed particles (i.e. parity is not conserved, as mentioned above).

Strong Nuclear Force: Eight exchange particles (the eight gluons). The associated division algebra appears to be the octonions (x+iy+jz+kw+...four more directions) (no multiplication commutativity, no multiplication associativity, but the vectors have length). The associated Lie Algebra is SU(3). SU(3) has 8 generators, which makes sense with the eight gluons. So far, the evidence suggests that C, P, & T are valid symmetry operators of the strong nuclear force. The problem is that we don't really know why this is the case. The SU(3) Lie algebra is much more complicated than the SU(2) algebra, so it's not clear why the weak nuclear force is space-time asymmetric, whereas the strong nuclear force is space-time symmetric. This is called the strong CP problem.

________________________________________________

So, here's the same info, but in more detail.

E&M is a linear force in the sense that 'superposition' is always valid for charges that obey this force of nature. There is only one exchange particle for E&M, the photon. The symmetry describing E&M is U(1), which is the symmetry of the unit circle. There is only one variable to describe one's position on the unit circle (angle), and hence, there's only one exchange particle for the E&M force. (The photon has no mass, and I believe that this is partly due to the fact that E&M is a 'linear' force of nature, i.e. Abelian.) The mathematics associated with E&M is that of complex numbers (real numbers plus imaginary numbers). Commutativity and associativity hold for complex numbers, and therefore, they hold for E&M interactions. The E&M force is really like the wrinkles in a tablecloth when you make a circular twist in the middle of the cloth and hold the ends fixed. This is like changing the phase of an oscillating electron. The change in phase must be communicated to the rest of the world, and the photon is the carrier of this information in the twisting of U(1) space-time at the location of the charged particle.

The weak nuclear force is the first 'non-linear' force of nature. There are three exchange particles (Z, W−, & W+), all of which have mass. The weak nuclear force is non-Abelian, which means that the order of operations affects the outcome of multiplying operators (which could be represented by matrices, which are well known to be non-commutative). The WN force has the Lie algebra symmetry of SU(2), which is similar to the symmetry of the surface of a sphere. A sphere's surface has two angles to describe one's location on the globe and one angle to describe the orientation of the globe about an axis through the center. (The operators associated with rotating through a given angle do not commute with each other, which can be easily demonstrated by rotating a book 90 degrees about one axis, then 90 degrees about another axis, then -90 degrees about the first axis, and finally -90 degrees about the second axis. Note: you don't end up where you started.) Interestingly, other symmetries that hold for the E&M force, such as parity and superposition, don't hold for the WN force. And this is tied back, once again, to the fact that the WN force is non-Abelian. The mathematics associated with the WN force is the quaternions (real numbers plus the i, j, k axes), in which associativity holds but commutativity does not. The weak nuclear force explains how quarks change flavor, but not color (i.e. the force can turn an up quark into a down quark, but it doesn't change the color of the quark). Interestingly, the weak nuclear force does not obey spatial or temporal reflection symmetry. This means that the weak nuclear force is irreversible, and this may be one reason that time appears to go in one direction. In technical terms, the weak nuclear force does not have P or T symmetry. The weak nuclear force is a weird one because it only acts on left-handed particles or right-handed anti-particles. And another weird aspect of the weak nuclear force is that it can violate the conservation of strangeness or charm because it can convert one type of quark into another.
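The book-rotation demonstration above can be reproduced with 3×3 rotation matrices; here is a minimal pure-Python sketch (the helper names are mine):

```python
def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    # For a rotation matrix, the transpose is its inverse
    return [[A[j][i] for j in range(3)] for i in range(3)]

Rx = [[1, 0, 0], [0, 0, -1], [0, 1, 0]]   # +90 degrees about the x axis
Ry = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]]   # +90 degrees about the y axis

# Rotate +90 about x, then +90 about y, then -90 about x, then -90 about y
# (column-vector convention: the first rotation applied sits rightmost):
net = matmul(transpose(Ry), matmul(transpose(Rx), matmul(Ry, Rx)))

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(net == identity)   # False: the book does not return to its start
```

Because the rotations do not commute, the four moves leave a net rotation behind, exactly as in the book experiment.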

The strong nuclear force is also 'non-linear', but even more so, in that the mathematics of the SN force is similar to the octonions (real numbers plus i, j, k, l, m, n, o). Neither associativity nor commutativity holds for multiplying octonions together. There are eight exchange particles for the strong nuclear force (which binds together quarks). The force is so strong that it also binds together quark-sets (like protons and neutrons), and we have yet to see clear evidence of a lone quark. As with the WN force, the principle of superposition does not hold for the SN force, which is part of the reason that it's difficult to solve problems here: you can't solve part of the problem and then add it to another part of the problem you solved earlier (like trying to solve non-linear differential equations). One interesting question is why there have been no violations of P or T symmetry (parity or time reflection) for the strong nuclear force, given that it is even more non-linear than the weak nuclear force.

So, I haven't discussed gravity yet, even though I should have placed it first. In my understanding, there is likely no exchange particle for gravity, and it follows the symmetry SO(1)...a point...which really means that there is no variable (exchange particle) associated with the mathematics (force). The mathematics of gravity is similar to the mathematics of the real numbers. Associativity and commutativity both hold here, and the mathematics is even easier to learn and apply than the complex numbers. Superposition and parity always hold here as well. Gravity is the curving of space-time by Lorentz transformations, and there is no exchange particle associated with this curving of space-time. There is no plus or minus 'charge' (i.e. mass) as there are positive and negative electrical charges in E&M. There's only mass, and this mass warps time and space. (Here mass is the sum of the "rest" mass of a particle and its mass due to the energy of the particle.) In some ways, Einstein's theory of general relativity can be seen as a local gauge theory, just like the local gauge theories discussed above for E&M, WN & SN. If the laws of physics are the same for all observers (even those observers that are accelerating), then there must be a gravitational field. The gravitational field is the curvature of space-time due to mass/energy.

But we still have some unanswered questions left in the realm of physics. There appears to be a missing force (or a missing equation) that would tell us how to predict the masses and charges of the quarks (all six types), the electrons (muons, tauons) and the neutrinos (all three types). There appears to be too much coincidence in the size of the families and in the charges of the 'elementary' particles (+/-1, +/-2/3, +/-1/3, and 0 for electrons, up quarks, down quarks, & neutrinos, respectively). It may be that there is a particle with a charge of +/-1/3 or perhaps +/-1/6 of the charge of an electron that is even more elementary than the ones listed above. Pairs and/or groups of this elemental particle (and its antiparticle) would then determine the total charge, and whether the composite particles (listed above) feel the E&M, WN or SN forces. All of the particles feel the force of gravity because it is intrinsic to all particles with energy.

I believe that there might be a missing force that describes the bonding of the particles that make up electrons, neutrinos, up quarks, and down quarks (and their related families of particles). My guess is that the electron is not a fundamental particle, but rather that it consists of smaller particles that 'bond' to form electrons, muons, or tauons, and how the particles are bonded determines whether the system is in an electron, muon, or tauon state. The tauon would be an excited electron in a similar way that 1s5 is an excited state of argon. We don't say that 1s5 is a new atom, just an excited state that will decay back to the ground state of argon. It's not clear to me yet (since I don't know how to predict the masses of the tauon/muon/electron) whether there are higher energy states available to the electron.

I believe that the same holds true for the neutrinos and quarks. I think that the neutrinos and quarks might not be fundamental particles, but rather they are made up of particles bound together by another force, which might have the symmetry of SU(4). How the particles inside of the quarks are bound together determines their energy, and hence their rest mass.

So, what can we say about such a force? My guess is that this force has the symmetry of SU(4) or U(4), which means that there would be either 15 or 16 exchange particles for this force. The mathematics associated with this next force is probably the hexagonions (also called the sedenions), which have 16 axes. This mathematics is even 'weirder' than the octonions because not only do associativity and commutativity fail to hold, but the algebra is no longer a normed division algebra: the length of a product of two vectors need not equal the product of their lengths in this 16-D space. In hexagonion algebra, you can multiply two non-zero numbers together and obtain zero, which is impossible for the real numbers, complex numbers, quaternions or octonions. Length thus loses its multiplicative meaning in this sixteen-dimensional vector space, and this is a major reason to think that there is no force associated with such a twisting of SU(4) space.
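The zero-divisor claim can be checked directly. The sketch below (function names are mine, and the Cayley-Dickson doubling formula has several equivalent sign conventions; this is one standard choice) builds the quaternions and sedenions from the reals, confirms quaternion non-commutativity, and searches for two non-zero sedenions whose product is exactly zero:

```python
# Represent a hypercomplex number as a flat list of 2**n real coefficients.

def conj(x):
    # Hypercomplex conjugate: negate the "upper" half, recursively.
    if len(x) == 1:
        return [x[0]]
    h = len(x) // 2
    return conj(x[:h]) + [-t for t in x[h:]]

def add(x, y): return [s + t for s, t in zip(x, y)]
def sub(x, y): return [s - t for s, t in zip(x, y)]

def mul(x, y):
    # One standard Cayley-Dickson doubling rule:
    # (a, b)(c, d) = (a c - conj(d) b,  d a + b conj(c))
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    return sub(mul(a, c), mul(conj(d), b)) + add(mul(d, a), mul(b, conj(c)))

def e(i, dim):
    # Basis unit e_i as a coefficient list of length dim.
    v = [0.0] * dim
    v[i] = 1.0
    return v

# Quaternions (dim 4) are non-commutative: e1*e2 = e3 but e2*e1 = -e3.
ij = mul(e(1, 4), e(2, 4))
ji = mul(e(2, 4), e(1, 4))

def pm(i, j, s, dim):
    # e_i + s * e_j, with s = +1.0 or -1.0
    v = [0.0] * dim
    v[i] = 1.0
    v[j] = s
    return v

# Precompute the sedenion basis multiplication table (256 products).
T = [[mul(e(i, 16), e(j, 16)) for j in range(16)] for i in range(16)]

def find_zero_divisor():
    # By bilinearity, (e_i + s e_j)(e_k + t e_l) expands into four
    # table entries; search for a combination that cancels to zero.
    for i in range(1, 16):
        for j in range(i + 1, 16):
            for k in range(1, 16):
                for l in range(k + 1, 16):
                    for s in (1.0, -1.0):
                        for t in (1.0, -1.0):
                            prod = [T[i][k][m] + t * T[i][l][m]
                                    + s * T[j][k][m] + s * t * T[j][l][m]
                                    for m in range(16)]
                            if all(c == 0 for c in prod):
                                return (i, j, s, k, l, t)
    return None

witness = find_zero_divisor()
print(witness)   # a tuple of basis indices and signs; None would mean none found
```

The search succeeds only at dimension 16: in dimensions 1, 2, 4 and 8 (reals, complexes, quaternions, octonions) no such pair of non-zero factors exists.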

The hexagonion algebra is not a division algebra, and understanding it is even more difficult than the octonions. But just because it's difficult to understand doesn't mean that we can completely ignore it. We need to understand how to predict the masses of the electron/muon/tauon, the neutrinos, and the quarks. The mathematics associated with the force holding together the particles that make up an electron or a down quark might be described by the hexagonions, and it will therefore be quite difficult to make sense of what's going on (especially because there's no normed division algebra associated with this force).

And so this line of reasoning raises the question: are there more forces past this SU(4) force? I'm not really sure, and beyond this already-unseen force, we'll have to wait to see if we can ever find the points/strings/loops that make up an electron, a neutrino or a quark. Since it's difficult/impossible to find a quark by itself, I'm guessing that we'll find the particles inside an electron first. So, are those particles composed of even smaller particles that obey a force similar to SU(5) or U(5)? We have no hope right now of determining the answer to that question, but we can speculate.

The speculation goes as follows: the number of exchange particles seems to follow a rule of n squared (or n squared minus one): gravity, n=0, and no exchange particles; E&M, n=1, and one exchange particle; WN, n=2, and 3 exchange particles; SN, n=3, and 8 exchange particles; the SU(4) force, n=4, and 15/16 exchange particles; ??, n=5, and 24/25 exchange particles. You can see a close (but not perfect) relation between the number of exchange particles and the size of the Cayley-Dickson algebras (1=real, 2=complex, 4=quaternions, 8=octonions; and the continuing set...16=hexagonions, 32=trigintaduonions) when n<5. n squared and 2 to the power n start to diverge quickly starting with n=5. As well, the Cayley-Dickson algebras of dimension 32, 64, and 128 lose even more of the structure associated with the algebras that we are 'familiar with.' For example, once you reach the Cayley-Dickson algebra with 32 axes, you lose still more of the familiar rules of multiplication, and therefore I believe it's unlikely that the higher Cayley-Dickson algebras will have corresponding forces of nature; but just because we're not familiar with them doesn't mean they don't exist.
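The counting is easy to tabulate: SU(n) has n² − 1 generators, U(n) has n², and the n-th Cayley-Dickson doubling has 2^n real dimensions. A quick sketch of the comparison made above:

```python
# Compare gauge-group generator counts with Cayley-Dickson dimensions.
rows = []
for n in range(7):
    rows.append((n, n**2 - 1, n**2, 2**n))
    print(f"n={n}: su(n)={n**2 - 1:3d}  u(n)={n**2:3d}  2**n={2**n:3d}")

# The counts track each other only up to n=4 (16 vs 16); after that
# 2**n pulls away: 32 vs 25 at n=5, and 64 vs 36 at n=6.
```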

So, while I strongly believe that we will eventually be able to predict the colors, masses, and charges of quarks, electrons and neutrinos, I'm still not sure whether there are more forces beyond the four known forces. The main reason to believe that we are missing a force is that we cannot predict the masses or charges of the tauons/muons/electrons, the neutrinos, and the quarks. We need to see whether this new force can predict the masses, and if not, then we can start looking into forces beyond the SU(4) force discussed above.

While the charges of the neutrino, quarks and electron seem way too coincidental, there are reasons to believe that electrons are point particles (rather than composite particles). For example, QED (and its ability to predict electron-photon interactions down to the ~8th decimal) assumes that electrons are point particles. But still, why do quarks have -1/3 and 2/3 of the charge of an electron? We are left with the feeling that there is still something very fundamental that we don't understand about the forces of nature, the cause of the rest mass of particles, and the cause of the charge of particles.

I'd like to conclude with the following open questions:

Before going into this theory, I want to give a summary of my understanding of the four forces of nature found so far, and then at the end I'll discuss why I think that there are likely only four forces of nature (3 of which are time symmetric and 1 of which is time assymmetric), and why is this likely due to the fact that there are only 4 normed division algebras.

There are four known forces: gravity (G), electromagnetism(E&M), weak nuclear (WN) & strong nuclear (SN). There are many similarities between the forces, and some interesting differences between them, when they separate out at low enough temperatures.

Here is a summary before going into the details: (the mathematical terms below can be searched for in Wikipedia. I'll add links to them soon.)

Gravity: Probably no exchange particle, only positive mass (i.e. no negative mass bodies), associated division algebra is addition and multiplication by the real numbers (commutativity, associativity, and the vectors have length.) Superposition always holds. Associated Lie Algebra is SO(1)...the unit point. Parity is conserved and most likely time is reversible for all interactions involving gravity.

E&M: One exchange particle (the photon); positive or negative charge; associated division algebra is addition and multiplication of the complex number. (commutativity, associativity, and the vectors have length.) Superposition always holds. Algebra here is abelian. Associated Lie Algebra is U(1)...the unit circle. Parity is conserved and most likely time is reversible for all interactions involving E&M.

Weak Nuclear: Three exchange particles, W+, W-, & Z. associated division algebra is addition and multiplication of the quaternions (x+iy+jz+kw). (no commutativity, but associativity and the vectors have length.) Superposition does not always hold. The associated Lie Algebra is SU(2)...similar to the surface of a sphere. Non-abelian algebra. Parity is not conserved, and there is no time reflection symmetry. This means that interactions involving the weak nuclear force can be irreversible. x → -x and t → -t are not symmetry operators for this force. Other interesting facts about the weak nuclear force are the fact that the weak force can convert one type of quark into another type of quark (i.e. strangeness is not conserved in weak interactions) and that the weak nuclear force can distinguish between left and right handed particles. (i.e. parity is not conserved as mentioned above)

Strong Nuclear Force: Eight exchange particles, (the eight gluons). The associated division algebra appears to be the octonions (x+iy+jz+kw+...four more directions) (no multiplication commutativity, no multiplication associativity, but the vectors have length.) The associated Lie Algebra is SU(3). SU(3) has 8 operators, which would made sense with the eight gluons. So far, the evidence suggests that C,P,& T are valid symmetry operators of the strong nuclear force. The problem is that we don't really know why this is the case. The SU(3) Lie Algebra is much more complicated than SU(2) algebra, so it's not clear why the weak nuclear force is space-time asymmetric, whereas the strong nuclear force is space-time symmetric. THis is called the Strong CP Paradox.

________________________________________________

So, here's the same info, but in more details.

E&M is a linear force in the sense that 'superposition' is always valid for charges that obey this force of nature. There is only one exchange particle for E&M, the photon. The symmetry describing E&M is U(1), which is the symmetry of the unit circle. There is only one variable to describe one's position on the unit circle (angle), and hence, there's only one exchange particle for the E&M force. (The photon has no mass, and I believe that this is partly due to the fact that E&M is a 'linear' force of nature, i.e. Abelian.) The mathematics associated with E&M is that of complex numbers (real number plus imaginary numbers). Commutativity and associativity hold for complex numbers, and therefore, they hold for E&M interactions. The E&M force is really like the wrinkles in a table cloth when you make a circular twist in the middle of the cloth, and hold the ends fixed. This is like changing the phase of an oscillating electron. The change in phase must be communicated to the rest of the world, and the photon is the carried of this information in the twisting of U(1) space-time at the location of the charged particle.

The weak nuclear is the first 'non-linear' force of nature. There are three exchange particles (Z ,W-, & W+), all of which have mass. The weak nuclear force is non-Abelian, which means that the order of operation effects of the outcome of multiplying operators (which could be represented by matrices, which are well known to be non-Abelian.) The WN force has the Lie Algebra symmetry of SU(2), which is similar to the symmetry of the surface of a sphere. A sphere's surface has two angles to describe one's location on the globe and one angle to describe the orientation of the globe about an axis through the center. (The operators associated with moving a given angle do not commute with each other, which can be easily demonstrated by rotating a book 90 degrees about one axis, then 90 degrees about another axis, then -90 degrees about the first axis, and finally -90 about the second axis. Note: you don't end up where you started.) Interestingly, other symmetries that hold for the E&M force, don't hold for the WN force, such as parity and superposition. And this is tied back, once again, to the fact that the WN force is non-Abelian. The mathematics associated with the WN force is quaternions, (real numbers plus the i,j,k axes) in which associativity holds, but where commutativity does not hold. The weak nuclear force explains how quarks change flavor, but not color. (i.e. the force can turn an up quark into a down quark, but it doesn't change the color of the quark.) Interestingly, the weak nuclear force does not obey spatial or temporary reflection symmetry. This means that the weak nuclear force is irreversible, and this may be one reason that time appears to go in one direction. In technical terms, the weak nuclear force does not have P or T symmetry. The weak nuclear force is a weird one because it only acts on left-handed particles or right-handed anti-particles. 
And another weird aspect of the weak nuclear force is that it can violate the conservation of strangeness or charmness because it can convert one type of quark into another quark.

The strong nuclear force is also 'non-linear' , but even more 'non-linear' in the fact that the mathematics of the SN force is similar to the octonions (real numbers plus i,j,k,l,m,n,o). Neither associativity nor commutativity hold for multiplying octonions together. There are eight exchange particles for the strong nuclear force (which binds together quarks). The force is so strong that it also binds together quark-sets (like protons and neutrons) and we have yet to see clear evidence of a lone quark. As with the WN, the principle of superposition does not hold for the SN force, which is part of the reason that it's difficult to solve problems here. You can't solve part of the problem and then add it to another part of the problem you solved earlier. (Like trying to solve non-linear differential equations.) One interesting question is why there have been no violations of P or T symmetry (Parity or Time reflection) for the strong nuclear force, given that it is even more non-linear than the nuclear force.

So, I haven't discussed gravity yet, even though I should have placed it first. In my understanding, there is likely no exchange particle for gravity, and it follows the symmetry SO(1)...a point... which really means that there is no variable (exchange particle) associated with the mathematics (force). The mathematics of gravity is similar to the mathematics of the real numbers. Associativity and commutativity both hold here, and the mathematics is even easier to learn and apply than the complex numbers. Superposition and parity always hold here as well. Gravity is the curving of space-time by Lorentz transformations and there is no exchange particle associated with this curving of space time. There is no plus or minus charge (i.e. mass), as there are positive and negative electrical charges in E&M. There's only mass, and this mass warps time&space. (Where mass is the sum of the "rest" mass of a particle and its mass due to the energy of the particle.) In some ways, Einstein's theory of general relativity can be seen as local gauge theory just like the local gauge theories discussed above for E&M, WN & SN. If the laws of physics are the same for all observers (even those observers that are accelerating), then there must be a gravitational field. The gravitational field is the curvature of space-time due to mass/energy.

But we still have some unanswered questions left in the realm of physics. There appears to be a missing force (or a missing equation) that would tell us how to predict the masses and charges of quarks (all six types), electrons (muons, taon) and neutrinos (all three types). There appears to be too much coincidence in the size of the families and in the charges of the 'elementary' particles (+/-1, +/-2/3, +/-1/3, and +/- 0) (Electrons, Up quarks, Down quarks, & neutrino, respectively). It appears that there is a particle with a charge of +/-1/3 or perhaps +/-1/6 of the charge of an electron that is even more elementary than the ones listed above. Pairs and/or groups of this elemental particle (and its antiparticle) would then determine the total charge, and whether the collective particles (listed above) feel the E&M, WN or SN forces. All of the particles feel the force of gravity because it is intrinsic to all particles with energy.

I believe that there might be a missing force that describes the bonding of particles that make up an electrons, neutrinos, up quarks, and down quarks (and their related families of particles.) My guess is that the electron is not a fundamental particle, but rather that it consists of smaller particles that 'bond' to form either electrons/muons/tauons. And how the particles are bonded determine whether it's in an electron, muon, or tauon state. The tauon would be an excited electron in a similar way that 1s5 is an excited state of argon. We don't say that 1s5 is a new atom, just an excited state that will decay back to the ground state of argon. It's not clear yet to me (since I don't know how to predict the masses of the tauon/muon/electron) whether there are higher energy state available to the electron.

I believe that the same holds true for the neutrinos and quarks. I think that the neutrinos and quarks might not be fundamental particles, but rather they are made up of particles bound together by another force, which might have the symmetry of SU(4). How the particles inside of the quarks are bound together determines their energy, and hence their rest mass.

So, what can we say about such a force? My guess is that this force has the symmetry of SU(4) or U(4), which means that there would be either 15 or 16 exchange particles with this force. The mathematics associated with this next force is probably the hexagonions (also called sedenions) and has 16 axes. This mathematics is even 'weirder' than octonions because, not only do associativity and commutativity not hold, but the there is no 'normed' vector space, which means that we can't use Euclidean geometry to determine the length of a vector in this 16-D space. In hexagonion algebra, you can multiply two non-zero numbers together and obtain the zero element, which is impossible for real numbers, complex numbers, quaternions or octonions. Length has no real meaning in this sixteen dimensional vector space. And this is a major reason to think that there is no force associated with such a twisting of SU(4) space. (This is a major reason why)

The hexagonion algebra is called a non-ring division algebra, and understanding it is even more difficult than octonions. But just because it's difficult to understand, doesn't mean that we can completely ignore it. We need to understand how to predict the masses of the electron/muon/tauon, the neutrinos, and the quarks. The mathematics associated with the force holding together the particles that make up an electron or a down quark might be described by the hexagonions, and it will therefore be quite difficult to make sense of what's going on. (especially because there's no normed vector space associated with this force.)

And so this line of reasoning begs the question, are there more forces past this SU(4) force? I'm not really sure, and beyond this already unseen force, we'll have to wait to see if we can ever find the points/strings/loops that make up an electron, a neutrino or a quark. Since it's difficult/impossible to find a quark by itself, I'm guessing that we'll find the particles inside an electron first. So, are these particles composed of even smaller particles that obey a force similar to SU(5) or U(5)? We have no hope right now of determining the answer to that question, but we can speculate.

Speculations goes as follows: The number of exchange particles seems to follow a rule of n squared, (or n squared minus one) (gravity, n=0, and no exchange forces; E&M, n=1, and one exchange particle; WN, n=2, and 3 exchange particles; SN, n=3, and 8 exchange particles; S(4) force, n=4, and 15/16 exchange particles, ??, n=5, 24/25 exchange particles.) You can see a close (but not perfect) relation between the number of exchange particles and the size of Cayley-Dickinson algebras 1=real, 2=imaginary, 4=quaternions, 8=octonions; and the continuing set... 16=hexagonions, 32=trigintaduonions, .when n<5. n squared and 2 to the power n start to diverge quickly starting with n=5. As well, the Cayley-Dickinson algebras of m=32, 64, 128 start to lose even more structure associated with the algebras that we are 'familiar with.' For example, once you reach the Cayley-Dickinson algebras with at least 32 operators, you start losing the rules of associativity in addition, and therefore, I believe the it's unlikely that the higher Cayley-Dickinson algebras will have corresponding forces of nature; but just because we're not familiar with it, doesn't mean it doesn't exist.

So, while I strongly believe that we will be able to eventually predict the color, masses, and charge of quarks, electrons and neutrinos, I'm still not sure whether there are more forces beyond the four known forces. There are real reason to believe that we are missing a really strong force is that we can not predict the masses or charges of tauons/muons/electrons, neutrinos and all of the quarks, but we need to see whether this new force can predict the masses, and if not, then we can start looking into forces beyond the SU(4) force discussed above.

While the charges of the neutron, the quarks, and the electron seem way too coincidental, there are reasons to believe that electrons are point particles (rather than composite particles). For example, QED (and its ability to predict electron-photon interactions down to ~8 decimal places) assumes that electrons are point particles. But still, why do quarks carry charges of -1/3 and +2/3 (in units of the proton's charge)? We are left with the feeling that there is still something very fundamental that we don't understand about the forces of nature, the cause of the rest mass of particles, and the cause of the charge of particles.

I'd like to conclude with the following open questions:

(1) Why are there only 4 dimensions to space-time? Is this related to the fact that there are only 4 forces of nature, which is most likely due to the fact that there are only 4 normed division algebras? (Over the real numbers...gravity; over the complex numbers...E&M; over the quaternions...weak nuclear; over the octonions...strong nuclear.)

(2) Are more than 4 dimensions of space-time not possible because there are only 4 normed division algebras?

(3) Is the reason that there are 3 surface dimensions and 1 radial dimension (to our 4D sphere) due to the fact that 3 of the forces of nature are space-time reversible (gravity, E&M, and strong nuclear) and one of the forces is space-time irreversible (weak nuclear)?

## Saturday, November 13, 2010

### Humans vs. the Sun (Sun wins for entropy production & loses for complexity)

Here's a fact that shouldn't be overlooked in all of the blogs I've written.

The amount of exergy destruction (and hence increase in entropy) from life on Earth (plants, animals, etc.) is minuscule compared with that of the Sun or any other star.

We shouldn't forget that, right now, what we're doing on Earth to increase the entropy of the universe is negligible compared with the rest of the processes in the solar system not associated with life.

Life: Score of 10

Sun: Score of 1,000,000,000,00...

And I want to make this point because, while I believe that humans should try to cultivate other planets as a means of increasing the entropy of the universe, it will take a long time before we can ever match the entropy production capability of our Sun.
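A back-of-the-envelope version of this score can be computed with assumed round numbers; the luminosity and biosphere-power figures below are my assumptions, not values from the post.

```python
# Ratio of the Sun's power output to a rough estimate of the
# biosphere's total metabolic power (both numbers assumed).
SUN_LUMINOSITY_W = 3.8e26      # assumed: total solar luminosity
BIOSPHERE_POWER_W = 1.0e14     # assumed: ~100 TW of global primary production
ratio = SUN_LUMINOSITY_W / BIOSPHERE_POWER_W
print(f"Sun / life power ratio ~ {ratio:.0e}")
```

By this crude measure the Sun out-produces life on Earth by roughly twelve orders of magnitude, which is the point of the scores above.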

And therefore, none of the reductionist philosophy I've been discussing should in any way be construed to suggest that it's okay to violate human rights (to life, liberty, and the pursuit of happiness) as a means to an end (where the end is more entropy production).

If anything, I'm hoping to show that what we call life is unique and different compared with other nonlinear phenomena because of its recursive, replicating capability.

While I've discussed the possibility of self-replicating structures within the Sun (using the exergy gradient, and carbon as a catalyst), I don't think that such structures (if they exist) are anywhere as complex as the life forms we see on Earth.

Score of degree of complexity / self-reference-replication

Life on Earth: 1,000,000,000,00...

Sun: 0-5


## Thursday, November 11, 2010

### The source of exergy in the early universe

As I've discussed in earlier blogs, life requires a source of exergy in order to maintain its structure. And this source of exergy must be far from equilibrium, i.e. life can't survive solely on a linear temperature gradient. Typically, the exergy that life uses comes from chemical compositions of molecules far from their equilibrium values in the environment. For example, bacteria can convert hydrocarbons and oxygen into carbon dioxide and water because the concentration of hydrocarbons (such as in the Gulf of Mexico) is greater than would be predicted by thermodynamic equilibrium in the presence of oxygen.

So, what was the source of exergy in the early universe?

There is a lot of debate in this area because it's really difficult to obtain information about the origins of life here on Earth.

Luckily, discussions about the source of exergy in the early universe sidestep the debate over whether the Earth's first life forms started here on Earth or started elsewhere and somehow were frozen and survived the trip to Earth.

Possible sources of exergy in the early universe include: 1) sunlight; 2) chemicals from underwater vents that are out of equilibrium with the ocean; 3) chemicals produced inside of lightning bolts that are then out of equilibrium with their composition in the environment.

As for chemicals produced in lightning bolts, there have already been multiple experiments showing that complex chemicals (such as amino acids) can be formed in electrical discharges when the gases inside the discharge are similar to those of the early Earth's atmosphere (perhaps CO2, CH4, N2/NH3, H2S). The Miller–Urey experiment is just one example.

Lightning is a source of exergy far from equilibrium. Lightning on Earth today is due to a charge imbalance between the solid earth and the atmosphere. Most of this charge imbalance is due to charges carried down with precipitation. Rain and snowfall are due to the non-equilibrium composition of water vapor in the atmosphere, which is in turn due to sunlight vaporizing the oceans, so that the amount of water vapor in the atmosphere can be greater than that expected from thermal equilibrium at the temperature of the atmosphere.

In other words: sunlight (chemical/thermal exergy) -->> water vapor non-equilibrium (gravitational potential exergy) -->> charge build-up (electrochemical exergy) -->> lightning (high-temperature thermal exergy) -->> formation of molecules in amounts greater than expected at ambient conditions (chemical exergy).

Many of the non-equilibrium molecules created in the atmosphere during a lightning strike will eventually fall into the ocean. For example, some of the radical species (such as CH3) produced in the lightning strike can combine with other gas species in the atmosphere to produce molecules (like amino acids) that prefer to be in the aqueous state in the ocean.

So, if amino acids were the building blocks for early life forms, what were the food sources? What food source was the reducing agent and which food source was the oxidizing agent? (There probably wasn't oxygen in the atmosphere when life first began.) And how did early life forms create a spatial gradient in the reducing/oxidizing chemicals in order to generate work?

If there's no spatial gradient in the oxidizing and reducing agents, then there's no way to generate power in a mitochondrion (or a fuel cell, for that matter). Did the first mitochondria rely on local gradients of chemical compositions?

This leads me to one of the main questions I would like answered:

What was the first source of work? I.e. at what point in time, did a cell use stored chemical exergy (perhaps a precursor to ATP) to separate a well-mixed solution of reducing and oxidizing chemicals so that it could derive more stored chemical exergy via an electrochemical reaction across one of its membranes? Where did the ability to store chemical exergy come from?

I have yet to come across a set of differential equations that yields a solution in which chemical exergy is stored for later use to yield even more chemical exergy.

It is well known that non-linear differential equations with cubic terms have the capability of yielding solutions with limit cycles, such as Van der Pol's equation. However, life is much more complicated than just a limit cycle. Limit cycles don't have the capability of self-replication, and they most definitely don't have the capability of storing exergy for later use to derive more exergy. Neither do strange attractors.
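To make the limit-cycle point concrete, here is a minimal sketch (with assumed parameters) that integrates Van der Pol's equation, x'' - mu(1 - x^2)x' + x = 0, with a fixed-step RK4, and checks that trajectories started from very different initial conditions settle onto the same cycle:

```python
# Integrate the Van der Pol oscillator with a fixed-step RK4 and
# measure the peak |x| after the transient has died out.
def vdp_step(x, v, mu, dt):
    def f(x, v):
        return v, mu * (1 - x * x) * v - x
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return x, v

def peak_amplitude(x0, mu=0.5, dt=0.01, steps=20000):
    x, v = x0, 0.0
    for _ in range(steps):          # discard the transient
        x, v = vdp_step(x, v, mu, dt)
    amp = 0.0
    for _ in range(steps):          # record the largest |x| on the attractor
        x, v = vdp_step(x, v, mu, dt)
        amp = max(amp, abs(x))
    return amp

a_small = peak_amplitude(0.1)   # start near the unstable equilibrium
a_big = peak_amplitude(3.0)     # start well outside the cycle
print(round(a_small, 2), round(a_big, 2))
```

Both runs end up orbiting the same attractor, with a peak amplitude near 2. Nothing in that cycle stores exergy or copies itself, which is exactly the limitation being argued here.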

This leads me to the conjecture: Metabolism and replication (as opposed to other forms of nonlinear behavior, such as Rayleigh-Benard convection cells or weather patterns) require that the set of differential equations has a set of symmetry generators (X1, X2, X3,...Xn) such that the algebra describing the Xn's is powerful enough that Godel's incompleteness theorem applies to the group and its algebra.

This is just a conjecture, but what I would like to do is to change the focus of the debate on the first life forms on Earth from a discussion of particular self-replicating chemicals to a discussion of: how does the set of differential equations yield solutions (or attractors) that can yield phenomena such as metabolism and replication? (i.e. can storage of exergy or information appear from the attractors found in nonlinear differential equations with sources of exergy?)

It appears to me that there must be solutions of differential equations that are even more complex than limit cycles or strange attractors. In a previous blog ("Meaning of Life..."), I suggested that the set of differential equations must have enough symmetry generators that the algebra describing the group is powerful enough that Godel's incompleteness theorem applies. If it is powerful enough, then there are forms of recursion, such that the operators/generators can map back into themselves. It's the attractors of the differential equations that are replicating...not just the molecules. And this is why I think that it's more important to find the differential equations that produce self-replicating attractors than it is to find the actual chemical species involved in replication. This way, we can solve the chicken-and-the-egg paradox of self-replicating molecules. What you need is for the differential equations to contain self-replicating attractors (and this requires a source of exergy, the self-replicating chemical species, and perhaps some other ingredients).

(I suspect that the nonlinear differential equations that have such a self-replicating attractor cannot be solved in closed form, i.e. they aren't integrable.)

So, the following questions remain in my mind:

What was the source of exergy in the early Earth's history?

Can life be described by a set of self-replicating attractors within nonlinear differential equations?


## Wednesday, November 3, 2010

### Electricity backed currency: The new gold standard

I've mentioned in previous posts that I believe that the US currency should be grounded in something real that has value.

Before 1971, our currency was tied to an amount of gold, which seems silly right now because you can't do anything with gold, other than wear it and look cool.

Since 1971, we've been living in a world in which our currency isn't grounded. The Federal Reserve can pretty much just print money whenever it wants. Though, the people at the Fed should have a healthy amount of fear of being pitchforked by the masses if they were to print money to the extreme.

There's a certain amount of unease that I feel because the printing of money isn't tied to anything that's grounded.

This has made a lot of other people uneasy as well.

For example, check out:

http://www.energybackedmoney.com/

This website mentions that famous engineers, such as Henry Ford and Thomas Edison, were in favor of backing the US currency with energy, such as electricity. Though, it seems as if the idea of energy-backed currency has never really gotten off the ground, kind of like how a lot of people question who wrote the plays attributed to Shakespeare, but none of us are picketing bookstores demanding that the true author's name be placed on the books. (Though, after I wrote this post, I found a link to a new movie called Anonymous on this topic. I'm definitely going to have to see this movie, even though I'm already ~95% sure that Edward De Vere wrote 'Hamlet.')

The "Energy Backed Money" website does a good job of going through the details of the plan, so I suggest you read through it if this is a topic that interests you. Though, here's a quick summary:

The government would maintain the price of electricity within a certain range (such as 12 to 15 cents per kilowatt-hour). It would also guarantee that its currency could always be exchanged for a certain amount of electricity (such as 20 cents per kilowatt-hour). If the economy is doing well and the price of electricity starts dropping below $0.12/kWh, then the government can do one of two things to maintain the average price of electricity: either lower taxes and print money to make up for the loss of income, or increase government spending.

If, however, the economy is not growing, and is in fact contracting, then the government has to keep the price of electricity at $0.12/kWh even though the price is starting to increase in this contracting economy. To do this, the government has to take money out of the economy, either by increasing taxes or by reducing government spending.
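The band mechanism described above can be summarized as a simple control rule. This is a sketch of my reading of the proposal, using the band values quoted above:

```python
# Price-band rule for an electricity-backed currency: add money when
# electricity gets too cheap, remove money when it gets too expensive.
PRICE_FLOOR = 0.12    # $/kWh, lower edge of the band from the post
PRICE_CEILING = 0.15  # $/kWh, upper edge of the band

def monetary_action(price_per_kwh):
    if price_per_kwh < PRICE_FLOOR:
        return "expand"    # print money: lower taxes or spend more
    if price_per_kwh > PRICE_CEILING:
        return "contract"  # remove money: raise taxes or cut spending
    return "hold"

print(monetary_action(0.10), monetary_action(0.13), monetary_action(0.17))
```

Note that the rule leans against the price in both directions: cheap electricity (a growing economy) licenses money creation, while expensive electricity (a contracting economy) forces money removal.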

Notice how this is exactly the opposite of Keynesian economics. According to Keynes, the government should spend money during a recession and reduce spending during an economic boom. This is exactly the opposite of what we should be doing.

Stimulus spending is therefore the worst thing you can do during a recession. Instead, the government needs to remove money from the economy, because there's actually less electricity available: either it became more difficult to make, or the electricity was being wasted too much. Keynesian economics tells us to waste the electricity even more, but this is silly. If electricity (or gasoline, natural gas, etc.) is more expensive, then we need to cut back on our use of it until there's a technological breakthrough that lowers the price of electricity.

Notice also that the idea of lowering taxes during a recession is equally stupid. If you lower taxes, then you will go into debt, which will force you to print money, but then the electricity prices just keep on going up, and the recession continues. Here's an analogy I created.

Imagine that you're the sound man (a roadie) for a famous band. You notice that some feedback is developing between the singer and the microphone. So, you turn one of your knobs in order to increase the amount of negative feedback.

But it turns out that you're actually turning the knob for positive feedback, and a horrible screeching sound is heard throughout the venue. At first, you think that you didn't turn the knob far enough, so you turn it some more...but now the noise is even worse. The singer is now nowhere near the microphone, but the noise is still there. Eventually, you momentarily kill the power to the stage and let the singer back to the microphone, but it starts up again.

We didn't realize that we'd been turning the wrong knob. We've been turning the positive feedback dial instead of the negative feedback dial!
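The knob analogy comes down to the sign of a feedback gain. A toy iteration (with arbitrary gains of my choosing) shows the difference:

```python
# With a negative gain a disturbance dies out; with a positive gain
# the same disturbance grows without bound.
def run_feedback(gain, steps=20, x=1.0):
    for _ in range(steps):
        x += gain * x   # x' = gain * x, explicit Euler with dt = 1
    return x

damped = run_feedback(-0.5)   # negative feedback: decays toward zero
runaway = run_feedback(+0.5)  # positive feedback: blows up
print(damped, runaway)
```

After twenty steps the damped run has shrunk to nearly nothing while the runaway one has grown by a factor of thousands; that asymmetry is the screech in the analogy.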

Right now, both political parties are living in a dream world. The solution to the recession is neither tax cuts nor wasteful stimulus.

The solution to the current recession is some combination of 1) reduced government spending, 2) waiting for a technological breakthrough, and 3) selling government capital. All the while, we can't print more money during a recession. Printing money during a recession is like turning the positive feedback knob.

Electricity backed currency is what holds the feet of the politicians and the members of the Federal Reserve to the fire. They can only print money when the economy is growing. And if the economy is shrinking, then they have to raise taxes or reduce government spending.

Because politicians don't like having to either raise taxes or reduce government spending, this forces them to keep the economy growing. And I believe that the government has a large role in keeping the economy growing, such as investing in cheap sources of energy, ensuring a national defense, maintaining a strong legal system, enforcing patent rights, enforcing pollution laws, and maintaining critical infrastructure.

So, what I'm interested in is: how do we get to the point at which electricity backed currency is actually a possibility?

Right now, the problems are: 1) electricity is not a freely exchangeable item; 2) the cost of electricity differs from state to state; 3) it's difficult to store in large quantities right now.

I think that the main goal of the Department of Energy should be to address the problems that are keeping us from realizing an electricity backed currency.

We need to:

1) Figure out how people can get paid for selling electricity, and vice versa, figure out how people can buy electricity even if they aren't at home. This needs to happen soon because I want to have an electric car, and I expect to pay people for the electricity used to charge the car when I'm away from home. I also expect to be able to sell the electricity in the car's battery if there's a brown-out in the city.

2) Find cheap ways to connect the West Coast's and East Coast's electricity grids (and Texas's grid). We need to eliminate the price difference between the different parts of the country. Perhaps, the price of superconducting wires will drop to the point at which we can lay these wires from one coast to the next.

3) Invest in electricity storage technologies that match the scale of its use. I mean large scale energy storage, like what we do with natural gas for the winter or for gasoline at the Strategic Petroleum Reserve.

So, in summary, in a recession:

Don't increase government spending

Don't decrease taxes (unless you more than compensate for it with decreased spending)

Don't increase the size of the petroleum reserve

Do cut wasteful government spending

Do sell government rights to oil/natural gas/minerals/coal

Do sell un-needed reserves of gold and silver

John M. Keynes had it completely wrong, and so do most politicians. We've been turning the wrong knob this whole time. We've been increasing positive feedback, all the while thinking that we were turning the negative feedback knob.

Before 1971, our currency was tied to an amount of gold, which seems silly right now because you can't do anything with gold, other than wear it and look cool.

Since 1971, we've been living in a world in which our currency isn't grounded. The Federal Reserve can pretty much just print money whenever it wants. Though, the people at the Fed should have a healthy amount of fear of being pitchforked by the masses if they were to print money to the extreme.

There's a certain amount of unease that I feel because the printing of money isn't tied to anything that's grounded.

This has made a lot of other people uneasy as well.

For example, check out:

http://www.energybackedmoney.com/

This website mentions that famous engineers, such as Henry Ford and Thomas Edison, were in favor of backing the US currency in energy, such as electricity. Though, it seems as if the idea of energy backed currency has never really gotten off the ground, kind of like how a lot of people question who wrote the plays that are attributed to Shakespeare, but none of us are picketing bookstores demanding that the true author's name be placed on the books. (Though, after I wrote this post, I found a link to a new movie called Anonymous on this topic. I'm definitely going to have to see this movie, even though I'm already ~95% sure that Edward De Vere wrote 'Hamlet.')

The "Energy Backed Money" website does a good job of going through the details of the plan, so I suggest you read through it if this is a topic that interests you. Though, here's a quick summary:

The government would maintain the price of electricity between a certain range (such as 12 to 15 cents per kilawatt-hour). It would also guarantee that its currency could always be exchanged for a certain amount of electricity (such as 20 cents per kilawatt*hour.) If the economy is doing well and the price of electricity starts dropping below $0.12/kWh, then the government can do one of two things to maintain the average price of electricity: either lower taxes and print money to make up for the loss of income, or increase the size of government spending.

If, however, the economy is not growing, and in fact contracting, then the government has to keep the price of electricity at $0.12/kW-hr even though the price is starting to increase in this contracting society. To do this, the government has to take money out of the economy, either by increasing taxes or reducing government spending.

Notice how this is exactly the opposite of Keynesian economics. According to Keynes, the government should spend money during a recession and reducing spending during a economic boom. This is exactly the opposite of what we should be doing.

Stimulus spending during a recession is therefore the worse thing you can do during a recession. Instead, the government needs to remove money from the economy because there's actually less electricity available because it became either: more difficult to make it or the electricity was being wasted too much. Keysenian economics tells us to waste the electricity even more, but this is silly. If electricity (or gasoline, natural gas, etc.) is more expensive, then we need to be cutting back in our use of it until there's a technological breakthrough that lowers the price of electricity.

Notice also that the idea of lowering taxes during a recession is equally stupid. If you lower taxes, then you will go into debt, which will force you to print money, but then the electricity prices just keep on going up, and the recession continues. Here's an analogy I created.

Imagine that you're the sound man (a roadie) for a famous band. You notice that some feedback is developing between the singer and the microphone. So, you turn one of your knobs in order to increase the amount of negative feedback.

But it turns out, that you're actually turning the knob for positive feedback, and a horrible screeching sound it heard through out the venue. At first, you think that you didn't turn the knob far enough, so you turn it some more...but now the noise is even worse. The singer is now no longer anywhere near the microphone, but the noise is still there. Eventually, you momentarily kill the power to the stage and let the singer back to the microphone, but it starts up again.

We didn't realize that we've been turning the wrong knob. We've been turning the positive feedback dial instead of the negative feedback dial!

Right now, both political parties are living in a dream world. The solution to solving the recession is neither tax cuts or wasteful stimulus.

The solution to the current recession is some combination of 1) reduced government spending, 2) waiting for a technological break-through, and 3) selling government capital. All the while, we can't print more money during a recession. Printing money during a recession is like turning the positive feedback knob.

Electricity backed currency is what keeps the fire to the feet of the politicians and members of the Federal Reserve. They can only print money when the economy is growing. And if the economy is shrinking, then they have to raise taxes or reduce government spending.

Because politicians don't like having to either raise taxes or reduce government spending, then it forces them to keep the economy growing. And I believe that the government has a large role in keeping the economy growing, such as investing in cheap sources of energy, ensuring a national defense, maintaining a strong legal system, enforcing patent rights, enforcing pollutions laws, and maintaining critical infrastructure.

So, what I'm interested in is: how do we get to the point at which electricity backed currency is actually a possibility?

Right now, the problems are: 1) electricity is not a freely exchangeable item; 2) the cost of electricity is different from state-to-state; 3) it's difficult to store large quantities of it right now.

I think that the main goal of the Department of Energy should be to address the problems that are keeping us from realizing an electricity backed currency.

We need to:

1) Figure out how people can get paid for selling electricity, and vice versa, figure out how people can buy electricity even if they aren't at home. This needs to happen soon because I want to have an electric car, and I expect to pay people for using electricity to charge the car when I'm away from home. I also expect to be able to sell that electricity in the car's battery if there's a brown-out in the city.

2) Find cheap ways to connect the West Coast's and East Coast's electricity grids (and Texas's grid). We need to eliminate the price difference between the different parts of the country. Perhaps, the price of superconducting wires will drop to the point at which we can lay these wires from one coast to the next.

3) Invest in electricity storage technologies that match the scale of its use. I mean large-scale energy storage, like what we do with natural gas for the winter or with crude oil at the Strategic Petroleum Reserve.

So, in summary, in a recession:

Don't increase government spending

Don't decrease taxes (unless you more than compensate for it with decreased spending)

Don't increase the size of the petroleum reserve

Do cut wasteful government spending

Do sell government rights to oil/natural gas/minerals/coal

Do sell un-needed reserves of gold and silver

John Maynard Keynes had it completely wrong, and so do most politicians. We've been turning the wrong knob this whole time. We've been increasing positive feedback while thinking that we were turning the negative feedback knob.
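
The knob analogy can be made concrete with a toy simulation; the gain values below are arbitrary and purely illustrative, not an economic model:

```python
# Toy illustration of the feedback-knob analogy (all numbers arbitrary).
# x is the deviation of output from trend; positive feedback amplifies
# deviations, while negative feedback damps them back toward zero.

def simulate(gain, steps=10, shock=-1.0):
    x = shock  # start in a recession (output below trend)
    path = [x]
    for _ in range(steps):
        x = x + gain * x  # gain > 0: positive feedback; gain < 0: negative
        path.append(x)
    return path

pos = simulate(gain=0.3)   # deviation keeps growing away from trend
neg = simulate(gain=-0.3)  # deviation decays back toward trend
```

Turning the wrong knob here means applying a policy with `gain > 0` while believing it is stabilizing.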

## Saturday, October 30, 2010

### Waste to Energy -- Many different routes, most are economic

I'm shocked that we still send about 55% of the roughly 4.4 lbs of garbage each of us generates per day here in the US to landfills. Roughly 33% is recycled and only 12% is combusted. Of the 12% that is combusted, only some of the heat released is captured at the 87 waste-to-energy plants in the US. These plants generate only 2.7 gigawatts of electricity, which is only about 0.4% of the total average power generation rate in the US.
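
As a sanity check on the 2.7 GW figure, here is a back-of-envelope estimate; the total tonnage, heating value, and conversion efficiency are my own rough assumptions, not sourced data:

```python
# Back-of-envelope check on the ~2.7 GW waste-to-energy figure.
# The tonnage, heating value, and efficiency are rough assumptions.
msw_tons_per_year = 250e6      # assumed total US municipal solid waste (short tons)
combusted_fraction = 0.12      # from the text
heating_value_j_per_kg = 10e6  # ~10 MJ/kg, a typical rough value for MSW
electric_efficiency = 0.25     # assumed steam-cycle conversion efficiency

kg_combusted = msw_tons_per_year * combusted_fraction * 907.18  # short tons -> kg
thermal_w = kg_combusted * heating_value_j_per_kg / (365 * 24 * 3600)
electric_gw = thermal_w * electric_efficiency / 1e9
# electric_gw lands at a couple of GW, the same order as the quoted 2.7 GW
```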

So what's keeping us back from generating more electricity from municipal solid wastes?

We need electricity, and we will need even more of it in the future (to drive cars, to power robots, and to stream HD everything.)

There is understandably some concern with incinerating trash: toxic byproducts can form whenever hydrocarbons are partially combusted with chlorine species present. The chlorine comes from PVC wastes, such as some clothing, pipes, and portable electronics.

When the waste is combusted at too low a temperature, there is a chance of forming dioxins. Dioxins can cause significant harm to the body, affecting both physical appearance and the nervous system. Dioxins can also bio-accumulate, so unfortunately we can ingest them via the food we eat.

So, how can we generate electricity from waste without generating dioxins (or other forms of air pollution, such as particulates, heavy metal vapors, NOx, and SOx)?

First, we could combust the waste and then inject the gases underground, as is proposed for coal power plants. Or we could figure out how to remove all of the pollution from the gas stream. Luckily, most of the pollutant gases are acidic, and so they can be captured by flowing the flue gas through a basic solution, such as a mixture of water and limestone/sodium hydroxide.

There are also ways of minimizing the generation of certain pollutants. For example, if the waste is combusted in an environment of pure oxygen (rather than air), then the production of NOx can be eliminated. Combustion in a pure oxygen stream also makes it easier to reach temperatures high enough that dioxins will not form: above 1200 C and in the presence of significant amounts of oxygen, dioxin production is negligible.

Another option is to add limestone (or sodium hydroxide) directly into the combustion environment, in which case the calcium or sodium can capture the chlorine, keeping it from entering the flue gas stream in the first place.
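
As a rough illustration of how much base this takes, here is the stoichiometry of capturing PVC-derived HCl with sodium hydroxide (HCl + NaOH -> NaCl + H2O); capture efficiency and side reactions are ignored:

```python
# Rough stoichiometry for capturing chlorine released from PVC as NaCl.
# Assumes every chlorine atom leaves as HCl and is neutralized 1:1 by NaOH.
M_vinyl_chloride = 62.5   # g/mol, the PVC repeat unit C2H3Cl
M_Cl = 35.45              # g/mol
M_NaOH = 40.0             # g/mol

cl_mass_fraction = M_Cl / M_vinyl_chloride        # ~57% of PVC by mass is Cl
mol_hcl_per_kg_pvc = 1000 * cl_mass_fraction / M_Cl
naoh_g_per_kg_pvc = mol_hcl_per_kg_pvc * M_NaOH   # ~640 g NaOH per kg PVC
```

So chlorine capture is a significant reagent cost per kilogram of PVC in the waste stream, which is part of why limestone (much cheaper than NaOH) is attractive.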

So, why is waste incineration not taking off in the US?

Right now, I think that there is a fear of building power plants that could cause any environmental damage. It's not completely a question of economics.

For example, NYC is paying on the order of $90 per ton of waste generated in the Big Apple. At this tipping fee, and at an electricity price over $0.15/kWh, any of the processes described above makes sense economically, even sequestering the flue gas miles underground.
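
A quick, hedged estimate of the per-ton economics, using the tipping fee and electricity price above plus an assumed heating value and conversion efficiency:

```python
# Rough revenue per ton of MSW for a waste-to-energy plant.
# The heating value and efficiency are assumptions; the tipping fee
# and electricity price come from the numbers quoted above.
tipping_fee = 90.0                                 # $/ton, roughly what NYC pays
heating_value_kwh_per_ton = 10e9 / 3.6e6 * 0.907   # ~10 GJ/tonne thermal, per short ton
electric_efficiency = 0.20                          # assumed
electricity_price = 0.15                            # $/kWh, from the text

power_revenue = heating_value_kwh_per_ton * electric_efficiency * electricity_price
total_per_ton = tipping_fee + power_revenue  # roughly $90 + ~$75 per ton
```

Even before recovering metals from the ash, the tipping fee alone is comparable to the electricity revenue, which is why the economics look workable.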

We need to get past the reflex of fearing every new plant just because of environmental damage in the past. We have to weigh the possibility of environmental damage due to incineration against the very real environmental damage due to landfilling, such as contamination of groundwater and aquifers by leakage, or the harboring of diseases.

We need to overcome this fear quickly because there are some major advantages to incinerating waste, such as recovering precious metals left over in the ash. The price of precious metals is increasing, and will continue to increase as demand grows and supply shrinks (especially if China decreases its exports of precious metals.)

So, I'll continue in another blog to go over many of the different routes and the estimates of the economic viability of the processes.

## Sunday, October 24, 2010

### On Recycling

I've been a huge fan of recycling 'garbage' since I was a kid, making money recycling newspapers and aluminum cans. I still recycle cans, bottles, and newspapers today, even though I don't get paid any longer.

Where I currently live doesn't even have a collection site, so I have to take them to the local university that recycles.

From a traditional economic point of view, what I'm doing right now can not be justified, but it goes along with my last blog regarding humans as "altruistic punishers." We don't like people who are gaming the system.

Many of us hate the idea of throwing away items that could be recycled because we think in terms of cycles (one person's waste is another person's gold.)

For living creatures, one creature's decaying carcass is another's food source.

But on the other hand, the goal is to build wealth (so as to increase the entropy of the universe, i.e. bring the universe to equilibrium at a faster rate.)

If recycling a bottle consumes more exergy (wealth) and creates more pollution than making the bottle from scratch, then there is a good reason not to recycle the bottle.
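
Whether recycling wins on exergy depends heavily on the material. As a crude illustration (the aluminum figures are commonly cited approximations; the glass figures are my own assumptions):

```python
# Crude comparison of primary vs recycled energy input per material.
# Aluminum numbers are approximate commonly cited figures; glass
# numbers are rough assumptions for illustration only.
energy_mj_per_kg = {
    "aluminum_primary": 170.0,   # smelting from bauxite, rough value
    "aluminum_recycled": 10.0,   # remelting scrap (large saving)
    "glass_primary": 15.0,       # assumed
    "glass_recycled": 13.0,      # assumed: melting dominates either way
}

def saving_fraction(material):
    """Fraction of primary energy input avoided by recycling."""
    p = energy_mj_per_kg[material + "_primary"]
    r = energy_mj_per_kg[material + "_recycled"]
    return (p - r) / p

# Aluminum saves most of its exergy input; glass saves relatively little,
# so collection and transport costs can easily swamp the benefit.
```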

We have to use our heads and really determine when it makes sense to recycle and when it doesn't make sense.

For me, I'm more apt to recycle, if only for the fact that while I'm recycling I'm thinking about ways to build recycling power plants (for converting plastic bottles and other hard-to-recycle products into electricity) in ways that don't do more damage to the environment than the landfill they would otherwise go into.

Some plastics (notably PVC) contain chlorine atoms. In partially oxidizing environments, chlorine-containing carcinogens can form. We should be looking into ways of combusting or gasifying municipal solid waste with basic components (such as sodium hydroxide) that can grab the chlorine and keep it out of the gas phase. I think that companies like Wheelabrator use limestone as a base to react with the acidic chlorine species. (Though, I'm not positive about this.)

Whether the power plant uses combustion or partial oxidation (i.e. gasification), I'm in favor of expanding waste-to-electricity in the US as long as there is a positive return on investment.

### The Origin of Wealth

I've been reading the book "The Origin of Wealth" by Eric D. Beinhocker.

In general, I find it to be a great read.

I'm really excited that there are economists out there who are trying to actually understand how humans build economies. The author makes a strong case that dropping assumptions of perfect rationality is a must in any economic theory.

I also like seeing that economists are now using computer games (simulations) to predict general economic trends. While the computer games must eventually be replaced with mathematical formulas, I think that the "Sugarscape" simulations do a great job of helping us see the world anew, and without that "seeing the world anew" we don't know where to start in coming up with equations that closely approximate human behavior.

I love his chapter on behavioral economics, in which he reminds us that Adam Smith wrote "The Theory of Moral Sentiments" before "The Wealth of Nations" and that Adam Smith did realize the complexity of human behaviors. We aren't just selfish. We are also social creatures. We are sometimes "altruistic punishers", i.e. we will go out of our way to punish those who we think are gaming the system or free-riding, even if it's not in our economic self-interest.

I also love the chapter on game theory. There's a graph on page 232 that shows which "game theory strategy" dominates versus time if the "game theory strategy" can evolve.

The strategies start simple (such as "always trust the opponent" or "always distrust the opponent"), but they evolve to higher levels (such as "start off by trusting, but if the opponent screws you, then screw them back.")

Eventually, they evolve even further, to "start off by trusting, but if the opponent screws you, then screw them back, but then watch them for 'N' moves to see if they go back to being nice."

This is how the simulations worked out. There was no design for this to happen; it just happened, provided the equations allow for differentiation, selection, and amplification. I think that we should learn from this example, but we should also remember that there is no optimal solution to the game theory problem. Ultimately, there's no way to know the optimal strategy because the rules of the game are continuously changing; the Earth is not stagnant. There is no right political philosophy. One philosophy may be better (on average) during certain times, but others may be better (on average) during other times throughout history. But since there's no way to test all political philosophies at the same time, there's no way to argue that one philosophy is better than another. All we can say is that right now, a particular philosophy has more adherents than another philosophy.
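
The strategies described above are easy to sketch as an iterated prisoner's dilemma. This is a minimal toy version with the standard payoff matrix, not a reproduction of the simulations in the book:

```python
# Minimal iterated prisoner's dilemma illustrating the strategies the
# chapter describes: always cooperate, always defect, and tit-for-tat
# ("start off by trusting, but screw back if screwed").

PAYOFF = {  # (my_move, their_move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def all_c(history_self, history_other): return "C"
def all_d(history_self, history_other): return "D"
def tit_for_tat(history_self, history_other):
    # Trust on the first move, then mirror the opponent's last move.
    return history_other[-1] if history_other else "C"

def play(strat_a, strat_b, rounds=100):
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(ha, hb), strat_b(hb, ha)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        ha.append(a); hb.append(b)
    return score_a, score_b
```

Two tit-for-tat players cooperate throughout, while tit-for-tat loses only the opening round to an always-defector; that mix of niceness and retaliation is why such strategies tend to dominate as the population evolves.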

I think that Eric Beinhocker gets most things right, but he's a little off with his idea of "Fit Order." He states on pg 316, "All wealth is created by thermodynamically irreversible, entropy-lowering processes. The act of creating wealth is an act of creating order, but not all order is wealth creating."

He seems to miss the fact that the point of life is to increase the entropy of the universe. The meaning of "order" is confusing, and therefore I personally try to stay away from it. However, entropy is a well-defined term (both for systems in equilibrium and out of equilibrium.)

The structure we see due to life is due to the structure & symmetry inside of the equations of nonlinear, non-equilibrium thermodynamics.

Wealth, as I understand it, is related to the capability to do work (measured in kJ). It is related to the fact that life can store available work (such as chemical exergy) to overcome activation barriers that would take too long to overcome without the stored work, and in the end, the entropy of the universe increases faster than it would have without the life structure. The structure (or fit order, as Eric calls it) is the means to the end, not an end in itself.
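
For concreteness, the standard thermodynamic definition of flow exergy relative to a dead state at temperature $T_0$ and pressure $p_0$ (neglecting kinetic, potential, and chemical terms) is:

```latex
% Specific flow exergy of a stream relative to the dead state (T_0, p_0),
% with h the specific enthalpy and s the specific entropy:
b = (h - h_0) - T_0\,(s - s_0)
```

Unlike "fit order," this quantity is measurable in kJ/kg once the dead state is fixed, which is exactly the property a definition of wealth needs.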

When you define wealth in terms of exergy, you can avoid the problems in Eric Beinhocker's definition of wealth, and you can avoid this idea of fit order (neither "fit" nor "order" is measurable.)

So, in conclusion, I'm a big fan of the book, but I have a problem with his definition of wealth as "fit order."

## Monday, October 18, 2010

### Fermion vs. Boson & Irreversibility

Ever wonder why superfluids can move without friction?

Ever wonder why electrons can move in superconductors without resistance?

How can bosons (like superfluid helium or electron pairs) seemingly avoid generating an increase in entropy?

I'm not saying that there's a violation of the 2nd Law here (it's not like entropy is decreasing.) It's more like it's staying flat rather than increasing.

Does non-equilibrium thermodynamics not apply to Bose-Einstein condensates? And if so, does this somehow imply that there would be no increase in entropy if all the particles in the universe were B-E condensate?

Irreversibility clearly happens in Fermi-Dirac "condensates" such as metals and in Boltzmann gases, so I'm not sure why a B-E condensate is special.

Is it just that irreversible processes happen much more slowly? Or do irreversible processes not happen at all?

Clearly, I don't understand something. So, if you have any answers to the question (how can a B-E condensate move without generating an increase in entropy?), please let me know in the comments section.

Thanks

Eddie

(post note: check out the following post, where I argue that the difference between bosons and fermions in their ability to generate entropy may be related to the fact that the weak nuclear force is not time symmetric and couples only to left-handed particles (or right-handed anti-particles). This suggests that electrons that form a pair might not interact via the weak nuclear force, and hence might not generate entropy.)

### Life inside the Sun, cont.

I've thought a little bit more about my blog on "Life inside the Sun."

While the inside of the Sun is far-from equilibrium, I'm not so certain that this is enough for the possibility of life.

As mentioned in the blog "The meaning of life...", I now believe that life involves certain symmetries of nonlinear differential equations that are at least as complicated as the group A5.

The cells in our bodies rely on electrochemical reactions to convert sugars into work. This means that there is an actual gradient in the chemical potential of species like protons (or other ions that can be transported through membranes.)

Where does this physical gradient (with respect to a space dimension like x) come from?

While there's certainly a gradient in the radial direction, how could a life form convert the gradient in "nuclear potential" into work? How could it store work for later use?

I mentioned that there are catalysts inside the Sun (such as carbon), but I'm not sure that catalysts and a non-equilibrium source of exergy are enough for life to form. I think that life needs something more, and that the missing ingredient may be symmetries that can replicate.

I think that the key to understanding life will be to understand how symmetries of differential equations can replicate. I'm not totally sure what it even means, but as I search through the literature I will continue to write about what I find.

## Sunday, October 17, 2010

### Disproving a Maximum or Minimum Production Rate of Entropy

I recently read a paper from 1965 in which the authors proved that there is no general variational principle that maximizes or minimizes the production rate of entropy. (Gage et al. 1965, "The Non-Existence of a General Thermokinetic Variational Principle.")

I think that this is an important statement.

Around the same time in the 1960s, there were "proofs" for both maximization and minimization of the entropy production rate, dS(irr)/dt. While either of these cases can be true near equilibrium given certain constraints, these "principles" are not true in general.

So, I just want to follow up on the previous blog by stating that, while the entropy of the universe is increasing, it is not necessarily following the "fastest possible route." There is no way to predict the future, so we can never know whether a local increase in the entropy production rate might ultimately cause a large slowdown in the global entropy production rate.

What if a large explosion was set off locally? (this would cause a rapid increase in the entropy production rate.) However, if this explosion were due to a nuclear weapon, then it could cause a global problem for all life, which would cause the entropy production rate to decrease.

And here's the ultimate problem with "principles of max/min entropy production rate": the principle is not valid far from equilibrium under non-steady-state conditions, but that's exactly where we live. We live on a planet that is far from equilibrium (if equilibrium is taken to be the ~3 kelvin vacuum that is most of space, and could ultimately be all of space.)

So, in conclusion, be careful if you ever run into a paper that claims to prove that the entropy of the universe is increasing at its maximum (or minimum) possible rate.

All we can say is that the entropy production rate on the Earth is faster than if life were not here. We can not prove that the actions we are taking this very moment are in fact causing the fastest possible rate of entropy production.

## Monday, October 11, 2010

### The meaning of life...Increasing the entropy of the universe

Okay, here's my train of logic for the meaning of life. It's quite long and rambles at times, but I think that the end result is valid from the starting assumptions. I've broken it down into "Conclusions", "Assumptions", and "Line of Reasoning."

Let me know what you think.

Conclusion: Life is a means of increasing the entropy of the universe. Life is a result of the fact that the equations of dynamics are non-linear, allow for self-replicating structures, and that the starting conditions of the universe are non-equilibrium. The goal of life is to bring the universe to equilibrium at a faster rate than if the equations of dynamics did not allow for life.

Therefore, we as living beings should be trying to increase the entropy of the universe. This means converting as much exergy (such as sunlight) into low-grade energy as possible. There are other gradients of exergy that we can take advantage of as well (such as gradients in thermal energy, chemical potential and nuclear potential.) The means to do so is storing "information" (i.e. available electrical/mechanical work) so as to build devices that generate even more entropy. Biologist Stuart Kauffman makes a related point in "Reinventing the Sacred."

There is a balance between using and storing available work (electrical or mechanical). Unfortunately, there is no way to determine what is the optimal balance between storing and using work that will bring the universe to equilibrium at the fastest rate. (i.e. there is no way to predict the fastest route to equilibrium because we can not calculate far enough into the future to determine which route is the fastest to equilibrium.) So, how does life determine which route to take?

It uses neural nets (seeded with information from past attempts) to estimate which route will bring the system to equilibrium the fastest. But there's no guarantee that it's the best route, just as there's no guarantee that a neural net's answer to the traveling salesman problem is the optimal solution.
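To make the no-guarantee point concrete, here is a small sketch of my own (not from the text, and using a greedy nearest-neighbor heuristic rather than a neural net) on a hypothetical four-city instance. The heuristic is fast, but on this instance it returns a measurably longer tour than the exact optimum:

```python
import itertools
import math

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor(points):
    """Greedy heuristic: always hop to the closest unvisited city."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def brute_force(points):
    """Exact optimum by checking every permutation (tiny inputs only)."""
    cities = range(1, len(points))
    best = min(itertools.permutations(cities),
               key=lambda p: tour_length(points, (0,) + p))
    return [0] + list(best)

cities = [(0, 0), (1, 0), (3, 0), (1, 1)]
greedy = nearest_neighbor(cities)
exact = brute_force(cities)
print(round(tour_length(cities, greedy), 3))  # 7.236 (greedy gets trapped)
print(round(tour_length(cities, exact), 3))   # 6.65  (true optimum)
```

The greedy rule locally looks best at every step yet still loses globally, which is exactly the situation of a life form estimating the fastest route to equilibrium.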

Restated: Life is a means of increasing the entropy of the universe and bringing the universe to a state of equilibrium at a faster rate than without life.

Assumptions:

1) Entropy increases due to collisions between particles, because the forces of nature are not time-reversible.

2) The universe started in a state of non-equilibrium.

3) The future cannot be predicted because of the extreme non-linearity of the governing equations.

4) The dynamic equations of systems are highly non-linear and allow for self-replicating structures.

5) The self-replicating attractors found in the dynamic equations have a two-fold effect: a) inability to predict the future, and b) ability to store both work and "information." (This self-replicating nature only occurs for systems far from equilibrium.)

Line of Reasoning:

Entropy is the number of microstates available to a given macrostate. Entropy defined this way is only meaningful for large numbers of particles, because as N grows large (say, greater than 100,000), the macrostate with the most microstates ends up being essentially the only macrostate with any appreciable probability of occurring. Another way of stating this is to ask: what fraction of the volume of an N-dimensional sphere lies within the last dx of its radius? As N becomes greater than 100,000, almost all of the volume of the N-D sphere is located at its edge.
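The thin-shell claim is easy to check numerically: since the volume of an N-dimensional ball scales as r^N, the fraction of its volume in an outer shell of thickness dx is 1 - (1 - dx/R)^N. A minimal sketch:

```python
def outer_shell_fraction(n_dims, shell_thickness, radius=1.0):
    """Fraction of an n-dimensional ball's volume lying in its outer shell.

    The inner ball of radius (R - dx) holds (1 - dx/R)**n of the total
    volume, because n-ball volume scales as r**n.
    """
    return 1.0 - (1.0 - shell_thickness / radius) ** n_dims

# Even a shell only 0.01% of the radius thick holds nearly all the
# volume once the dimension is large.
for n in (3, 100, 100_000):
    print(n, outer_shell_fraction(n, 0.0001))
```

For n = 3 the outer 0.01% shell holds a negligible sliver of the volume; for n = 100,000 it holds more than 99.99% of it, which is the geometric picture behind the dominance of the most-probable macrostate.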

For the universe, the macrostate is defined by the total energy and total momentum, which are conserved over time.

Assuming that this universe started with a Big Bang (i.e. all of the energy localized in one location), then this represents a state of low entropy. Even though the temperature would have been very high...and I mean almost unimaginably high, the energy would have been confined to a small region of space. There would not have been many microstates available compared with the microstates available today.

There existed a large gradient in energy at the start of the universe, between the location of energy and the rest of the open space in the universe. Diffusion of energy from a region of high energy to low energy would have started immediately.

It can be shown that entropy is defined both for systems in equilibrium and for systems not in equilibrium. (See p. 71, eq. 6.4 of Grandy's "Entropy and the Time Evolution of Macroscopic Systems.") Since entropy is defined as the number of microstates of the macrostate with the most microstates, it is a unitless variable. (Note that you can give entropy dimensions by multiplying by k or R. The unitless definition is convenient because it means that entropy is relativistically invariant.)

The universe will always be in the given macrostate with the highest entropy because the number of microstates in the given macrostate is so large compared with neighboring macrostates.
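This dominance can be illustrated by direct counting on a toy system of my own choosing (not from the text): N particles that can each sit on the left or right half of a box. The fraction of all 2^N microstates whose left-side count lies within 1% of an even split grows toward one as N grows:

```python
from math import comb

def fraction_near_half(n_particles, window=0.01):
    """Fraction of all 2**n microstates whose left-side particle count is
    within `window` (as a fraction of n) of exactly half."""
    lo = int(n_particles * (0.5 - window))
    hi = int(n_particles * (0.5 + window))
    near = sum(comb(n_particles, k) for k in range(lo, hi + 1))
    return near / 2 ** n_particles

# The near-half macrostates swallow essentially all microstates as n grows.
for n in (100, 1000, 10_000):
    print(n, fraction_near_half(n))
```

At N = 100 the near-half band holds only a modest fraction of the microstates; by N = 10,000 it holds the overwhelming majority, and at N comparable to Avogadro's number the dominance is total.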

The question is then: how does the universe evolve with time? How does it evolve into macrostates with even larger numbers of microstates? When we look around us, we see that there is always an increase in entropy, but most of us have a hard time understanding why.

At the beginning of the universe, the energy was confined to a small region and the probability of finding a "particle" in a certain region or a "field" with a given quanta of energy was larger than it is today. The probability of guessing the actual microstate of the universe near the Big Bang is a lot larger than the probability of guessing the microstate of the universe right now. This loss of information is a loss in the ability to predict the given microstate of the universe. If the number of microstates increases (i.e. entropy increases), then our ability to guess the actual microstate decreases. If we start a confined system in a given microstate, over time we lose information about the actual microstate.

For example, suppose we start a system with 1,000 particles on the left side of a box and then remove the barrier constraining them to that side. We lose information about the actual microstate as the particles collide. Over time, our ability to predict the given microstate decreases, but at the same time the symmetry of the system increases. Eventually, the system will be in the macrostate with the largest number of microstates, which turns out to be the one with left/right symmetry about the half-way point of the box.
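The box example can be sketched as a toy Monte Carlo simulation (an Ehrenfest-urn-style model of my own choosing, not the author's): all particles start on the left, and at each step one randomly chosen particle hops to the other side. The left-side count relaxes from 1,000 toward roughly 500:

```python
import random

def relax(n_particles=1000, n_steps=20_000, seed=0):
    """Ehrenfest-style relaxation: all particles start on the left; at
    each step one randomly chosen particle hops to the other side.
    Returns the history of left-side counts."""
    rng = random.Random(seed)
    on_left = [True] * n_particles
    history = []
    for _ in range(n_steps):
        i = rng.randrange(n_particles)
        on_left[i] = not on_left[i]
        history.append(sum(on_left))
    return history

counts = relax()
print(counts[0], counts[-1])  # starts at 999, ends fluctuating near 500
```

The final count never sits exactly at 500; it fluctuates with a spread of about sqrt(N)/2, which is the microscopic residue of the lost information.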

The symmetry of the universe has increased, and this is part of a general trend that "the symmetry of the effect is greater than the symmetry of the cause" (i.e. the Rosen-Curie Principle). (Note that this principle is not violated by nonlinear phenomena such as Rayleigh-Benard convection cells; it's the total symmetry of the universe that increases, because of the increased heat conduction rate.)

The Rosen-Curie Principle is another way of stating the 2nd Law of Thermodynamics, i.e. the number of microstates of the macrostate with the most microstates is increasing with time. [Note that this means time cannot be reversed. And since there is no symmetry with respect to time reversal, there is no conservation of entropy.]

Note: The idea of increasing symmetry is almost exactly opposite of what's taught in freshman-level physics. Undergraduates are taught that an increase in entropy equals an increase in "disorder." This is an incorrect statement and, worse, it's unquantifiable. How do you quantify "disorder"? You can't. What you can quantify is "the number of microstates of the macrostate with the largest number of microstates." It's dimensionless and relativistically invariant. You can also quantify the number of symmetries.

So, now that we've seen that entropy increases, we can see where the universe is heading. It's heading towards a state of complete symmetry. The final resting state of the universe would be a homogeneous state of constant temperature, pressure, chemical potential and nuclear potential. Depending on the size of the universe and questions regarding inflation and proton stability, this could be a state of dispersed iron (which is the state of lowest nuclear Gibbs free energy). The actual values of the pressure, temperature, and chemical potential depend greatly on whether the universe will continue to just expand into the vastness of space.

If it continues to expand, there may never be a final state of equilibrium, but what we can say is that it will be more symmetric than it is today.

So, going back to the question of life, we have to ask: how does life fit into the picture? Where did life come from and what's the purpose?

Here is my round-about answer to that question. I'm going to address it by going through the different levels of complexity as one moves from systems in equilibrium to systems far from equilibrium. My understanding is that systems far from equilibrium are trying to reach equilibrium at the fastest possible rate allowed by the given constraints of the system.

I see the following levels of complexity:

1) Equilibrium (complete homogeneity) There is symmetry in time and symmetry in space (i.e. the pressure, temperature, electrochemical potential, etc. are constants and not varying with space or time)

2) Linear Non-equilibrium (gradient in temperature, pressure, electrochemical potential, etc.): The gradient is small enough that the non-linear equations become linear. This is best seen with Ohm's law, in which the directed velocity of electrons due to the gradient of electrical potential is small compared to their thermal speed.

3) Non-linear Non-equilibrium of degree one: The non-linear equations allow for structures to appear that are time-independent, or time-dependent structures that are space-independent. (Such as time-independent Rayleigh-Benard convection cells.) This requires a non-linear term in the dynamical equations of the system.

4) Non-linear Non-equilibrium of degree two: The non-linear equations allow for structures to appear that are time-dependent. (Such as time-dependent Rayleigh-Benard convection cells.) This requires that the system be even further from equilibrium, and requires non-linear terms.

5) Non-linear Non-equilibrium of degree three/four: Chaotic motion of the system. (This also requires a gradient in a potential, and it requires a cubic term in the equations of motion.) (An example would be a Rayleigh-Benard cell driven with an extreme temperature gradient such that the cells fluctuate chaotically, i.e. with a broad spectrum of frequencies.) (We'll use degree three to represent structures with one positive eigenvalue and degree four to represent systems with multiple positive eigenvalues.)

6) Non-linear Non-equilibrium of degree five: Self-referential equations of motion for the system. For systems far from equilibrium, there is the possibility that equations can refer back to themselves. These are structures that can replicate. Such structures require a source of exergy (such as a gradient in pressure, chemical potential, etc.) to replicate, but they don't immediately disappear when the source of exergy is turned off, because exergy is stored within the structure itself. (By exergy, I mean the available work in moving a non-equilibrium system to equilibrium with its environment.) If there is no new source of exergy, the structure will eventually stop moving and disappear, like a Rayleigh-Benard cell disappears after the temperature gradient is removed. At equilibrium, all such structures will disappear.

These structures are capable of storing "information" (i.e. storing gradients in exergy that can be used to generate work), which can be used to generate more "information." "Information return on information investment" Though, the final goal is not more information...the final goal is equilibrium. The "information" is used to speed up the process of reaching equilibrium.

Summary:

Equilibrium: no eigenvalues (i.e. no stable structures)

Linear Non-Equilibrium: negative eigenvalues (i.e. no stable structures such as convection cells can form)

Non-linear Non-Equilibrium of order one and two: complex negative eigenvalues (convection cells can form)

Non-linear Non-Equilibrium of order three: one positive eigenvalue (strange attractor has a combination of positive and negative eigenvalues.) (time-varying structures can form)

Non-linear Non-Equilibrium of order four: at least two positive eigenvalues (complex, time-varying structures can form)

Non-linear Non-Equilibrium of order five: the structure is not solvable, i.e. the group describing the eigenvalues is at least as complicated as the group A5 (the first nonabelian simple group). (I.e. living structures can appear when you are this far away from equilibrium.) The structure is more complicated than the "structure" of a Rayleigh-Benard cell because it has a level of self-reference that allows for replication. The group A5 is the most basic building block for higher-order groups. One could say that life is about the building of nonabelian, simple structures that survive off a gradient in exergy in order to increase the rate at which the universe reaches equilibrium (at which point, the structures disappear).
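The group-theoretic aside can be made partially concrete in code. The sketch below (my own illustration) verifies only two elementary facts: A5, realized as the even permutations of five elements, has 60 elements and is nonabelian. Its simplicity, and any connection to living structures, are beyond a few lines of code:

```python
from itertools import permutations

def parity(p):
    """Sign of a permutation given as a tuple: +1 for even, -1 for odd
    (computed from cycle lengths: each even-length cycle flips the sign)."""
    sign, seen = 1, [False] * len(p)
    for i in range(len(p)):
        j, length = i, 0
        while not seen[j]:
            seen[j] = True
            j = p[j]
            length += 1
        if length > 0 and length % 2 == 0:
            sign = -sign
    return sign

def compose(p, q):
    """Composition (p after q): i -> p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

# A5 = the even permutations of 5 elements.
a5 = [g for g in permutations(range(5)) if parity(g) == 1]

# Two 3-cycles (both even permutations) that do not commute.
p = (1, 2, 0, 3, 4)  # cycle 0 -> 1 -> 2 -> 0
q = (0, 1, 3, 4, 2)  # cycle 2 -> 3 -> 4 -> 2
print(len(a5), compose(p, q) != compose(q, p))  # 60 True
```

|A5| = 60 and the failure of commutativity fall straight out of counting and composing permutations; that A5 is the smallest nonabelian simple group is a classical theorem taken on faith here.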

When the equations of motion allow for "attractors" (i.e. dissipative structures) with symmetries that form a nonabelian, simple group, the structure is capable of replicating. For some higher level of symmetry, there must be the ability for the structure to store information, and it's unclear to me right now what level of group theory is required to allow for storage of exergy. What is clear is that some structures can store "available work" for later use.

The stored "available chemical or mechanical work" is used to overcome the activation energy of chemical reactions. At any given moment in time, the entropy of the universe must increase, so the storage of "available work" must itself generate enough entropy that at no point does the entropy of the universe decrease. With life, we can see that at each step in converting sunlight into stored chemical energy (such as using sunlight to convert ADP to ATP, which can then be used to generate complex carbohydrates), the entropy of the universe increases. There is then a large increase in entropy when the complex carbohydrates are oxidized. In that process of oxidation, a large amount of work can be generated, which can either be used to store more "work" (such as moving against a gravitational field) or be used to move to a location with a larger gradient in chemical exergy.

There is a balance between using and storing work (electrical or mechanical). Unfortunately, there is no way to determine what is the optimal balance between storing and using work that will bring the universe to equilibrium at the fastest rate. This is due to the fact that there is no way to predict the fastest route to equilibrium because we can not calculate far enough into the future to determine which route is the fastest to equilibrium. So, how does life determine which route to take?

Basic life forms always follow the largest gradient in chemical exergy. More advanced life forms have neural nets that store information about the past in order to predict the future. The predictions are not always correct, but over time, the structures build larger and larger neural nets to better predict the future. Since there is no way to predict the future, there is no right answer. But it appears that the best answer is to generate the largest rate of return on work invested (and note that this doesn't always mean the fastest-replicating structure). Over time, though, we can see a general trend towards more self-reference and larger neural networks for predicting the future. This involves greater storage of exergy to unleash even more available work. But as I said before, there is no right answer. There is no optimization of the fastest route to equilibrium, so bigger, more complex structures may not necessarily be the best route to increasing the entropy of the universe. Though, one clear way to increase the entropy of the universe is to develop self-replicating solar robots on other planets, so as to increase the entropy of the entire universe.

Restated: life is a means of increasing the entropy of the universe and bringing it to a state of equilibrium at a faster rate than without life. Life only occurs when there is a source of exergy (such as gradients in temperature, pressure or chemical potential with respect to the environment) and when the dynamical equations allow for dissipative structures (i.e. attractors) with symmetries at least as complex as A5 (the first nonabelian, simple group.)

Let me know what you think.

Conclusion: Life is a means of increasing the entropy of the universe. Life is a result of the fact that the equations of dynamics are non-linear, allow for self-replicating structures, and that the starting conditions of the universe are non-equilibrium. The goal of life is to bring the universe to equilibrium at a faster rate than if the equations of dynamics did not allow for life.

Therefore, we as living beings should be trying to increase the entropy of the universe. This means converting as much exergy (such as sunlight) into low grade energy as possible. There are other gradients of exergy that we can take advantage of as well (such as gradients in thermal energy, chemical potential and nuclear potential.) The means to do so are storing "information" (i.e. available electrical/mechanical work) so as to build devices that generate even more entropy. As biologist Stuart Kauffman stated by in "Reinventing the Sacred":

*Cells do some combination of mechanical, chemical, electrochemical and other work and work cycles in a web of propagating organization of processes that often link spontaneous and non-spontaneous processes…Given boundary conditions, physicists state the initial conditions, particles , and forces, and solve the equations for the subsequent dynamics—here , the motion of the piston. But in the real universe we can ask, “Where do the constraints themselves come from?” It takes real work to construct the cylinder and the piston, place one inside the other, and then inject the gas…It takes work to constrain the release of energy, which, when released, constitutes work…This is part of what cells do when they propagate organization of process. They have evolved to do work to construct constraints on the release of energy that in turn does further work, including the construction of many things such as microtubules, but also construction of more constraints …Indeed, cells build a richly interwoven web of boundary conditions that further constrains the release of energy so as to build yet more boundary conditions.*There is a balance between using and storing available work (electrical or mechanical). Unfortunately, there is no way to determine what is the optimal balance between storing and using work that will bring the universe to equilibrium at the fastest rate. (i.e. there is no way to predict the fastest route to equilibrium because we can not calculate far enough into the future to determine which route is the fastest to equilibrium.) So, how does life determine which route to take?

It uses neural nets (with some information of past attempts) to estimate which route will bring the system to equilibrium the fastest. But there's no guarantee that it's the best route. Just as there's no guarantee that the answer to the traveling salesman problem is the optimal solution when using neural nets.

Restated: Life is a means of increasing the entropy of the universe and bringing the universe to a state of equilibrium at a faster rate than without life.

Assumptions: 1) Entropy increases due to collisions between particles because the forces of nature are not time reversible. 2) The universe started in a state of non-equilibrium. 3) The future can not be predicted because of the extreme non-linearity of the governing equations. 4) The dynamic equations of systems are highly non-linear and allow for self-replicating structures 5) The self-replicating attractors found in the dynamic equations has a two-fold effect: a) inability to predict the future, and b) ability to store both work and "information" (This self-replicating nature only occurs for systems far-from-equilibrium.)

Line of Reasoning:

Entropy is the number of microstates available to a given macrostate. Entropy defined this way is only valid for large numbers of particles because as N becomes larger (greater than 100,000), then the macrostate with the most microstates ends up being essentially the only macrostate with an probability of occurring. Another way of stating this is asking the question: what is the N-volume of the last dx of an N-D sphere. As N becomes greater than 100,000, then almost all of the volume of the N-D sphere is located at the edge of the N-D sphere.

For the universe, the macrostate is defined by the total energy and total momentum, which are conserved over time.

Assuming that this universe started with a Big Bang (i.e. all of the energy localized in one location), then this represents a state of low entropy. Even though the temperature would have been very high...and I mean almost unimaginably high, the energy would have been confined to a small region of space. There would not have been many microstates available compared with the microstates available today.

There existed a large gradient in energy at the start of the universe, between the location of energy and the rest of the open space in the universe. Diffusion of energy from a region of high energy to low energy would have started immediately.

It can be shown that entropy is defined for both systems in equilibrium and for systems not-in-equilibrium. (See pg 71 eq 6.4 of Grandy's "Entropy and the Time Evolution of Macrosopic Systems.) Since entropy is defined as the number of microstates for the given macrostate with the most microstates, it is a unitless variable. (Note that you can add dimension to entropy by multiplying by k or R. Its unitless definition is convenient because it means that it's relativistically invariant.)

The universe will always be in the given macrostate with the highest entropy because the number of microstates in the given macrostate is so large compared with neighboring macrostates.

The question is then: how does the universe evolve with time? How does it evolve into macrostates with even larger numbers of microstates? When we look around us, we see that there is always an increase in entropy, but most of us have a hard time understanding why.

At the beginning of the universe, the energy was confined to a small region and the probability of finding a "particle" in a certain region or a "field" with a given quanta of energy was larger than it is today. The probability of guessing the actual microstate of the universe near the Big Bang is a lot larger than the probability of guessing the microstate of the universe right now. This loss of information is a loss in the ability to predict the given microstate of the universe. If the number of microstates increases (i.e. entropy increases), then our ability to guess the actual microstate decreases. If we start a confined system in a given microstate, over time we lose information about the actual microstate.

For example, if we start a system with 1,000 particles on left side of a box and then remove the object constraining the particles to the left side of the box. We lose information about the actual microstate as the particles collide. Over time, our ability to predict the given microstate decreases, but at the same time, the symmetry of the system is increasing. Over time, the system will be in the macrostate with the most number microstates. This turns out to be the case in which there is left/right symmetry between the half-way point in the box.

The symmetry of the universe has increased, and then is part of a general trend that "the symmetry of the effect is greater than the symmetry of the cause." (i.e. the Rosen-Curie Principle) (Note that this principle is not violated by nonlinear phenomena, such as Rayleigh-Benard convection cells...it's the total symmetry of the universe that increases because of the increased heat conduction rate.)

The Rosen-Curie Principle is another way of stating the 2nd Law of Thermodynamics, i.e. the number of microstates of the macrostate with the most microstates is increasing with time. [Note: that this means that time can not be reversed. And since there is no symmetry with respect to time reversal, there is no conservation of entropy.]

Note: The idea of increasing symmetry is almost exactly opposite of what's taught to undergraduates in freshman level physics. They are taught that an increas in entropy is equal to an increase in "disorder." This is a incorrect statement and, worse, it's unquantifiable. How do you quantify "disorder" ? You can't. Instead, you can quantify "number of microstates for the macrostate with the larger number of microstates." It's dimensionless and it's relativistically-invariant. You can also quantify the number of symmetries.

So, now that we've seen that entropy increases, we can see where the universe is heading towards. It's heading towards a state of complete symmetry. The final resting state of the universe would be a homogenous state of constant temperature, pressure, chemical potential and nuclear potential. Depending on the size of the universe and questions regarding inflation and proton stability, this could be a state of dispersed iron (which is the state of lowest nuclear Gibbs free energy.) The actual value of the pressure, temperature, and chemical potential depend greatly on whether the universe will continue to just expand into the vastness of space.

If it continues to expand, there may never be a final state of equilibrium, but what we can say is that it will be more symmetric than it is today.

So, going back to the question of life, we have to ask: how does life fit into the picture? Where did life come from and what's the purpose?

Here is my round-about answer to that question. I'm going to address this question by going through the different levels of complexity as one moves from systems in equilibrium to systems far-from-equilibrium. My understanding is that systems far-from-equilibrium are trying to reach equilibrium at the fast possible rate that is allowed by the given constraints in the system.

I see the following levels of complexity:

1) Equilibrium (complete homogeneity) There is symmetry in time and symmetry in space (i.e. the pressure, temperature, electrochemical potential, etc. are constants and not varying with space or time)

2) Linear Non-equilibrium (Gradient in temperature, pressure, electrochemical potential, etc.) But the gradient is small so that the non-linear equation become linear. This is best seen with Ohm's law, in which the directed velocity of electrons due to the gradient of electrical potential is small compared to their thermal speed.

3) Non-linear Non-equilibrium of degree one: The non-linear equations allow for structures to appear that are time independent or time dependent structures that are space-independent. (Such as time independent Ralyeigh-Benard convection cells) This requires a non-linear term in the dynamical equations of the system.

4) Non-linear Non-equilibrium of degree two: The non-linear equations allow for structures to appear that are time dependent. (Such as time dependent Ralyeigh-Benard convection cells) This requires that the system be even further from equilibrium and for non-linear terms.

5) Non-linear Non-equilibrium of degree three/four: Chaotic motion of the system (This also requires a gradient in a potential and it requires that there be a term of cubic power in the equations of motion.) (An example would be a Ralyeigh-Benard cell driven with an extreme temperature gradient such that the cells fluctuate chaotically, i.e. a broad spectrum of frequencies) (We'll use degree three to represent structures with one positive eigenvalue and degree four to represent systems with multiple positive eigenvalues.)

6) Non-linear Non-equilibrium of degree five: Self-referential equations of motion for the system. For systems far-from-equilibrium, there is the possibility that equations can refer back to themselves. These are structures that are formed that can replicate. These structures require a source of exergy (such as a gradient in pressure, chemical potential, etc...) to replicate, but they don't immediately disappear when the source of exergy is turned off. This is due to the fact that exergy is stored within the structure itself. (By exergy, I mean the available work in moving a non-equilibrium system to equilibrium with its environment.) If there is no new source of exergy, then the structure will eventually stop moving and will eventually disappear, like a Ralyeigh-Benard cell disappears after the temperature gradient is removed. At equilibrium, all such structures will disappear.

These structures are capable of storing "information" (i.e. storing gradients in exergy that can be used to generate work), which can be used to generate more "information." "Information return on information investment" Though, the final goal is not more information...the final goal is equilibrium. The "information" is used to speed up the process of reaching equilibrium.

Summary:Equilibrium: no eigenvalues (i.e. no stable structures)

Linear-Non-Equilibrium: negative eigenvalues (i.e. no stable structures, such as convection cells)

Non-linear Non-Equilibrium of order one and two: complex negative eigenvalues (convection cells can form)

Non-linear Non-Equilibrium of order three: one positive eigenvalue (strange attractor has a combination of positive and negative eigenvalues.) (time-varying structures can form)

Non-linear Non-Equilibrium of order four: at least two positive eigenvalues (complex, time-varying structures can form)

Non-linear Non-Equilibrium of order five: the structure is not solvable, i.e. the group describing the eigenvalues is at least as complicated as the group A5 (which is the first nonabelian simple group.)(i.e. Living structures can appear when you are this far way from equilibrium.) The structure is more complicated than the "structure" of a Rayleigh-Benard cell because it has a level of self-reference that allows for replication. The group A5 is the most basic of the building block for order higher groups. One could say that life is about the building of nonabelian, simple structures that survive off a gradient in exergy in order to increase the rate at which the universe reaches equilibrium. (at which point, the structures disappear.)

When the equations of motion allow for "attractors" (i.e. dissipative structures) with symmetries that form a nonabealian, simple group, the structure is capable of replicating. For some higher level of symmetry, there must be the ability for the structure to store information, and it's unclear to me right now what level of group theory is required to allow for storage of exergy. What is clear is that some structures can store "available work" for later use.

The stored "available chemical or mechanical work" is used to overcome the activation energy of chemical reactions. At any given moment in time, the entropy of the universe must increase, so the storage of "available work" must itself generate enough entropy so that at no point in time does the entropy of the universe decrease. With life, we can see that at each step in converting sunlight into stored chemical energy (such as using sunlight to convert ADP to ATP, which can then be used to generate complex carbohydrates), the entropy of the universe increases. There is then a large increase entropy when the complex carbohydrates are oxidized. In that process of oxidation, a large amount of work can be generated. It can either be used to storage more "work" (such as moving against a gravitational field) or can be used to move to a location of a larger gradient in chemical exergy.

There is a balance between using and storing work (electrical or mechanical). Unfortunately, there is no way to determine what is the optimal balance between storing and using work that will bring the universe to equilibrium at the fastest rate. This is due to the fact that there is no way to predict the fastest route to equilibrium because we can not calculate far enough into the future to determine which route is the fastest to equilibrium. So, how does life determine which route to take?

For basic life forms, they always follow the location of the largest gradient in chemical exergy. For more advanced life forms, there are neural nets that store information about the past to predict the future. The predictions are not correct, but over time, the structures build larger and larger neutral nets to better predict the future. Since there is no way to predict the future, there is no right answer. But it appears that the best answer is to generate the largest rate of return on work invested (and note that this is doesn't always mean the fastest replicating structure.) Over time, though, we can see that there is a general trend towards more self-reference, and larger neural networks to predict the future. This involves greater storage of exergy to unleash even more available work. But as I said before, there is no right answer. There is no optimization of the fastest route to equilibrium, so bigger, more complex structures may not necessarily be the best route to increase the entropy of the universe. Though, one clear way to increase the entropy of the universe is to deverlop self-replicating solar robots on other planets so as to increase the entropy of the entire universe.

Restated: life is a means of increasing the entropy of the universe and bringing it to a state of equilibrium at a faster rate than without life. Life only occurs when there is a source of exergy (such as gradients in temperature, pressure, or chemical potential with respect to the environment) and when the dynamical equations allow for dissipative structures (i.e. attractors) with symmetries at least as complex as A5 (the smallest nonabelian simple group.)
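As a concrete example of exergy stored in a gradient, here is a sketch (ideal-gas assumption, illustrative numbers) of the maximum work extractable per mole from a pressure difference with respect to the environment:

```python
# Exergy of a pressure gradient: the maximum work extractable per mole
# of an ideal gas at pressure p relative to an environment at p0 and
# temperature t0 (reversible isothermal expansion limit).

import math

R = 8.314  # J/(mol K), gas constant

def pressure_exergy(p, p0=101325.0, t0=298.15):
    """Max work (J/mol) from letting an ideal gas at pressure p expand
    reversibly to environment pressure p0 at environment temperature t0."""
    return R * t0 * math.log(p / p0)

# A gas stored at 10 atm relative to a 1 atm environment:
x = pressure_exergy(10 * 101325.0)
print(f"{x:.0f} J/mol")  # ~5.7 kJ per mole of available work
```

Temperature and chemical-potential gradients have analogous expressions; in every case the exergy vanishes as the system approaches its environment, which is the thermodynamic statement that equilibrium supports no life.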

## Thursday, August 12, 2010

### FutureGen2.0 Announcement

So, the Department of Energy announced last week that the original FutureGen project got a huge make-over. This seems like a big deal: possibly one of the biggest events of the year as far as advanced energy goes.

It also seems to me that this change to FutureGen2.0 could kill the project. The project went from IGCC (Integrated Gasification Combined Cycle, where the CO2 is captured at pressure from a mix of H2/CO/H2O/CO2) to oxy-combustion (where the CO2 is captured from flue gases...probably near atmospheric pressure.)
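The pressure at which the CO2 is captured matters for the energy cost of getting it to pipeline pressure. A back-of-envelope isothermal-compression sketch (all numbers are illustrative assumptions, not plant data) shows why:

```python
# Back-of-envelope: why capturing CO2 at pressure (IGCC) differs from
# capturing it from near-atmospheric flue gas (oxy-combustion).
# Ideal-gas isothermal compression to an assumed ~150 bar pipeline
# pressure; illustrative numbers only.

import math

R = 8.314  # J/(mol K), gas constant
T = 313.0  # K, assumed compressor operating temperature

def compression_work(p_in_bar, p_out_bar=150.0):
    """Ideal isothermal compression work per mole of CO2 (J/mol)."""
    return R * T * math.log(p_out_bar / p_in_bar)

w_flue = compression_work(1.0)   # CO2 from near-atmospheric flue gas
w_igcc = compression_work(30.0)  # CO2 already at ~30 bar in syngas
print(w_flue, w_igcc, w_flue / w_igcc)  # flue-gas capture costs ~3x more
```

Real compressors are multistage with intercooling and real CO2 is not an ideal gas near the critical point, so treat the ratio as qualitative, not quantitative.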

This is a complete redesign of the system and it has nothing to do with the original conception of FutureGen.

Why does it take $1 billion to retrofit an existing power plant to make it an oxy-combustion plant? For $1 billion, we should be able to retrofit at least 5 existing plants. And what happened to the other parts of FutureGen? The IGCC part was also supposed to demonstrate the production of pure hydrogen (for sale) and the use of pure hydrogen in gas turbines.

How did the entire scope of the project magically get changed? I'm really mad about the whole thing, and I'm really pissed at the Department of Energy management (past and present administrations) for their complete inability to stick to one project and see it through to the end.

There are a bunch of different advanced coal power cycles, and they are all rated about equally if CO2 has to be captured (conventional coal combustion with capture, oxy-combustion, or IGCC with capture.) As I stated earlier, the DOE needs to be consistent. There was a bid process about seven years ago. As soon as a winner was picked, the Bush administration's DOE stopped the process because they didn't like the winner. So the project got axed...until the Recovery Act, when it got revived...but now it has been changed, and I don't think that there was a new bid process. If there wasn't, that seems illegal, and I suggest that we all do some research to see whether there was a new bid process before the latest "winner" was announced.

Why does it seem that we are unable to follow through with big projects? The same thing has been going on in the fusion community with their big project, ITER, which has been off and on since the Reagan administration.

So, to repeat, I'm truly pessimistic right now. It's days like this that I wish the government would pass a cap-and-trade bill and then get the hell out of the energy business. The US gov't seems incapable of solving the combined energy problems, which are: 1) the diminishing supply of liquid hydrocarbons, 2) the emission of pollutants and greenhouse gases, 3) the energy security of the remaining liquid hydrocarbons, and 4) the desire for cheap fuels and electricity.

Any thoughts???


## Thursday, June 24, 2010

### Self-Replicating Chemistry

I've been reading "Godel, Escher, Bach" by Douglas Hofstadter and it's got me thinking a lot about the definition of life.

As I've mentioned in previous posts, if we want to find life, we need to think outside of the box.

We need to focus on finding systems far-from-equilibrium.

The early Earth would have had the following far-from-equilibrium components: sunlight, volcanoes and lightning. (And probably more.)

Initial studies of the origin of life (Urey-Miller) took advantage of lightning to create molecules in concentrations that are not in equilibrium with an environment at 300 K. The lightning (plasma) creates species that may be in equilibrium at 3000 K, but are not in equilibrium once the system cools down to 300 K. The lightning relies on the non-equilibrium formation of clouds and electron transport via droplets, which is in turn driven by the real source of non-equilibrium in the solar system (the Sun).
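The "frozen at 3000 K" idea can be sketched with a simple Boltzmann-factor calculation. (The 1 eV formation energy below is a hypothetical value chosen for illustration, not a measured quantity for any particular species.)

```python
# Sketch of the "frozen non-equilibrium" idea: a species with formation
# energy dE is relatively abundant in a 3000 K plasma but vanishingly
# rare at 300 K equilibrium, so rapid quenching leaves the cooled gas
# far from equilibrium.  dE = 1 eV is an illustrative assumption.

import math

K_B = 8.617e-5  # eV/K, Boltzmann constant

def boltzmann_fraction(dE_eV, T):
    """Relative equilibrium population of a state dE above the ground state."""
    return math.exp(-dE_eV / (K_B * T))

dE = 1.0  # eV, hypothetical formation energy
hot = boltzmann_fraction(dE, 3000.0)   # abundance maintained in the spark
cold = boltzmann_fraction(dE, 300.0)   # abundance allowed at 300 K equilibrium
print(hot, cold, hot / cold)  # the quenched population is enormously oversized
```

The excess population over the 300 K equilibrium value is exactly the stored chemical exergy that later reactions (and possibly proto-life) can feed on.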

The questions that remain for me are:

Where else are there far-from-equilibrium situations?

How do life structures store exergy? When I see non-equilibrium dissipative structures (like Jupiter's Great Red Spot, or a hurricane, or Rayleigh-Benard convection cells), I don't see them as capable of storing exergy or self-replicating.

We need to find the differential equations (and the symmetries inside the DifEq's) that allow for self-replication and storage of information or exergy.

In a previous blog, I mentioned that I think that the symmetries have to be quite large (the smallest building block is probably A5...a group with 60 symmetry operations.) My feeling is that A5 is just the smallest building block, and that the actual differential equations that model life have even larger group structures.
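The basic facts about A5 are easy to sanity-check numerically: the even permutations of five elements do form a group of 60 operations, and it is nonabelian.

```python
# Sanity check on the A5 claims: the even permutations of 5 elements
# form a group of order 60, and the group is nonabelian.

from itertools import permutations

def parity(p):
    """Parity of a permutation given as a tuple: +1 even, -1 odd."""
    p = list(p)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:            # sort in place by transpositions
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign            # each transposition flips the parity
    return sign

def compose(a, b):
    """Composition (a after b) of permutations represented as tuples."""
    return tuple(a[b[i]] for i in range(len(b)))

a5 = [p for p in permutations(range(5)) if parity(p) == 1]
print(len(a5))  # 60 symmetry operations

# Find a pair of elements that fails to commute -> nonabelian:
noncommuting = any(compose(a, b) != compose(b, a) for a in a5 for b in a5)
print(noncommuting)  # True
```

Verifying that A5 is *simple* (no nontrivial normal subgroups) takes more machinery, so I only check order and noncommutativity here.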

My gut feeling (since I have no way yet to prove it) is that the group of symmetries of the differential equations is so complex that Godel's Theorem applies. (For example, the integers under addition are not complicated enough for Godel's theorem to apply, but when you include multiplication and other functions, you get to the point at which there is an automorphism between the numbers and the operators. Somehow, this allows the system to self-replicate. Don't worry if you don't follow me, because I don't fully understand this either. I'm still trying to figure out how such an automorphism could lead to self-replication. Let me know if you have any thoughts.)

Because...ultimately, we need to be able to describe biology in terms of chemistry and in terms of differential equations. So, we have to start looking for symmetries or building blocks inside of the equations (not just the physical chemicals themselves...such as RNA or DNA). Clearly, if you just look for the molecules (RNA or DNA), then you end up with a chicken or the egg argument. (i.e. is life just RNA/DNA or did life make RNA?)

We need to find the symmetries inside the differential equations just as we can find certain symmetries inside of the differential equations that allow for dissipative structures, such as Rayleigh-Benard cells or perhaps such as the ~ 11 (22) year cycles in the number/location of sunspots.
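For the Rayleigh-Benard case the onset condition is well known: the dissipative cells appear when the Rayleigh number of the fluid layer exceeds a critical value (about 1708 for rigid top and bottom boundaries). A quick check with illustrative, water-like fluid properties:

```python
# Onset criterion for Rayleigh-Benard convection cells.
# Fluid properties below are illustrative water-like values.

g = 9.81        # m/s^2, gravitational acceleration
beta = 2.1e-4   # 1/K, thermal expansion coefficient
nu = 1.0e-6     # m^2/s, kinematic viscosity
kappa = 1.4e-7  # m^2/s, thermal diffusivity

def rayleigh(dT, depth):
    """Rayleigh number for a fluid layer of thickness `depth` (m)
    with temperature difference dT (K) across it."""
    return g * beta * dT * depth**3 / (nu * kappa)

RA_CRITICAL = 1708.0  # rigid-rigid boundaries

ra = rayleigh(dT=1.0, depth=0.01)  # 1 K across a 1 cm layer
print(ra, ra > RA_CRITICAL)  # above threshold: convection cells form
```

Below the threshold the symmetric conduction state is stable; above it, the symmetry breaks and the patterned dissipative structure appears. That is the kind of bifurcation condition I'd like to find for self-replicating structures.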

In the end, life structures use their ability to store exergy and to replicate in order to speed up the production of entropy to bring the universe closer to equilibrium.


### Life inside the Sun

Here's a thought-provoking question:

Is there life inside the middle of the sun???

Although it sounds like a silly question, it should not be brushed off so easily.

By life, I mean a self-replicating mechanism for increasing the rate of entropy production of a system far-from-equilibrium.

So, is the Sun a non-equilibrium environment? Yes

There is a non-equilibrium composition of hydrogen. In equilibrium, the composition would contain significantly more helium...and, well, eventually iron.

Is it a linear non-equilibrium environment (such as a linear gradient in temperature)? No

The Sun is far from equilibrium because the actual composition of hydrogen is far from the eventual composition (helium or iron).

So, we know that there are currents inside the Sun (free convection currents in its convective zone) that aid in the transport of energy from the center of the Sun to the photosphere.

The question is: is the center of the Sun far enough away from equilibrium that self-replicating processes can thrive?

For example, bacteria on Earth use catalysts to speed up the process converting sugars into CO2 and H2O. The energy derived in the oxidation process is used to grow more bacteria.

In the Sun's core, there are also catalysts (such as carbon) that speed up the process of converting hydrogen into helium. Is there some way to store the energy derived in the 4H-to-He process? If so, could the stored energy be used to create a copy of the "machine" that sped up the conversion of H to He?
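The energy available from that process follows directly from the mass defect of fusing four hydrogen-1 nuclei into one helium-4 nucleus, using standard atomic masses:

```python
# Energy budget of the 4H -> He process via the mass defect, E = dm * c^2.

M_H1 = 1.007825     # u, atomic mass of hydrogen-1
M_HE4 = 4.002602    # u, atomic mass of helium-4
U_TO_MEV = 931.494  # MeV of energy per atomic mass unit converted

dm = 4 * M_H1 - M_HE4           # mass defect, in atomic mass units
energy_mev = dm * U_TO_MEV
print(f"{energy_mev:.1f} MeV")  # ~26.7 MeV per helium-4 nucleus produced
```

So there is an enormous exergy budget per reaction; the open question in the post is whether anything in the core can capture and store a portion of it rather than letting it all thermalize immediately.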

While we are used to life being hydrocarbon-based, I think that we need to look outside of the box and extend our search for life in the universe. If life is defined as self-replicating mechanisms that increase the rate of entropy production in systems far from equilibrium, then we may have a much better chance of finding life than if we focus just on looking for plants and animals on other planets.

Let me know what you think.

[Note: see comment below and the following post. Right now, I'm less inclined to think that replicating structures exist in the center of the sun. Though, there are probably some dynamic structures in there even more complex than Jupiter's Great Red Spot.]

