Saturday, February 15, 2014

When Particles Collide, Nature Acts Programmatically, As If It Had Ideas

Let us take a very close look at some important laws of nature. When you go to the trouble of looking closely at these laws, you may be stunned by their seemingly programmatic aspects, and you may gain some insight into just how methodical and conceptual the laws of nature appear to be.

The laws I refer to are laws that are followed when subatomic particles collide at high speed. In recent years scientists at the Large Hadron Collider and other particle accelerators have been busy smashing particles together at very high speeds. The Large Hadron Collider is the world's largest particle accelerator, and consists of a huge underground ring some 17 miles in circumference.

The Large Hadron Collider accelerates protons (tiny subatomic particles) to near the speed of light. The scientists accelerate two beams of protons to more than 99.999 percent of the speed of light, one beam circulating in one direction around the huge ring and the other beam circulating in the opposite direction. The scientists then steer some of these protons into head-on collisions.

A result of such a collision (from a site describing a different particle accelerator) is depicted below. The caption of this image stated: “A collision of gold nuclei in the STAR experiment at RHIC creates a fireball of pure energy from which thousands of new particles are born.” 
 
particle collision

Such a high-speed collision of protons or nuclei can produce more than 100 “daughter particles” that result from the collision. The daughter particles are rather like the pieces of glass you might get if you and your friend hurled two glass balls at each other, and the balls collided (please don't ever try this). Here is a more schematic depiction of one of the simplest particle collisions (others are much more complicated):

particle collision


The results of a collision like the one shown in the first image may seem like a random mess, but nature actually follows quite a few laws when such collisions occur. The first law I will discuss has no official name, even though it deserves one. This is the law we might call the Law of the Five Allowed Stable Particles. This is simply the law that the stable, long-lived output particles created by any very high-speed subatomic particle collision are always particles on the following short list:

Proton: rest mass 1.67262177×10⁻²⁷ kg; electric charge +1.602176565×10⁻¹⁹ coulomb
Neutron: rest mass 1.674927351×10⁻²⁷ kg; electric charge 0
Electron: rest mass 9.10938291×10⁻³¹ kg; electric charge -1.602176565×10⁻¹⁹ coulomb
Photon: rest mass 0; electric charge 0
Neutrino: rest mass many times smaller than the electron mass; electric charge 0

I am not including antiparticles on this list, because such particles are destroyed as soon as they come in contact with regular particles, so they end up having a lifetime of less than a few seconds.

This Law of the Five Allowed Stable Particles is not at all a trivial law, and raises the serious question: how is it that nature favors only these five particles? Why is it that high-speed subatomic particle collisions don't produce stable particles with thousands of different random masses and thousands of different random electric charges? It is as if nature has inherent within it the idea of a proton, the idea of an electron, the idea of a neutron, the idea of a photon, and the idea of a neutrino.
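To make this short list concrete, here is a minimal sketch in Python (my own illustration; the dictionary and helper function names are invented for this example) that encodes the five allowed stable particles from the list above and checks whether a proposed set of long-lived output particles stays on that list.

```python
# A toy encoding of the "Law of the Five Allowed Stable Particles."
# Rest masses in kilograms, charges in coulombs (values from the list above;
# the neutrino mass is tiny and not precisely known, so it is left as None).

ELEMENTARY_CHARGE = 1.602176565e-19  # coulombs

STABLE_PARTICLES = {
    "proton":   {"mass_kg": 1.67262177e-27,  "charge_C": +ELEMENTARY_CHARGE},
    "neutron":  {"mass_kg": 1.674927351e-27, "charge_C": 0.0},
    "electron": {"mass_kg": 9.10938291e-31,  "charge_C": -ELEMENTARY_CHARGE},
    "photon":   {"mass_kg": 0.0,             "charge_C": 0.0},
    "neutrino": {"mass_kg": None,            "charge_C": 0.0},
}

def all_outputs_allowed(output_particles):
    """Return True if every long-lived output particle is on the short list."""
    return all(name in STABLE_PARTICLES for name in output_particles)

print(all_outputs_allowed(["proton", "neutron", "electron", "photon"]))  # True
print(all_outputs_allowed(["proton", "some_random_particle"]))           # False
```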

When particles collide at high speeds, nature also follows what are called conservation laws. Below is a table describing the conservation laws that are followed in high-speed subatomic particle collisions. Practically speaking, antiparticles count as unstable particles, because they quickly combine with regular particles and are converted to energy. In each example, the particles listed before the → symbol are the inputs of the collision or decay, and the particles after the → symbol are the outputs. The → symbol basically means “this produces the following.”

Law of the conservation of mass-energy
Description: The mass-energy of the outputs of a particle collision cannot exceed the mass-energy (rest mass plus kinetic energy) of the inputs of the collision.
Example allowed under the law: proton + proton → proton + neutron + positron + electron neutrino
Example prohibited under the law: electron + electron → antiproton + electron (prohibited at low collision energies, because an antiproton is almost a thousand times more massive than two electrons)

Law of the conservation of charge
Description: The net electric charge of the outputs of a particle collision (the total of the positive, proton-like charges minus the total of the negative, electron-like charges) must be the same as the net electric charge of the inputs of the collision.
Examples allowed under the law: proton + proton → proton + neutron + positron + electron neutrino (net charge of +2 in the inputs, +2 in the outputs); at higher collision energies, proton + proton → proton + proton + proton + antiproton (net charge of +2 in the inputs, +2 in the outputs)
Example prohibited under the law: proton + proton → proton + neutron + electron + electron neutrino (net charge of +2 in the inputs, but 0 in the outputs)

Law of the conservation of baryon number
Description: Using the term “total baryon number” to mean the total of the protons and neutrons (minus the total of the antiprotons and antineutrons), the total baryon number of the stable outputs of a particle collision must be the same as this total was in the inputs of the collision.
Example allowed under the law: proton + proton → proton + neutron + positron + electron neutrino (total baryon number of 2 in the inputs, 2 in the outputs)
Example prohibited under the law: proton + neutron → proton + muon + antimuon (total baryon number of 2 in the inputs, only 1 in the outputs)

Law of the conservation of lepton number (stated here for the electron “flavor”; there are also muon and tau flavors of the law)
Description: Considering electrons and electron neutrinos to have an electron number of +1, and positrons and anti-electron neutrinos to have an electron number of -1, the sum of the electron numbers in the outputs of a particle collision or decay must be the same as this sum was in the inputs.
Example allowed under the law: neutron → proton + electron + anti-electron neutrino (total electron number of 0 in the input, net electron number of 0 in the outputs)
Example prohibited under the law: neutron → proton + electron (total electron number of 0 in the input, but net electron number of 1 in the outputs)

Each of the examples given here of allowed particle collisions is only one of the many possible outputs that might be influenced by the laws above. When you have very high-energy particles colliding, many output particles can result (and nature's burden in following all these laws becomes higher).
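To make the bookkeeping these laws impose concrete, here is a small Python sketch (a toy model of my own devising, not anything from real physics software) that tallies net charge, baryon number, and electron number for the example reactions above, and flags any proposed reaction that breaks one of those three laws. Mass-energy and the muon and tau lepton flavors are omitted for brevity.

```python
# Toy bookkeeping for three conservation laws: net charge, baryon number,
# and lepton (electron) number. Each particle gets the three quantum numbers
# used in the table above; antiparticles get the opposite signs.

PROPERTIES = {
    # name: (charge, baryon number, electron number)
    "proton":                 (+1, +1,  0),
    "neutron":                ( 0, +1,  0),
    "antiproton":             (-1, -1,  0),
    "electron":               (-1,  0, +1),
    "positron":               (+1,  0, -1),
    "electron neutrino":      ( 0,  0, +1),
    "anti-electron neutrino": ( 0,  0, -1),
    "muon":                   (-1,  0,  0),  # muon number omitted in this toy model
    "antimuon":               (+1,  0,  0),
}

def totals(particles):
    """Sum the net charge, baryon number, and electron number of a particle list."""
    charge = sum(PROPERTIES[p][0] for p in particles)
    baryon = sum(PROPERTIES[p][1] for p in particles)
    lepton = sum(PROPERTIES[p][2] for p in particles)
    return charge, baryon, lepton

def reaction_allowed(inputs, outputs):
    """True if net charge, baryon number, and electron number all balance."""
    return totals(inputs) == totals(outputs)

# Allowed example from the table:
print(reaction_allowed(["proton", "proton"],
                       ["proton", "neutron", "positron", "electron neutrino"]))  # True

# Prohibited example (net charge does not balance):
print(reaction_allowed(["proton", "proton"],
                       ["proton", "neutron", "electron", "electron neutrino"]))  # False
```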

Now let us consider a very interesting question: does nature require something special to fulfill these laws – perhaps something like ideas or computation or figure-juggling or rule retrieval?

In the case of the first of these laws, the law of the conservation of mass-energy, it does not seem that nature has to have anything special to fulfill that law. The law basically amounts to just saying that substance can't be magically multiplied, or saying that mass-energy can't be created from nothing.

But in the case of the law of the conservation of charge, we have a very different situation. To fulfill this law, it would seem that nature requires “something extra.”

First, it must be stated that what is called the law of the conservation of charge has a very poor name, one that is very apt to give you the wrong idea. It is not at all a law that prohibits creating additional electric charges. In fact, when two protons collide at very high speeds at the Large Hadron Collider, we can see more than 70 charged particles arise from a collision of only two charged particles (two protons). So it is very misleading to state the law of the conservation of charge as a law that charge cannot be created or destroyed. The law should be called the law of the conservation of net charge. The correct way to state the law is as I have stated it above: the net electric charge of the outputs of a particle collision (the total of the positive, proton-like charges minus the total of the negative, electron-like charges) must be the same as the net electric charge of the inputs of the collision.

This law, then, cannot work on the simple basis of “something can't be created out of nothing.” It requires something much more: apparently that nature have something like a concept of the net charge of the colliding particles, and also that it somehow be able to figure out a set of output particles that will have the same net charge. The difficulty of this trick becomes apparent when you consider that the same balancing act must be done when particles collide at very high speeds, in a collision where there might be more than 70 charged output particles.

I may also note that for nature to enforce the law of the conservation of charge (more properly called the law of the conservation of net charge), it would seem to be a requirement that nature somehow in some sense “know” or have the idea of an abstract concept – the very concept of the net charge of colliding particles. The “net charge” is something like “height/weight ratio” or “body mass index,” an abstract concept that does not directly correspond to a property of any one object. So we can wonder: how is it that blind nature could have a universal law related to such an abstraction?

In the case of the law of the conservation of baryon number, we also have a law that seems to require something extra from nature. It requires apparently that nature have some concept of the total baryon number of the colliding particles, and also that it somehow be able to figure out a set of output particles that will have the same total baryon number. Again we have a case where nature seems to know an abstract idea (the idea of total baryon number). But here the idea is even more abstract than in the previous case, as it involves the quite abstract notion of the total of the protons and neutrons (minus the total of the antiprotons and antineutrons). This idea is far beyond merely a physical property of some particular particle, so one might be rather aghast that nature seems to in some sense understand this idea and enforce a universal law centered around it.

The same type of comments can be made about the law of the conservation of lepton number. Here we have a law of nature centered around a concept that is even more abstract than the previous two concepts: the notion of electron number, which involves regarding one set of particle types (including both charged and neutral particles) as positive, and another set of particle types (including both charged and neutral particles) as negative. Here is a notion so abstract that a very small child could probably never even hold it in his or her mind, but somehow nature not only manages to hold the notion but enforce a law involving it whenever two particles collide at high speeds.

The examples of particle collisions given in the table above are simple, but when particles collide at very high speeds, the outputs are sometimes much, much more complicated. There can be more than 50 particles resulting from a high-speed proton collision at the Large Hadron Collider. In such a case nature has to instantaneously apply at least five laws, producing an output set that satisfies many different constraints at once.

The nature of our current universe has depended critically on the laws described above throughout cosmic history. Even though these types of high-speed relativistic particle collisions are rare on planet Earth (outside of the particle accelerators used by scientists), such collisions take place constantly inside the sun. If the laws above were not followed, the sun would not be able to consistently produce radiation in the way needed for the evolution of life. In addition, in the time immediately after the Big Bang, the universe was one big particle collider, with particles smashing into each other at very high speeds. If the laws listed above had not been followed, we would not have our type of orderly universe suitable for life.

By now I have described in some detail the behavior of nature when subatomic particles collide at high speeds. What words best describe such behavior? I could use words like “fixed” and “regular,” but those words don't go far enough.

The best words I can use to describe this behavior of nature when subatomic particles collide at very high speeds are these words: programmatic and conceptual.

The word programmatic is defined by the Merriam-Webster online dictionary in this way: “Of, relating to, resembling, or having a program.” This word is very appropriate for describing the behavior of nature that I have described. It is just as if nature had a program designed to ensure that the balance of positive and negative charges does not change, that the total number of protons and neutrons does not change, and that the overall lepton number does not change.

The word conceptual is defined by the Merriam-Webster online dictionary in this way: “Based on or relating to ideas or concepts.” This word is also very appropriate for describing the behavior of nature that I have described. We see in high-speed subatomic particle collisions that nature acts with great uniformity to make sure that the final stable output particles are among the five types of particles in the list above (protons, neutrons, photons, electrons, and neutrinos). It is just as if nature had a clear idea of each of these things: the idea of a proton, the idea of a neutron, the idea of a photon, the idea of an electron, and the idea of a neutrino. As nature has a law that conserves net charge, we must also assume that nature has something like the idea of net charge. As nature has a law that conserves baryon number, we must also assume that nature has something like the idea of baryon number. As nature has a law that conserves lepton number, we must also assume that nature has something like the idea of lepton number.

This does not necessarily imply that nature is conscious. Something can have ideas without being conscious. The US Constitution is not conscious, but it has the idea of the presidency and the idea of Congress.

So given very important and fundamental behavior in nature that is both highly conceptual and highly programmatic, what broader conclusion do we need to draw? It seems that we need to draw the conclusion that nature has programming. We are not forced to the conclusion that nature is conscious, because an unconscious software program is both conceptual and programmatic. But we do at least need to assume that nature has something like programming, something like software.

Once we make the leap to this concept, we have an idea that ends up being very seminal in many ways, leading to some exciting new thinking about our universe. Keep reading this blog to get a taste of some of this thinking. 

Tuesday, February 4, 2014

Hypothetical Configurations of a Programmed Cosmos

In some previous posts (here, here,  and here) I have advanced the theory that the universe has a cosmic computation layer: a kind of primordial programming that has existed since the known beginning of the universe (the Big Bang). I argued that we need to assume such a layer to account for how the universe manages to fulfill so ably all of its near-infinite computation needs. I also argued that we need to postulate such a computation layer to help explain the astonishing evolution of the universe, the appearance of galaxies, life, and eventually Mind from a universe that began in a superdense state. The theory I have suggested is that our universe is quite real (not a simulation), but that it has been somehow programmed for success from the beginning. To anyone reading my previous post on 18 anthropic requirements that must be met for civilizations such as ours to exist (some of which are most unlikely to occur by chance), such a theory of cosmic programming may seem like a good idea.

Upon hearing such a theory of a cosmic computation layer, some readers no doubt must have objected: we can't believe such a thing, because we can't imagine how such a thing might work; we can form no idea of what type of configuration such a computation layer might have.

My purpose in this post is to rebut such an objection. I will argue that there are hypothetical configurations we can imagine that might allow such a computation layer to exist. My aim is not to show that any one of these configurations is likely, but merely to show that we can imagine various hypothetical configurations for a cosmic computation layer, none of which violates any known findings. I will ask the reader to allow me to engage in some speculation, which may at times get rather exotic. Before dismissing such speculation, please remember a quote from the famous physicist Niels Bohr: “Your theory is crazy, but it's not crazy enough to be true” (a reminder that nature often ends up favoring some pretty mind-bending exotic realities).

First, let me comment on the use of the term “layer.” By speaking of a computation layer, I am not speaking of anything very similar to the layer of a cake. I use the term layer in the same way that software architects use the term, to mean a certain type of functionality that exists in a system, regardless of its physical location (in the same way that such architects speak of an abstraction layer). Computer people might talk of a hardware layer, a driver layer, and a software layer, but they don't actually mean things that are lying on top of each other horizontally like the layers of a cake. The term layer is simply used to mean some particular aspect of the overall functionality, regardless of where it is located. Do a Google image search for “software layer” and you will see many examples.
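For readers unfamiliar with the software sense of the word, here is a tiny Python sketch (the class names are invented purely for illustration) in which a “storage layer” and an “application layer” are distinct kinds of functionality, even though neither is physically stacked on top of the other.

```python
# "Layers" in the software sense: separate kinds of functionality, not
# physical strata. The application layer uses the storage layer through an
# interface, regardless of where either one physically lives.

class StorageLayer:
    """Low-level layer: knows only how to keep and return raw values."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class ApplicationLayer:
    """Higher layer: expresses its logic in terms of the layer below."""
    def __init__(self, storage: StorageLayer):
        self._storage = storage
    def remember_greeting(self, name):
        self._storage.put("greeting", f"Hello, {name}!")
    def greet(self):
        return self._storage.get("greeting")

app = ApplicationLayer(StorageLayer())
app.remember_greeting("world")
print(app.greet())  # Hello, world!
```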

When I depicted a diagram (in this post) showing a computation layer underneath a mass-energy layer, I stated that the two layers are intermingled or intertwined (and certainly did not mean that one layer was vertically floating over the other).

So what type of arrangement or configuration might allow for such a computation layer to exist? At this stage of our ignorance, we can only speculate. But it is possible to imagine some reasonable configurations that would allow for such a thing.

The Possibility of Invisible Computation Particles

One possibility we can imagine is that the universe may have two types of particles: primary particles such as protons, neutrons, electrons, and photons, and also what we may call computation particles. The purpose of the computation particles might be to facilitate computation related to the primary particles, and to make sure that the universe's programming is followed. There could be at least one computation particle for every primary particle, and there might be many computation particles for each primary particle. The computation particles might somehow shadow or surround the primary particles. Each computation particle might be able to store many bits of information.
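Purely as a way of picturing this speculation, here is a toy Python sketch (every name in it is hypothetical; nothing here corresponds to any real physics model) in which each primary particle is shadowed by a few computation particles, each storing some bits of state.

```python
# A toy picture of the "computation particle" speculation: each primary
# particle is shadowed by computation particles that store bits of state.

class ComputationParticle:
    def __init__(self, n_bits=64):
        # The storage capacity is an arbitrary number chosen for the sketch.
        self.bits = [0] * n_bits

class PrimaryParticle:
    def __init__(self, kind, shadows_per_particle=3):
        self.kind = kind  # e.g. "proton", "electron"
        self.shadows = [ComputationParticle() for _ in range(shadows_per_particle)]

proton = PrimaryParticle("proton")
print(len(proton.shadows), "computation particles shadow this", proton.kind)
```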

The immediate objection one could make is: such particles couldn't possibly exist, because we would have already detected them. But this objection isn't valid in light of current theories about dark matter. Currently physicists say that we are all surrounded by invisible dark matter. They say that we can't see dark matter because it does not interact with electromagnetism (and thus far there have been no unambiguous detections of dark matter). If such a thing is possible, it is possible that there are other types of invisible particles that do not interact with any of the four fundamental forces of our universe, and that are completely undetectable to us through direct observation.

We are not 100% sure that dark matter really exists, but we are absolutely sure that a particle called the neutrino exists. The neutrino has been called the ghost particle. A neutrino has either no mass or very little mass. Neutrinos are emitted by the sun. Scientists say that every second countless neutrinos are passing through your body. Given the reality of such particles, there is nothing implausible about the idea that our bodies and other objects might be intermingled with countless trillions of computation particles we can't see or detect.

We know of one other thing that pervades all of space: the cosmic background radiation, believed to be the faint afterglow of the Big Bang. All of outdoor space is bathed in this faint radiation, which was only detected around 1965. Stand outside and you will be surrounded by the tiny particles of the cosmic background radiation. Then there is also dark energy, which scientists say now makes up about 68% of the universe's mass-energy. It seems that as time passes, scientists are finding more and more cases of where we can say, “We are surrounded by a type of invisible matter or energy we were not aware of previously.” So there is nothing implausible about the idea that we might also be surrounded by (and intermingled with) computation particles we can't see.

The computation particles I am postulating could either be some type of invisible particle different from dark matter or dark energy, or the computation particles might actually be dark matter or dark energy (or part of either of them). Since we know nothing about how massive or complex dark matter particles might be, we can't rule out that they may be computation particles (or that they may partially be computation particles). We can say the same thing about the dark energy particles postulated by scientists – some of those particles may be computation particles. 


invisible particles
 
The Possibility of Emergence Clouds

The term emergence has been used for the tendency of nature to create units that are more than the sum of their parts. One example of emergence is the appearance of life. First you have mere chemicals, and then later there develops a microscopic living thing that is much more than just a combination of chemicals. Another example of emergence is the appearance of conscious Mind. First you have a collection of cells, and then you have a self-aware consciousness that is much more than just a collection of cells.

If we imagine some type of computation particles as previously imagined, we can imagine some of them grouping together in clusters, related to the emergence of some particular thing that is more than just the sum of its parts. Every atom might be associated with an emergence cloud that handles computation related to that particular atom. Every molecule might be associated with its own emergence cloud. There might also be emergence clouds associated with the origin of life, the origin of Mind, and the origin of galaxies.

Your mind might itself be an emergence cloud, a cluster of computation particles that stays together in order for your consciousness to exist. Such an emergence cloud may or may not dissipate when you die.

In this hypothetical configuration, small emergence clouds can exist within larger emergence clouds. The smallest emergence clouds might be the size of atoms or molecules, and the largest emergence clouds might be the size of galaxies or clusters of galaxies.

The Possibility of Hyperluminal Computational Communication

Physicists say that known physical particles such as protons exchange photons as part of the electromagnetic force, with one particle having an influence on another particle. Thinking in a similar vein, we can imagine that computation particles might be able to somehow communicate with other computation particles.

Would such communication be limited by the speed of light? Not necessarily. The speed of light is the speed of all electromagnetic radiation. But the communication between computation particles might use some different type of radiation or energy that is not limited by the speed of light.

Physicists say that known physical particles such as protons both send and receive virtual particles that act as agents of force exchange. It is therefore not implausible to imagine that if computation particles exist, they might be both senders and receivers of computation-related messages. Under such a scenario, we can imagine each such particle as being rather like a combined radio receiver and radio transmitter.

Given a sufficient number of such computation particles scattered around space, communicating with each other at a speed that is perhaps greater than the speed of light, and perhaps instantaneous, you have all the requirements for a computing system of basically unlimited power.
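Again purely as an illustration of the speculation (the classes and the instantaneous “broadcast” are inventions of this sketch, not claims about physics), such a network of sender/receiver particles might be pictured like this:

```python
# A toy network in which computation particles both send and receive messages,
# with delivery treated as effectively instantaneous for the purpose of the sketch.

class ComputationParticle:
    def __init__(self, name):
        self.name = name
        self.inbox = []
    def receive(self, message):
        self.inbox.append(message)

class ComputationNetwork:
    def __init__(self):
        self.particles = []
    def add(self, particle):
        self.particles.append(particle)
    def broadcast(self, sender, message):
        # In this toy model every other particle hears the message at once.
        for p in self.particles:
            if p is not sender:
                p.receive((sender.name, message))

net = ComputationNetwork()
a, b, c = (ComputationParticle(n) for n in "abc")
for p in (a, b, c):
    net.add(p)
net.broadcast(a, "net charge of local collision = +2")
print(b.inbox)  # [('a', 'net charge of local collision = +2')]
```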

Would There Be Room for Such Particles?

Let's consider: are there any spatial reasons why it would be implausible to assume that there might be one or many computation particles for each material particle? Could it be that things would be too crowded if such particles existed? Certainly not. Scientists tell us that solid matter is almost entirely empty space. You often see schematic diagrams showing electrons as being a substantial fraction of the size of an atom, but such schematic diagrams are very misleading in their spatial depictions. In reality, according to this site the ratio of the radius of an atom to the radius of a proton, neutron, or electron is between 10,000 and 100,000. An atom is almost entirely empty space. So there is a huge amount of empty space within atoms in which computation particles might exist. There might be 1000 computation particles for every proton in an atom, and there still would be enough space within an atom.
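To put a rough number on the “room” argument: since volume scales as the cube of the radius, an atom roughly 10,000 times wider than a proton (the lower end of the ratio quoted above) has about a trillion times the proton's volume, as this quick back-of-the-envelope calculation shows.

```python
# Back-of-the-envelope: how much of an atom's volume does a nucleon occupy?
radius_ratio = 10_000             # lower end of the atom-to-proton radius ratio quoted above
volume_ratio = radius_ratio ** 3  # volume scales as the cube of the radius

print(f"An atom has roughly {volume_ratio:.0e} times the volume of a proton.")
# -> roughly 1e+12, so even 1000 extra particles of proton-like size would
#    fill only about one billionth of the atom's volume.
```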

The Possibility of a Computation Field

Another possibility is a kind of universal computation field, something perhaps rather comparable to the Higgs field. Scientists say the Higgs field is a field that pervades all of space. So we can imagine a computation field that might pervade all of space, helping the universe to satisfy its computation needs. Such a field might act somewhat like a wi-fi network, but one that extends to every bit of space.

Just as some physicists depict the creation of a particle as being a kind of disturbance or flicker in a field such as the Higgs field, we might imagine that each computation event in the universe's computation might be a kind of disturbance, flicker or blip in a universe-wide computation field, with the field having innumerable such blips, like a bubbling, boiling ocean.

In the visual below we can imagine the purple grid as being this computation field, with the green grid below it being space that is warped by the presence of matter. However, if such a computation field existed it might better be depicted as pervading all of space.

computation field


Computation Threads in the Fabric of Space?

Still another possibility is that computation functionality is somehow embedded in the fabric of space. Think of space as being a kind of fabric (a way it is often described). Imagine that this fabric is built from tiny units we may call threads. It could be that every nth thread (every hundredth, every thousandth, every millionth, or some other fraction) is what we might call a computation thread: a unit that helps the universe perform its computation activities. Each such thread might be of vast length, perhaps stretching for trillions of miles.

Would we be able to detect such a thread as we passed through space? Probably not, largely because ordinary solid matter is something like 99.999% empty space. Astronomers say that stars as big as the sun are sometimes crunched into the densest possible state (short of a black hole), and that when a star reaches such a state (called a neutron star), every teaspoon of matter weighs 100 million tons. This shows how empty ordinary matter is. So ordinary matter could pass through space that partially consisted of computation threads. The chance of a collision between such a thread and a material particle would be very low, and a collision might only produce a tiny deflection that would be very hard to detect. Or perhaps a particle of solid matter might be able to pass through such a computation thread without any deflection at all, like a person moving through air.


computation thread

Where Might the Universe's Software be Stored?

So we have imagined how a computation layer could exist: (a) in the form of computation particles, which might cluster into emergence clouds and might communicate with each other, perhaps at speeds greater than the speed of light; (b) in the form of a computation field that pervades all of space; or (c) in the form of computation threads embedded within the fabric of spacetime. But what about the software that would be a vital element of any cosmic computation layer—where might that be located?

I can imagine several possibilities. One is the possibility that such software might somehow be stored as information content within a universal computation field, something similar to the Higgs field. A second possibility is that the software might somehow be lurking within the cosmic background radiation that pervades all of the universe, or within some similar all-pervading radiation that dates from the time of the Big Bang. The photons that we can detect from the Big Bang are microwave photons. But space might also be pervaded by equally ancient particles of some other type, which somehow store the universe's software or some important part of it. If such particles can travel through solid matter in the same way that neutrinos can, then any particle could “query” the universe's software just by taking a read of this background radiation (in rather the same way that your GPS device gets your current position partially by taking a read from a GPS satellite).

Another possibility is that the software might somehow be stored within the previously imagined computation threads embedded in the fabric of space.

One other mind-bending possibility is suggested by DNA biology. When a human is conceived, the developing organism does not get the blueprint for a human being from some external, non-human source. Instead it reads the blueprint of a human being stored in every tiny DNA molecule. Every drop of your blood or saliva is teeming with such molecules. In your cells are trillions of copies of the blueprint for how to make a human. This suggests the following possibility: perhaps the software of the universe (or some vital kernel or core of it) is stored in every computation particle (or perhaps in every known subatomic particle). In such a case a proton (or a computation particle) might have no need to query any external source for a guideline on how to behave in accordance with the universe's programming. It might merely retrieve the information from itself.

Just as every cell in your body contains DNA that stores the plan and blueprint of a human being, your body might have within it countless trillions of computation particles, each of which stores the plan and blueprint of the universe, and perhaps programming that will assure the glorious future pinnacles of cosmic destiny.

Conclusion

It is far too soon to draw any exact conclusions about the details of a cosmic computation layer. In this regard we are in a situation similar to the one biology was in during the middle of the 19th century. At that time someone might have reasoned that there must be some information system that allows the blueprint of a human to be passed on during conception. But at that time it would have been impossible to figure out the details of how such a system worked. We only learned the details in the twentieth century, with the discovery of the structure of DNA and the genetic code. Similarly, we can make compelling arguments that we need to assume that some cosmic computation layer exists, but we cannot say at this time what the exact configuration of such a layer might be. We can merely speculate.

But I think the type of speculations made here show that we can easily imagine ways in which a cosmic computation layer might plausibly exist. So the idea that the universe has a computation layer is quite possible, and cannot be excluded because of any “we can't think of any way that could work” type of reasoning. We can indeed think of quite a few ways in which it might work, and I have described some of those possible configurations, as rough as those ideas may be.

Of course, a mere possibility does not show a likelihood. But I think the likelihood of the universe having a computation layer can be shown based on the need to satisfy the enormous computation demands of the universe (as I have argued here), and on the need to postulate a teleological principle to explain the universe's remarkable evolution from infinite density to galaxies to life and finally to Mind (as I argued here). Many a modern physicist recognizes that the universe seems to have a high degree of fine-tuning, for reasons discussed here and here. Some of these physicists have tried to explain fine-tuning by imagining a multiverse (a collection of a vast number of universes). But we can explain the fine-tuning much more simply and economically with the hypothesis that our material universe has been programmed for success from the beginning.  

Friday, January 31, 2014

Why We Shouldn't Exist: A Table of 18 Anthropic Requirements

Cross-posted from www.futureandcosmos.blogspot.com

The Standard Model is regarded as a highly “unnatural” theory. Aside from having a large number of different particles and forces, many of which seem surplus to requirement, it is also very precariously balanced. If you change any of the 20+ numbers that have to be put into the theory even a little, you rapidly find yourself living in a universe without atoms. This spooky fine-tuning worries many physicists, leaving the universe looking as though it has been set up in just the right way for life to exist.
Harry Cliff, Particle Physicist, in a Scientific American article

If you have not read much on the topic of the anthropic principle and the issue of possible fine-tuning in the universe, it may be a hard topic to follow. Discussions typically involve subatomic physics, cosmology, biology, evolution, and other subjects that don't exactly make light reading. I think the topic will be easier to understand if we condense it into one simple table that summarizes the most relevant facts. I have created such a table, which appears below.

The left column of the table lists various items that appear in nature. The right column lists the requirements of those items. The table is in chronological order. It starts out with requirements that must be met at the very beginning, near the time of the Big Bang, if the universe is ever going to end up with people like us, inhabitants of a technical civilization living near a sunlike star. Toward the end of the table are items that appeared billions of years later. The final item in the table is “Civilizations near sunlike stars.” It is interesting that for this last item to come into existence, all of the previous items on the list must first come into existence. I have added color coding, which makes the various interlinked dependencies much easier to follow.


Anthropic Principle
 

I will now explain why each item has the requirements I have listed.

Row 1 (Higgs field): The Higgs field (related to the Higgs boson) is said to give mass to other particles. Scientists are puzzled by why the Higgs field has the strength it has, and they say that it seems to require fine-tuning to 15 decimal places. This is a problem called the hierarchy problem or the naturalness problem. It is discussed in this scientific paper entitled The Higgs: so simple yet so unnatural. As a Daily Galaxy article put it, “Using theory as it currently stands, the mass of the Higgs boson can only be explained as the result of a random fine-tuning of the physical constants of the universe at a level of accuracy of one in one quadrillion.”

Row 2 (up quarks and down quarks, electrons): The particles in the nuclei of atoms (protons and neutrons) are made up of smaller particles called up quarks and down quarks. A requirement of the large-scale existence of up quarks and down quarks (and also electrons) is what scientists call matter/antimatter asymmetry (a situation where matter is vastly more abundant than antimatter). This is a puzzle to scientists, because the standard model of physics seems to predict that matter and antimatter should have existed in equal amounts at the time of the Big Bang, which would have caused both types of particles to collide with each other and convert into energy, leaving almost nothing but energy in the universe. A requirement for electrons is the Higgs field, and on this page a physicist says that the electron would not have mass without the Higgs field. 

Row 3 (protons, neutrons): The simple requirement is that there be up quarks and down quarks, discussed in the previous paragraph.

Row 4 (hydrogen atoms): The requirement for a hydrogen atom is that you have one proton and one electron, and also the electromagnetic force, the force of attraction between a proton and an electron. Without that force, electrons would not have any tendency to orbit a nucleus.


Row 5 (galaxies): Galaxies are huge collections of stars. There are many requirements for the formation of galaxies after the Big Bang. The universe had to begin with a fine-tuned expansion rate, as a slightly higher rate would have caused an expansion too fast for galaxies to form, and a slightly slower rate would have caused all matter to collapse into superdense black holes. Scientists also say that numerous other things had to be just right (the other items listed in this row). One requirement is primordial density perturbations greater than .000001 and less than .0001, as explained here. One particularly severe requirement seems to involve dark energy, which is regarded as pretty much the same thing as the cosmological constant. Cosmologists conclude that the level of dark energy seems to have been fine-tuned to something like 1 part in 10⁶⁰ or 1 part in 10¹²⁰. The issue, called the vacuum catastrophe, has been fretted over by many physicists. This paper refers to the “tremendous, unsolved naturalness problem” posed by the cosmological constant.
 
Row 6 (carbon atoms): This row refers to the abundant existence of carbon atoms, something which ends up having lots of requirements. Besides the previously mentioned requirements for the hydrogen atom (protons, electrons, and the electromagnetic force), there are the additional requirements of the neutron and the strong nuclear force (the two of them allow you to have a carbon nucleus that holds together, despite the mutual repulsion between the protons). There is also the requirement that you have a law of nature called the Pauli Exclusion Principle, something that is quite necessary for both solid matter and complex carbon bonds. Then there is an additional requirement for something called nuclear resonances, which assures that carbon is produced in abundant quantities by stars through a process called the triple alpha process. Without this additional requirement, there would not be enough carbon (which wasn't produced in the Big Bang). This point has been widely discussed by scientists such as Hoyle, and in this scientific paper stating that a 0.4% change in one parameter would have left us without a universe abundant in both carbon and oxygen. An additional requirement that I had no space to list in my table is the requirement that the neutron mass be higher than the proton mass.

Row 7 (oxygen atoms): Oxygen atoms have all the same requirements of carbon atoms, including the same special requirement involving nuclear resonances, necessary for oxygen to be produced by stars in abundant amounts. The scientific paper here argues that there would not be much oxygen without the weak nuclear force, so I have also listed that as a requirement. 

Row 8 (Heavier atoms): By heavier atoms I mean all atoms that have more than about 25 protons (which includes copper, lead, silver, gold, zinc, tin, and also iron). These types of atoms have most of the same requirements as carbon atoms and oxygen atoms, except that to have these atoms in abundance you don't need nuclear resonances but instead the stellar explosions called supernova explosions (explosions of stars that produce heavy elements such as lead and iron). These supernova explosions require a tiny particle called the neutrino and a force called the weak nuclear force.

Row 9 (Sunlike stars): I define sunlike stars as those that are white, yellow, or orange (or some combination of those colors). Sunlike stars require galaxies (since if galaxies had not formed, there would be no stars). Sunlike stars also require a very delicate fine-tuning of some of the most fundamental constants of nature. The physicist Paul Davies says on page 73 of The Accidental Universe: “If gravity were very slightly weaker, or electromagnetism very slightly stronger (or the electron slightly less massive relative to the proton), all stars would be red dwarfs. A correspondingly tiny change the other way, and they would all be blue giants.” Blue giants are too short-lived for life to evolve near them, and red dwarf stars are not believed to be as favorable for life's evolution as sunlike stars.

Row 10 (water): Water requires oxygen atoms and hydrogen atoms, as we can tell from its formula H2O. Because of the remarkable features that make it unique among liquids, there are probably additional requirements for water, but I haven't listed them.

Row 11 (stable planets): One requirement for stable planets is gravitation, the force that holds planets and stars together. But there is another very interesting requirement: that the electric charge of the proton exactly match the electric charge of the electron, to many decimal places. Electromagnetism (the fundamental force involving electric charges) is roughly 10³⁶ times stronger than gravitation, the weakest of the fundamental forces by far. Consequently a very slight mismatch between the charge of the electron and the proton would cause electromagnetism (roughly a trillion trillion trillion times stronger than gravitation) to completely overwhelm the gravity holding the planet together. In his book The Symbiotic Universe, astronomer George Greenstein (a professor emeritus at Amherst College) says this about the equality of the proton and electron charges: "Relatively small things like stones, people, and the like would fly apart if the two charges differed by as little as one part in 100 billion. Large structures like the Earth and the Sun require for their existence a yet more perfect balance of one part in a billion billion." In fact, experiments do indicate that the charge of the proton and the electron match to eighteen decimal places.


proton electron charge
A curious coincidence
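The “roughly 10³⁶” figure quoted in row 11 can be checked with a short calculation comparing the electric repulsion and the gravitational attraction between two protons, one standard way of making the comparison (standard constants; the separation distance cancels out of the ratio).

```python
# Ratio of electrostatic repulsion to gravitational attraction for two protons.
# The separation r cancels out, since both forces fall off as 1/r^2.

k_e = 8.9875517873681764e9   # Coulomb constant, N·m²/C²
G   = 6.67430e-11            # gravitational constant, N·m²/kg²
e   = 1.602176565e-19        # elementary charge, C
m_p = 1.67262177e-27         # proton mass, kg

ratio = (k_e * e**2) / (G * m_p**2)
print(f"{ratio:.2e}")  # about 1.24e+36 -- roughly a trillion trillion trillion
```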

Row 12 (nucleotides): Nucleotides are molecules that are the building blocks of RNA and DNA, molecules essential for life. Nucleotides require three types of atoms mentioned above (carbon, oxygen, and hydrogen atoms), as well as phosphorus atoms. They also require physics to be arranged in a way that allows for atoms to combine to make molecules consisting of multiple atoms.

Row 13 (genetic code): The genetic code is a semantic framework used by DNA and RNA, one in which particular combinations of nucleotides stand for particular amino acids. The genetic code could roughly be called the software used by DNA and RNA. The origin of this code is one of science's great mysteries. We do not know how this code (required for all biological evolution) appeared from mere chemicals. This is the “code from chemicals” problem described in this blog post.

Row 14 (RNA): RNA is one of the two main molecules used by all living things, and it is believed to have preceded the better-known and more complicated molecule DNA. It requires nucleotides (from which RNA is built), as well as the genetic code and water (as a substrate).

Row 15 (DNA): DNA requires nucleotides (from which it is built), as well as the genetic code and water. I also list RNA as a requirement since it is believed that RNA was a necessary predecessor of DNA.

Row 16 (Proteins, cells): Proteins are made by DNA and RNA using the genetic code. Requirements include water and amino acids (which I didn't list in the table for space reasons).

Row 17 (Photosynthesis): Photosynthesis is the process by which plants convert sunlight to chemical energy. Recent studies suggest that photosynthesis uses exotic quantum effects.

Row 18 (Civilizations near sunlike stars): Now we come to the last and most important row, which mentions civilizations such as our civilization. There are many requirements for such a civilization. All of the items on the 17 previous rows on the table are indirect or direct requirements of civilizations near sunlike stars. The well-understood direct requirements of such civilizations are heavier atoms (needed so that the civilization can have the metals needed for technology), sunlike stars, stable planets, proteins, cells, and photosynthesis (the last one being necessary even if the beings in a civilization ate nothing but meat, because they would still rely on a food chain that would require photosynthesis).

There is also a poorly understood requirement that does not occur previously on the list: the requirement that somehow unconscious matter can produce Mind of the type that humans have. That is a huge additional requirement. It may require additional laws of nature, or perhaps exotic features of quantum mechanics, as Penrose and Hameroff have suggested. Penrose and Hameroff say here that their “orchestrated objective reduction” theory “suggests that there is a connection between the brain's biomolecular processes and the basic structure of the universe,” and yesterday's news reported a finding that supports such a theory. The matter is still undecided, but we do seem to have a huge additional requirement from nature in order to go from mere cellular life to conscious life.

The table I have created illustrates the great number of intertwining requirements needed for the universe to be consistent with the eventual appearance of civilizations such as ours. A huge amount of fine-tuning is required to meet these requirements, most notably in rows 1, 5, 9, and 11, each of which requires a “1 in a trillion” type of coincidence with a very low likelihood of occurring randomly. We also have the very mysterious requirements of rows 13 and 18, both of which seem to call for something like getting blood from a stone (row 13 involving the origin of the genetic code from chemicals, and row 18 involving the origin of human-like consciousness from mere matter).

The severe improbability of meeting all these requirements by chance is the reason I have entitled this blog post “Why We Shouldn't Exist: A Table of 18 Anthropic Requirements.” We can say that we shouldn't exist, in the sense that our existence seems to require an almost miraculous conspiracy of conveniences, coincidences, and fine-tuning within nature. As Stephen Hawking and Leonard Mlodinow said in their book The Grand Design (page 161): “The laws of nature form a system that is extremely fine-tuned, and very little in physical law can be altered without destroying the possibility of the development of life as we know it.”

Friday, January 17, 2014

The Origin of Life: Programmatically Predestined?

Cross-posted from www.futureandcosmos.blogspot.com
 
Let us now look at one of the great mysteries of the universe, the mystery of the origin of life, something that took place more than three billion years ago.

Some readers may be thinking along these lines: That's not such a mystery. Given a primordial soup and millions of years of time, there developed some self-replicating molecule. Once you had that, the development of everything else was just a case of things evolving from the simple to the more complex.

But such a glib explanation glosses over the great difficulties involved in explaining the origin of life on the early Earth. The fact is that there are huge difficulties in explaining how life began on our planet billions of years ago. In recent decades scientists have made relatively little progress in solving this problem.

Consider the progress of astronomy during the past 50 years. Since 1963 we have seen the discovery of the cosmic background radiation left over from the Big Bang, the discovery that the expansion of the universe is accelerating, and the discovery of more than 1000 extrasolar planets. But without doing a Google search, can you name one bit of progress that has been made in the past 50 years regarding the origin of life? You probably can't. When most of us think of scientific work on the origin of life, we think back to the Miller experiments involving amino acids, but those were done in the 1950s.

We can divide up the problem of the origin of life into three different problems: a necessary components problem, a combinatorial problem, and a computation problem.

The Necessary Components Problem

The basic units of life (below the cellular level) are things such as RNA, DNA, and proteins. Proteins are made of building blocks called amino acids. Some proteins are extremely complicated molecules built from very many amino acids. It was calculated long ago that the chance of some of these proteins forming from random combinations of amino acids is incredibly low, even given billions of years. But that's not necessarily a problem, because proteins are formed using the instructions in DNA. A DNA molecule is like a library of recipe books, with each of the recipes being a recipe for making a particular type of protein.
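To get a feel for why such calculated chances come out so low, here is a rough back-of-the-envelope count (the 150-amino-acid chain length and the single target sequence are illustrative assumptions of mine; real estimates are more nuanced, since many different sequences can yield a working protein):

```python
# Rough combinatorics behind the "incredibly low chance" claim: the number of
# possible sequences for a 150-amino-acid chain built from 20 amino acid types.

import math

chain_length = 150      # a modest-sized protein (illustrative assumption)
amino_acid_types = 20   # the standard amino acids used by life

possible_sequences = amino_acid_types ** chain_length
print(f"about 10^{int(math.log10(possible_sequences))} possible sequences")
# -> about 10^195, so hitting one particular sequence by blind shuffling is
#    hopeless even over billions of years.
```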

So if there is a mechanism for producing DNA from a chance combination of chemicals, we might have a way of explaining how all those complicated proteins came into existence. Unfortunately, DNA molecules appear to be far too complicated to have arisen from a chance combination of their constituent nucleotides (which consist of sugars, phosphates, and nitrogenous bases) without assistance from something more complicated than nucleotides.

So the current leading hypothesis is that the first self-replicating molecule was not DNA but something simpler, presumably some version of RNA. This idea is called the RNA World hypothesis. The idea is that first there was RNA, and that DNA evolved later. However, the RNA World hypothesis is on shaky ground.

One problem is the difficulty of explaining the origin of all the necessary building blocks. The table below shows the various types of building blocks. As indicated below, there are reasons for doubting that the ribose sugars, purines, and nucleotides would have existed in sufficient quantity for DNA or RNA to originate.



Components of RNA and DNA:

- Ribose sugars: Harvard science web site: "In experiments ribose could not be made at the necessary quantities that would explain its abundance on early Earth because it was highly unstable." A Wikipedia article notes that some scientists have concluded that "the backbone of the first genetic material could not have contained ribose or other sugars because of their instability."
- Phosphates
- Pyrimidines (a type of nitrogenous base): A 2009 paper (Powner et al.) suggests a possible path for abiotic origin.
- Purines (a type of nitrogenous base): More complex than pyrimidines. According to this paper, their abiotic origin is hard to explain in a way compatible with the formation of ribose sugars.
- Nucleosides (a ribose sugar combined with a pyrimidine or purine) and nucleotides (a nucleoside plus a phosphate): Wikipedia article: "No known chemical pathways for the abiogenic synthesis of nucleotides from pyrimidine nucleobases cytosine and uracil under prebiotic conditions."

Components of proteins:

- Amino acids: Found in a meteorite. The Miller-Urey experiment produced amino acids from gases and continuous electricity.




The Combinatorial Problem

The combinatorial problem is the problem of getting anything like RNA or DNA to appear from the building blocks listed above. This scientific paper by Joyce and Orgel refers to the difficulty of joining together nucleosides (a combination of a ribose sugar and a pyrimidine or purine) and nucleotides (a nucleoside plus a phosphate). The Wikipedia article on the RNA World hypothesis notes that “Joyce and Orgel further argued that nucleotides cannot link unless there is some activation of the phosphate group, whereas the only effective activating groups for this are 'totally implausible in any prebiotic scenario', particularly adenosine triphosphate.”

Well-known scientist Freeman Dyson has stated, “The results of thirty years of intensive chemical experimentation has shown that prebiotic synthesis of amino acids is easy to simulate in a reducing environment, but prebiotic synthesis of nucleotides is difficult in all environments...If it happened, it happened by some process that none of our chemists have been clever enough to reproduce.”

RNA is made of nucleotides, which are made of ribose sugar, phosphates, pyrimidines, and purines. Scientists have not been able to synthesize RNA through a simulation of the early earth, and in such simulations have not been able to make the simpler nucleotides either. As discussed in the table above, there are difficulties in assuming the availability of even some of the building blocks of the building blocks of RNA.

The Computational Problem

Perhaps the biggest problem involving the origin of life is the problem of accounting for the origin of the genetic code. The genetic code is a symbolic representation system used by all earthly life. It has been called a kind of miniature programming language. 

 The Genetic Code

It is fairly easy to explain the basics of how the code works. In the spiral staircase structure of the DNA molecule, the “steps” of the staircase are chemicals called nitrogenous bases: either purines (adenine or guanine) or pyrimidines (cytosine or thymine; in RNA, uracil takes the place of thymine). Various combinations of three of these chemicals stand for different amino acids (the building blocks of proteins). For example, if there are three consecutive “steps” in the spiral staircase, and the first is cytosine, the second adenine, and the third guanine, that stands for the amino acid glutamine. There are 64 such three-base combinations in all: 61 of them stand for particular amino acids, and the remaining 3 serve as “stop” signals. (In the diagram above, the chemicals around the four edges of the square are the amino acids.)

Imagine if you liked to write down recipes, but you needed to write down many of them on a single piece of paper. You might invent a little “recipe language” in which MK1 stands for a half a cup of milk, MK2 stands for a full cup of milk, FL1 stands for a half a cup of flour, and so forth, with a total of 64 different three-character symbols (and some other characters standing for “end of recipe”). You might then write out recipes very concisely using this little language. That's quite similar to what the genetic code does, except the recipes are stored in the DNA molecule, and the recipes are instructions for making proteins from the building blocks of amino acids.
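As a concrete illustration of this “recipe language,” here is a small Python sketch that uses a handful of entries from the standard genetic code (only 5 of the 64 codons are shown) to translate a short stretch of RNA into amino acids:

```python
# A tiny excerpt of the standard genetic code: RNA codons (triplets of bases)
# mapped to the amino acids they stand for. "*" marks a stop signal.

CODON_TABLE = {
    "AUG": "methionine (start)",
    "CAG": "glutamine",          # the example used in the text above
    "UUU": "phenylalanine",
    "GGA": "glycine",
    "UAA": "*",                  # stop: end of the protein "recipe"
}

def translate(rna):
    """Read an RNA string three bases at a time and look up each codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE.get(rna[i:i+3], "?")
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGCAGUUUGGAUAA"))
# ['methionine (start)', 'glutamine', 'phenylalanine', 'glycine']
```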

The big question is: how did this genetic code ever originate? It's hard to imagine it arising through anything like Darwinian evolution, as the genetic code seems to be required from the very beginning of biological evolution.

The genetic code can be considered an example of code, the term software developers use for the symbolic instructions they create. The baffling question is: how did nature go from chemicals to code? Code seems like something fundamentally different from chemicals, and the two seem as unrelated as an apple is to a bicycle.

The issue was highlighted by a paper by biologists J.T. Trevors and D.L. Abel:

"Peer-reviewed life-origin literature presupposes that, given enough time,
 genetic instructions arose via natural events. Thus far, no paper has provided
 a plausible mechanism for natural-process algorithm-writing...There is an
 immense gap from prebiotic chemistry and the lifeless Earth to a complex DNA  instruction set, code encryption into codonic sequences, and decryption
 (translation) into amino acid sequences...How did inanimate nature write
(1) the conceptual instructions needed to organize
metabolism?
(2) a language/operating system needed to symbolically
represent, record and replicate those instructions?
(3) a bijective coding scheme (a one-to-one correspondence
of symbol meaning) with planned redundancy
so as to reduce noise pollution between triplet codon
‘‘block code’’ symbols (‘‘bytes’’) and amino acid
symbols?
We could even add a fourth question. How did
inanimate nature design and engineer
(4) a cell [Turing machine? (Turing, 1936)] capable of
implementing those coded instructions?" -- Trevors and Abel


In this article the widely read physics professor Paul Davies has discussed other difficulties in the “code from chemicals” scenario, the assumption that the genetic code arose from some kind of chemical evolution: 

"The language of genes is digital, consisting of discrete bits, cast in the language of a four-letter alphabet. By contrast, chemical processes are continuous. Continuous variables can also process information – so-called analogue computers work that way – but less reliably than digital. Whatever chemical system spawned life, it had to feature a transition from analogue to digital. The way life manages information involves a logical structure that differs fundamentally from mere complex chemistry. Therefore chemistry alone will not explain life's origin, any more than a study of silicon, copper and plastic will explain how a computer can execute a program." -- Davies

This problem of the origin of the genetic code recently got even more difficult to explain, because scientists recently announced the discovery of a second genetic code buried in DNA. Apparently many of the triple sequences have a double-meaning. Explaining one genetic code was a nightmare -- how can we explain two of them?

A New Approach to the Origin of Life

We might get around these difficulties by imagining that the origin of life on Earth required external intervention by a divine agent or perhaps extraterrestrials. But that would raise the question: why should our ordinary little rock have deserved such a special blessing? After all, modern astronomy tells us that planets are as common as apples in an apple orchard.

A more intellectually attractive idea is the daring concept that the origin of life was programmatically predestined. We can boldly postulate that long, long before there arose the programming in the genetic code, there was a more general programming woven into the fabric of the universe, a programming that drives the evolution of the universe, causing the frequent occurrence of things that might otherwise have very little or no chance of occurring. Under such a scenario, we can think that life is appearing throughout the universe, because that is the way the universe is programmed to behave. Under such a concept, we no longer have to imagine the origin of the genetic code by supposing a farfetched case of “code from chemicals.” We can instead plausibly imagine the origin of the genetic code as a case of “code from code” – the genetic code being a product of a more general cosmic software that is influencing cosmic destiny, propelling the universe forward towards desirable outcomes.

I speak here of the theory explained in other posts on this site: the theory of a programmed material universe. For more details, see my post The Theory of a Programmed Material Universe and my post Nature's Computation Needs Imply a Programmed Material Universe.