The Hierarchy of Intelligence

Humans have the ability to design and build (create) machines in the macro world that do incredible things.  But humans cannot create life as we know it – our fingers are too big and we do not have the intelligence to design something that is as “smart” as we are.  Obviously this is an opinion of this engineer and is not shared by everyone.  But if this limitation is true, then the hierarchy shown in Table 1, and more clearly in Figure 3, exists.

Figure 3. Hierarchy of Intelligence

Figure 3 can be opened in a separate window by clicking on it.  Each box represents an entity, and each column represents a different form of entity.  The left column lists the actualizations that occur in our universe – the physical reality that we experience.  The Universe entity box surrounds the column header to indicate that these actualizations are in the realm of the Universe and follow its laws of physics.  It wraps around the human actualization processes because those actualizations occur in our universe.

We do not know where or how the “supreme actuator” actuated, so it is not shown as being “in our universe” which may or may not represent reality.

Figure 3 also assumes that the “supreme creator” created the universe and solar system.  If one does not hold that point of view, holding instead that the universe and solar system exist from natural causes, then remove the Universe and Earth creation boxes.  They have no bearing on the discussion herein and are not mentioned elsewhere on this site.  However, since we know from the proof above that life cannot be accounted for by natural causes, the create-life and human boxes must remain unless some explanation other than a superior intelligent entity can be made for our existence.

The three realms listed are Science (physics), Actualization (Actualization Processes), and Philosophy (intelligence/logic).  We already know about Science and Philosophy; Actualization is new.  It perhaps should not be considered a separate realm, but rather a combination of Science and Philosophy, hence the arrows illustrating this connection.  It exists as an artifact because intelligent manipulation of matter and energy makes it possible to create a whole new class of creations – all the kinds of entities that life, especially man, creates, and most important, life itself.

The realm of philosophy, in the form of information, logical intelligence, abstract intelligence, and supreme intelligence, is inherent in both the creators and the creations through the presence of these properties.  For this reason, color codes are used to signify levels of intelligence.  This shows that it took a supreme intelligence to create life and life with abstract intelligence, and it takes at least abstract intelligence to create entities with logical intelligence or with embedded information.

Another central theme is that all actualizations are actualized by an entity that uses machines doing intelligent work to execute a process designed by the actualizer.  The term “actualizer” is used instead of “designer” because, to an engineer, design is just one of many tasks required to create a product.

It is the view of this engineer that the term “design”, as used by those in the field of biology, covers all aspects of actualization (the process of bringing something into existence).  The design portion of creation is a mental process, which is not in the realm of physics but in the realm of philosophy.  The actual building of the “something” is physical and is in the realm of physics.  Actualization is a process that merges the realms of philosophy and physics.  This is a fundamental understanding that is missing from the arguments put forward by both the Materialists and the Intelligent Design community.

From Intelligence to Actualized Entities

Philosophers have long recognized the difference between living and nonliving things.  Living things contain much more detail and do many more things than natural causes are able to do.  However, today we are being taught that science can explain everything and that there is no need to invoke a creator to explain the existence of life.  This prospect can be disproved if one can show scientifically that natural causes cannot produce (as opposed to design) the molecules we find in living things.1  This is the approach taken here – an engineer’s way of thinking about the problem.  An engineer finds out whether his mental design, and the science behind it, are valid by the success or failure of converting the mental design to physical reality.

The Thermodynamic Impact of Intelligence

From a thermodynamic point of view, the addition of intelligence to physics carries an additional energy cost, above all other thermodynamic considerations, just to be able to do intelligent work.  This expenditure exists as long as the machines that are doing intelligent work are “running”.  The amount of energy actually consumed is a function of the design of the machine, how long it is running, and how hard it is working.  There is no general way to relate entropy change to intelligent work or to the amount of energy consumed.

In addition to the increased quantity of energy required, energy of a potential higher than the free energy of the system is required for two reasons: first, to do the intelligent work (like moving something uphill), and second, to perform the logical functionality of the machine.

There is an energy cost related to the realm of Actualization, and it should be somehow stated as a law that relates to the realms of physics and philosophy.  And perhaps there should be a term designated for the ongoing energy required to supply intelligence for an intelligent machine.

It Starts With Intelligence

The term intelligence is used differently by different people and professions.  Most people associate the term with humans – the ability to think, comprehend, and analyze.  Engineers also use the term to indicate the ability to process information, as a computer program does.  The military uses the term for information that has military value.  It is important to understand that intelligence in all its forms is not a physical entity; it is an ability to process information.  For purposes of this discussion, we need to consider different levels of intelligence, so the terminology described below is proposed.

Information

Information is not really a level of intelligence; rather, it is a product of intelligence.  It is a static, non-material abstraction (not matter, energy, space, or time) that is the product of an intelligent entity, not of science/physics.  However, information can be embedded into matter based on designed languages and/or protocols and/or codes and/or standards.  Examples include the information carried by books, tapes, CDs, computer memory, handwriting, and DNA.  These are examples where information is embedded in matter for the purpose of storing the information for some later use by an intelligent device, such as a computer, a TV set, or an audio player; in the case of DNA, the embedded information is needed to run the cell’s life process.  Information of this sort can only be created by an abstract or supreme intelligent entity.
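To make the idea of a designed code concrete, the sketch below (in Python, with only a handful of the 64 codons included for brevity) reads a short stretch of mRNA-style text three letters at a time, the way the standard genetic code maps nucleotide triplets to amino acids.  It is only an illustration of information embedded in matter via a code, not a model of the cellular machinery itself.

# Sketch: reading information "embedded" in a nucleotide sequence.
# The standard genetic code maps codons (nucleotide triplets) to amino acids;
# only a handful of the 64 entries are shown here for brevity.
GENETIC_CODE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Decode an mRNA string three letters at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = GENETIC_CODE.get(mrna[i:i + 3], "?")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCUAA"))  # -> ['Met', 'Phe', 'Gly']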

But information can also be embedded in matter to provide a function other than storing the information.  Matter can be manipulated to form art or a tool.  Examples are paintings, sculptures, jewelry, hammers, nails, pliers, windows, passive electronic parts, birds’ nests, and beehives.  Items that fall into this category of actualizations can be actualized by entities that have an abstract or logical level of intelligence, as illustrated in Figure 1.

Logical Intelligence

Logical intelligence is the ability to make decisions based on conditions or circumstances.  The simplest form is a binary switch.  An example would be a switch that is off when there is sufficient light and turns on when there is insufficient light.  A logical device must have a means to receive an informational input signal (in this case a light sensor) and an output (in this case a light bulb and power source).  A characteristic of any logical device is that it requires energy to perform the logical function – it takes energy to operate the light sensor and to flip the light switch in this example.
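A minimal sketch of such a binary logical device is shown below, assuming a hypothetical light reading in lux and an arbitrary threshold.  The decision itself is a single comparison; the point is that sensing the light and driving the switch both consume energy beyond what the lamp itself uses.

# Sketch of the simplest logical device described above: a light-operated switch.
# The light level and threshold are hypothetical stand-ins for a real sensor setup.
THRESHOLD_LUX = 50.0  # arbitrary threshold separating "sufficient" from "insufficient" light

def lamp_should_be_on(light_level_lux):
    # Insufficient light -> close the switch; sufficient light -> open it.
    # Reading the sensor and driving the switch both consume energy,
    # over and above whatever the lamp itself draws.
    return light_level_lux < THRESHOLD_LUX

print(lamp_should_be_on(10.0))   # True  (dark: turn the lamp on)
print(lamp_should_be_on(300.0))  # False (bright: leave the lamp off)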

All machines have some sort of embedded logic.  Think of an engine: the shapes of the crankshaft, camshaft, and valves, together with the timing of the ignition spark, coordinate the component parts so that they are in the proper position to make the engine work.  But it requires an abstract intelligence to design the engine, as shown in Figure 3.

Abstract Intelligence

Abstraction is the ability to think, to invent, and to deal with ideas.  This capability goes beyond the logical processing of information.  It is a capability that is unique to humans among life on earth.  Even though humans have this capability, it is the opinion of this engineer that we will not be able to duplicate it in machines of our design unless we can “reverse engineer” this capability in ourselves.

© 2016 Mike Van Schoiack

Life Is a Process

The video showing the building of a protein molecule resembles a robotic assembly line in a factory.  Both are examples of a process control system designed to build a product.  The company I founded, Vehicle Monitor Corporation (VMC), designs and builds systems that orchestrate the work that takes place at vehicle assembly line stations.  This work is a process whereby, at each workstation, value is added by completing one of the assembly line processes.  A typical example is adding a part, testing to verify proper function with no damage, and measuring and documenting the work done.

The factories that VMC serves are very similar, from a functionality point of view, to the protein synthesis illustrated in Steve Meyer’s video.  Transcribing the DNA to mRNA and the transport of this mRNA to the ribosome are analogous to the delivery of the custom-specific details to the factory workstation with the arrival of the vehicle.  The translation process that occurs in the ribosome is analogous to the value-add that occurs at a factory workstation.  Both need the right piece parts delivered just in time.1  Transporting the protein molecule from the ribosome to the chaperone is analogous to a vehicle being moved from one workstation to the next.  The folding of the protein in the chaperone is analogous to the vehicle getting its adjustments and tune-up at the last workstation, both leaving out the back door.

A process is defined as “a systematic series of actions directed to some end”.  Watching this video of the ribosome in action leaves no doubt that protein synthesis meets this definition.  As with the factory, protein synthesis involves work performed at specified places.  The activity at each workstation involves many individual actions, such as picking a specified piece, placing it at a specified location, and bonding it into place.  Each of these actions is purpose-driven work which, if not executed properly, will result in failure.  Both examples need parts to be delivered to each workstation, and the product must be transported from one workstation to the next.  Both involve specific instructions for each product that comes down the production line.
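A toy sketch of one such workstation process is given below (Python, with all names and checks invented purely for illustration): a fixed sequence of purpose-driven actions, any one of which, if not executed properly, fails the product.

# Toy sketch of one assembly-line workstation, as described above: a fixed
# sequence of purpose-driven actions, any one of which can fail the product.
# All function and field names are invented for illustration.

def run_workstation(product, part_id):
    steps = [
        ("pick part",  lambda: part_id in product["kit"]),   # right piece delivered?
        ("place part", lambda: product["slots_free"] > 0),   # specified location available?
        ("bond part",  lambda: product["power_ok"]),         # energy available to do the work?
        ("test",       lambda: not product["damaged"]),      # verify function, no damage
    ]
    for name, action in steps:
        if not action():
            product["log"].append("FAIL at '%s'" % name)
            return False
        product["log"].append("done: %s" % name)
    product["log"].append("documented: installed %s" % part_id)  # measure and document the work
    return True

vehicle = {"kit": {"sensor-A"}, "slots_free": 1, "power_ok": True,
           "damaged": False, "log": []}
print(run_workstation(vehicle, "sensor-A"))  # True
print(vehicle["log"])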

Much detail that is obvious to an engineer who has had to make complex systems work is missing from the video.  The nucleotide and amino acid molecules just “fly into place”.  In reality, these molecules must be pre-assembled, staged, and delivered at the correct time and position without the expenditure of excess energy that would damage the machines, the polymer, or the surrounding molecules.

The fact that life is a process is not disputed.  My wife’s circa-1960 college biology textbook2 defines life as “...a process, or rather a series of interacting processes which are always associated with, and take place in, a complex organization of materials.”  What is missing in this definition is the distinction between a natural process and an intelligent process.  This same book has no direct mention of molecular machines, making it obvious that much has been learned in the last 50 years.3

The world of biology now recognizes that there are molecular machines4 working in the cell, performing tasks such as the building of proteins by the ribosome machine.

“The ribosome is an “RNA machine” that “involves more than 300 proteins and RNAs” to form a complex where messenger RNA is translated into protein, thereby playing a crucial role in protein synthesis in the cell. Craig Venter, a leader in genomics and the Human Genome Project, has called the ribosome “an incredibly beautiful complex entity” which requires a “minimum for the ribosome about 53 proteins and 3 polynucleotides,” leading some evolutionist biologists to fear that it may be irreducibly complex.”5

In addition to processes that occur on an “as needed” basis, like replacing a protein that has fallen to equilibrium and is no longer functional, there are “life cycle” processes that take place as shown in Figure 2.  Processes are typically layered, bottom-up and top-down, with smaller individual processes serving as steps or supporting actions required to accomplish the overall process.

It seems obvious that biology is not just about chemistry; it is also about machines working in life processes.  Life in every cell that has ever existed, in every life form that has ever lived on earth, and all of the interactions among life forms, is what this engineer thinks of as The Life Process.  It is a hierarchy of processes over time, within cells and within life forms.

 

© 2016 Mike Van Schoiack

Fake Physics

Page 552 of my sophomore physics textbook reads: “By means of the statistical definition of entropy, Eq. 25-13, we can give meaning to the entropy of a system in a non-equilibrium state. A non-equilibrium state has a definite entropy because it has a definite degree of disorder. Therefore, the second law can be put on a statistical basis, for the direction in which natural processes take place (toward higher entropy) is determined by the laws of probability (toward a more probable state). From this point of view a violation of the second law, strictly speaking, is not an impossibility. If we waited long enough, for example, we might find water in a pond suddenly freeze over on a hot summer day. Such an occurrence is possible, but the probability of it happening, when computed, turns out to be incredibly small; for this to happen once would take on the average a time of the order of 10^10 times the age of the universe. Hence, the second law of thermodynamics shows us the most probable course of events, not the only possible ones. But its area of application is so broad and the chance of nature’s contradicting it is so small that it occupies the distinction of being one of the most useful and general laws in all science.”
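To make the textbook’s claim concrete, here is a rough sketch of the kind of fluctuation estimate it alludes to, using an assumed pond mass and ambient temperature (the numbers are mine, not the book’s); the discussion below disputes whether such an estimate is meaningful at all.

import math

# Rough sketch of the kind of fluctuation estimate the textbook alludes to.
# Pond mass and ambient temperature are assumed for illustration only.
K_B   = 1.38e-23      # Boltzmann constant, J/K
C_W   = 4186.0        # specific heat of water, J/(kg*K)
L_F   = 3.34e5        # latent heat of fusion of water, J/kg
MASS  = 3.7e7         # assumed pond mass, kg (roughly 100 m x 100 m x 3.7 m of water)
T_AMB = 294.0         # about 70 degrees F, in kelvin
T_FRZ = 273.0

# Heat that would have to leave the water to cool it to 0 C and freeze it.
heat_out = MASS * (C_W * (T_AMB - T_FRZ) + L_F)   # joules

# Entropy the pond would have to shed; in the statistical picture the book uses,
# the fluctuation probability scales roughly as exp(-delta_S / k_B).
delta_S = heat_out / T_AMB                         # J/K, order of magnitude only
exponent = delta_S / K_B
print("delta_S ~ %.2e J/K, probability ~ 10^(-%.1e)" % (delta_S, exponent / math.log(10)))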

That was almost 60 years ago, and I remember thinking at the time that this was false – it didn’t make common sense.  But more significantly, equation 25-14 discussed the statistical variation of the velocity (think momentum) of the molecules.  The molecules of water in a lake are exchanging momentum through collisions with each other.  By the first law of thermodynamics (conservation of energy), this momentum (the temperature of the water) has to be conserved.  Therefore the average temperature of the lake would be constant except for the slow heating/cooling due to energy exchange with its environment.  In other words, this statistical improbability holds when looking at just a few molecules but is not valid for the average of all the molecules as a function of time.  So the probability is zero, not some crazy low number.

Even if the probability curve represented the instantaneous temperature of the macro object (pond), the changes in temperature would be happening so fast that one would be unaware of them. The lake would still have the same average temperature.  In other words, the freezing events would be so fleeting that one would be unaware of them, and this would be true at both the micro and macro levels.

While in college, common sense told me that this could not be true, and the first-law-of-thermodynamics rationale answered the question for me.  Later, I heard of the thought experiment of reversing time while looking at just a few molecules: one would not be able to tell whether time was going forward or backward.  If you watched a video of a tornado ripping a house apart, then ran it backward, you would know which direction the video was running.  But if you had a video of just a few molecules of some piece of wood in the house, you would not know which direction the video was playing – it would just be some molecules bouncing around.  The point is that at the micro level, all energy is conserved as far as the observer can tell.  It is at the macro scale that potential energy wastes away.  In other words, one cannot always tell what is happening at the macro level by looking at the micro level.  The physics textbook made a statement that is not even true at the micro level, because even if a system of a few molecules instantaneously “froze”, it would be “unfrozen” instantaneously as well.

The first and second laws of thermodynamics, taken together, provide, at least in this case, a theoretical reason that the macro-reality we experience is not problematic.  And the direction of time (the video running backward) is an excellent way to visualize why natural causes reduce specified complexity (increase entropy) as well.

Think about how we experience a pond freezing.  The freezing process starts as heat exchange between the water and its boundaries due to temperature gradients.  This conduction process occurs at different rates depending upon the details of the temperature gradients and the conductivity at various positions along the edges of the lake, eventually causing the first water molecules to freeze.  More molecules then freeze onto these.  Other freeze points appear at the borders.  The freezing expands, meeting other expanding frozen areas, eventually freezing the whole surface of the lake, assuming enough time and cold weather.  In all cases, thermal equilibrium dictates when the process stops or reverses.  Bottom line: lake freezing is a natural process that takes time; it is not an event.  If one looks at tiny volumes in the pond that consist of just a few molecules (like 2-10), then yes, there is a probability that at some instant in time one will find an average momentum equivalent to that of frozen water.  But the boundaries of these tiny volumes are other tiny volumes at a higher temperature.  Make the volume bigger and there is a smaller instantaneous ΔT.  Make the volumes the size of a teaspoon, and the instantaneous ΔTs would probably be so small that they would not be measurable, for two reasons:

  1. The variation would occur faster than any means of measurement could capture, and measuring it is probably impossible due to Heisenberg’s uncertainty principle.
  2. It would be a violation of the first law of thermodynamics – that heat energy is conserved.

Such an event would require the immediate release of a massive amount of energy, like an explosion.  By my calculation, if the pond were the size of Walton’s pond and the ambient temperature were 70 degrees Fahrenheit, the blast would be about ¼ the size of the atomic bomb detonated over Hiroshima.
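For the curious, a back-of-envelope version of that comparison is sketched below.  The pond dimensions are assumed (roughly 100 m by 100 m by a few meters deep, the same assumed mass as in the earlier sketch), since they are not stated here, and the Hiroshima yield is taken as roughly 15 kilotons of TNT; with those assumptions the ratio does come out near one quarter.

# Back-of-envelope check of the comparison above; pond dimensions are assumed,
# since they are not stated in the text.
C_W, L_F = 4186.0, 3.34e5   # specific heat J/(kg*K), latent heat of fusion J/kg
MASS = 3.7e7                # assumed pond mass, kg
DELTA_T = 21.0              # 70 F ambient down to 32 F freezing, about 21 K
HIROSHIMA_J = 6.3e13        # ~15 kilotons of TNT, in joules

# Heat that would be dumped into the surroundings if the pond froze all at once.
energy_released = MASS * (C_W * DELTA_T + L_F)
print("%.2e J, about %.2f of the Hiroshima yield" % (energy_released, energy_released / HIROSHIMA_J))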

Treating creations as being the result of an event leads to a belief that there is a finite probability of the event occurring.  The truth is different: all natural macro processes must move toward a reduction of free energy.  To achieve a deterministic raising of free energy requires the use of machines performing specified, deterministic work, or of mechanisms created by machines, that convert energy forms to raise local free energy.
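A textbook-level biochemical illustration of this point: forming a peptide bond is endergonic on its own and proceeds in the cell only because the machinery couples it to the hydrolysis of energy-rich molecules such as ATP and GTP.  The sketch below uses commonly quoted approximate standard free energies; the exact values vary with conditions and are illustrative only.

# Sketch of why an endergonic step needs coupling to an exergonic one.
# Values are commonly quoted approximate standard free energies (kJ/mol);
# treat them as illustrative, not exact.
DG_PEPTIDE_BOND = +15.0    # forming one peptide bond, roughly +8 to +20 kJ/mol
DG_ATP_HYDROLYSIS = -30.5  # hydrolysing one ATP (or GTP) to ADP + Pi

def net_free_energy(n_coupled_hydrolyses):
    """Overall delta-G when the bond-forming step is coupled to n hydrolyses."""
    return DG_PEPTIDE_BOND + n_coupled_hydrolyses * DG_ATP_HYDROLYSIS

print(net_free_energy(0))  # +15.0 kJ/mol -> does not proceed on its own
print(net_free_energy(1))  # -15.5 kJ/mol -> proceeds once coupled by the cellular machinery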

Machines, as I define them, will not function without matter/energy held in an away-from-equilibrium condition.  Therefore they cannot be created by natural causes.  This claim includes machines in life and is falsifiable.  This thought process is the basis for the “Fake Biology” post.

Calling my physics book “fake physics” is going overboard.  I’ve always had high regard for this textbook (two books, Part I & Part II).  I never parted with them because they have been valuable references for my engineering career. I believe this to be an error of completeness of thought, which can result in false conclusions.  There is another possibility – I may be wrong.  If I am, I hope someone will explain why.

© 2016 Mike Van Schoiack

Our Fingers Are Too Big

The field of electronics deals with devices in the nm range, which is the same size range as proteins.  But there are several differences between ultra-small electronic structures and the molecules of life.  Electronic chips are layered, two-dimensional, static devices, whereas the molecules of life, in the living condition, move around in all three dimensions.  The other big difference is that even though the electronic structures at these dimensions are hard to see, the designs normally supply test points that allow the engineer to discern what is happening at these small dimensions by electrical measurements.  The point is that we do not have the means to track exactly what is happening inside the cell because “our fingers are too large” and we cannot see or probe inside the cell walls to “reverse engineer” the life process.

The assumption is that much of the structure of the molecules in the cell is discerned by scanning electron microscopes, which, I think, only work with dead biological material.  This would be very limiting, it would seem.  However, there appear to be other instruments that claim “high-speed imaging, in vivo” capability, such as multiphoton laser scanning microscopes and, more recently, a new sub-optical phonon (sound) imaging technique.  An inherent problem is that the objects of interest are as small as or smaller than the wavelengths of the observation radiation, and the energy involved can cause damage.  It is hard to tell how detailed an understanding of the processes going on inside the cell can be gained from the tools available today.
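One way to put the wavelength-versus-damage trade-off in rough numbers: a photon’s energy is E = hc/λ, so radiation with a wavelength short enough to resolve a few-nanometre protein carries hundreds of electron-volts or more, while a typical covalent bond holds only a few electron-volts.  The sketch below is just this arithmetic, nothing more.

# Rough numbers behind the wavelength-versus-damage trade-off described above.
H = 6.626e-34   # Planck constant, J*s
C = 3.0e8       # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    return H * C / wavelength_m / EV

print(photon_energy_ev(500e-9))  # visible light, ~2.5 eV: wavelength far too long to resolve a ~5 nm protein
print(photon_energy_ev(1e-9))    # ~1 nm X-ray, ~1240 eV: resolves it, but far above the
                                 # few-eV strength of a covalent bond, hence radiation damage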

However, the bio-medical field is doing amazing things involving manipulation of the genome.  This engineer is clueless about how these feats are accomplished.  The assumption is that researchers are learning how to use the cellular machinery to achieve their own purposes.  This is a scary prospect, as it is obvious that the overall understanding of the process control system in the cell is very limited.  In addition, the lack of knowledge of process control in general leads to the prospect of not appreciating the probable interrelationships between the numerous control loops that must exist in the cell.  This could lead to unintended consequences, some of which could be very subtle.

It would seem to this engineer that the best possibility for learning about the process control system in the cell is by indirect means: using the techniques of DNA manipulation and correlating the changes made with changes in cell operation.  Understanding life, it seems, depends heavily upon gaining an in-depth knowledge of this control system.

© 2016 Mike Van Schoiack

Fake Biology

“Fake” Molecular Cell Biology

This is a book recommended by my Ph.D. biologist friend for learning about the cell’s mechanisms.  I’ve learned a lot.  One overall impression is the sheer volume of things going on.  But I could not believe my eyes when I read the following, which is part of the introduction to Chapter 2:

“About 7 percent of the weight of living matter is composed of inorganic ions and small molecules such as nucleotides (the building blocks of DNA and RNA), amino acids (the building blocks of proteins), and sugars.  All these small molecules can be chemically synthesized in the laboratory.  The principal cellular macro-molecules – DNA, RNA and protein – comprise the remainder of living matter.  Like the small molecules found in cells, these very large molecules follow the general rules of chemistry and can be chemically synthesized.”  Then further down the paragraph, it says this:

“The realization that complex processes such as evolution, development of an organism from a fertilized egg, motion, perception, and thought follow the rules of chemistry and physics – and that no vital or supernatural force is involved – has profound philosophical and even political implications.”

The book defines an enzyme as: “a biological macro-molecule that acts as a catalyst” and a catalyst as: “A substance that increases the rate of a chemical reaction without undergoing a permanent change in its structure.  Enzymes are protein catalysts.”

These definitions assume that the work of the molecular machinery is chemistry, the result of natural causes.  There is no recognition of the fact that machine work is fundamentally different from natural work.  The chemically synthesized molecules are the result of careful process control, not of unguided natural causes.  In addition, the specificity required for at least some of the functional proteins is beyond the reach of lab synthesis, as far as I can tell from the literature.

In addition, it seems that some variation of the word evolution, or some reference to evolution, appears on every page.  There is no doubt in my mind that part of the “knowledge” this book is conveying is materialist doctrine without supporting evidence.

For these reasons, I feel this book deserves the “Fake” label in the title.

© 2016 Mike Van Schoiack