On Organismic Computation
This is introductory, so I skip over a lot of very interesting questions, which I have every intention of covering in detail later.
Here I wish to introduce a new theory of computation, which relies heavily on biological and cybernetic intuitions. It in no way erases Turing theory (theory of Turing machines), but offers another branch from the same tree. Right now, its connection with Turing theory is not completely clear. It is not a priori contradictory to suppose that some living organism may have computational power equivalent to a Turing machine. It may be that a framework will be useful which distinguishes biological information from classical information. In that case, one could just as well have classical Turing machines as quantum and biological Turing machines. This piece already hints at how such a concept might be used.
However, it should seem obvious that in most cases, living things are nothing like the mathematical objects laid out by Turing. First, they cannot expand themselves arbitrarily in the service of a single computation; they are subject to economy. Also, as one observes animals continuously navigating the Earth through time, discrete "input" and "output" phenomena are not present except perhaps in social animals. Indeed, to the animal itself, the only thing distinguishing "input" from "output" appears to be intentionality. The traditional account is that one passively receives input, while one actively and rationally achieves output.
Allowing that some actions and behaviors are not entirely intended or "enacted", the picture becomes muddier. Also, any tangible output is also part of the input; feeling muscles exert is a result of exerting them. Hence it is only by an animal's own distinction between "inside" and "out" that it seems coherent to think of oneself as a computational device at all, if one's intuition of "computation" is guided by Turing theory. (Don't take me to mean that only animals "compute" - they are only the most viscerally obvious example).
Presumably, this border between organism and environment is only explicitly given to the organism itself, i.e., it perceives the border more clearly than any other. If "input" and "output" depend on the situation of this border, it seems to me then that no theory which gives only a first-person (phenomenological) account of "input" and "output" can distinguish individual organisms' computations from cooperative computations undertaken by multiple individuals. That is, to each organism, it would symmetrically seem that the other is just a part of the environment feeding it input. The communicating parts should be in some sense "aware" of one another if they are to truly cooperate.
So if we're gonna make this work, we need to expand the framework.
The first step I'd like to take is following David Deutsch in conceiving of computations as consisting of arbitrary physical processes/motions. The paradigm is this: One prepares the computational apparatus, a physical system, in a state (along with input), and allows it to come to "rest," i.e. achieve some stationarity. If one can establish a correspondence between the set of input states with the domain of some function F, and if the "final state" resulting from each input corresponds to F(input), the apparatus can be said to compute F.
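As a toy illustration of this paradigm (my own invented example, not a construction from the text): consider an overdamped particle in the double-well potential V(x) = (x^2 - 1)^2 / 4. Prepared at any nonzero position and allowed to come to rest, it settles at -1 or +1, so the apparatus can be said to compute F(x) = sign(x).

```python
# A toy "physical computation": let a system relax to rest and read off the
# answer.  An overdamped particle in V(x) = (x^2 - 1)^2 / 4 slides downhill
# to x = -1 or x = +1; identifying starting positions with inputs and rest
# positions with outputs, the apparatus computes F(x) = sign(x).

def relax(x0, eta=0.1, steps=2000):
    """Follow the overdamped dynamics x' = -dV/dx until (approximate) rest."""
    x = x0
    for _ in range(steps):
        x -= eta * x * (x**2 - 1)   # -dV/dx for V = (x^2 - 1)^2 / 4
    return round(x, 6)

print(relax(0.5))    # settles near +1.0
print(relax(-0.3))   # settles near -1.0
```

Note that exactly at x = 0 the particle sits on an unstable equilibrium and never settles into either well, a small preview of the halting ambiguities discussed later.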
Quick remark. The Church-Turing thesis in this setting becomes "Every physical process can be simulated by some other physical process," which is quite a thought-provoking idea, but a bit to the side of my goal here.
Physical automata can be defined on this basis. At the very least, if a physical "theory of everything" is mathematically possible, we suspect their "transition functions" could be given by the theory. Theories we have now illuminate this. If one had a system that could be relied upon to behave classically, its evolution would be given by the equations of motion. A "pure" quantum system would evolve according to its "rules," whether given by Schrödinger time evolution or something else [e.g. in theories of quantum thermodynamics in which entropy increase is the "rule"].
Here metaphysics enters the picture a little too strongly to ignore entirely. Idealists may find the rest contentious, but materialists won't find cause to object. I treat the mind as if it consisted solely of physical processes/motions. An idealist theory is probably possible, and would interest me greatly. Perhaps a dualist theory would have elements of each.
Leaving philosophy behind, a physical Turing machine is achieved by simply expanding the above automaton to include expandable storage. Lord knows how you'd build one. But it's easy to define and imagine. In fact, the chief engineering difficulty would be getting it to run itself; if it has human help, almost any place in the universe can be turned into a Turing machine: just write on it! However, my claim is not that any organism is a physical Turing machine, i.e. that its life constitutes one, but I do take certain similarities as my starting point for a theory of organismic computation.
A finite automaton's evolution, or the output of a Turing machine, can be computed by anyone provided a complete set of transition functions, which determine the "motion" of the machine. It doesn't matter if the word "motion" is metaphysically precise, only that the system presents itself as having changed. Quantum theory doesn't allow us to ascribe arbitrarily precise "states" to a system in terms of observable quantities, i.e. points in a Euclidean phase space, but such descriptions often suffice for very large systems. Taking that into account, I prefer to use "motion" in a way that remains vague, to describe qualities of observed systems rather than denoting the use of any particular mechanical theory.
Now we're ready. The basic idea is that any repeatable motion can be, in principle, used to help decide something. Motions need not be reversible - they must be reproducible in the same sense as a scientific experiment or DNA sequence. Given a decent knowledge of possible motions, all that's needed to compute is some way of acting on whatever it is that's moving, in order to "start" it from a desired position.
Our seed will be intuition about "performing actions." Expanding my own intuition, I find a few key characteristics. Of course, this is a potential branching point from which alternate theories might grow.
0) Actions are possible motions of a whole. This is to say they are physical processes thought of as being "contained" in a name-able, spatially bounded place.
1) Actions can be referred to by names, whether or not they can be related in detail through writing. On this basis we can say we have at our command a structured collection of actions. (Which can be studied mathematically, presumably). If one were navigating a written flow chart while making a decision, certain actions are obviously possible: Go back; Go back and change answer; Reset. We can imagine it working the same if the person had only a mental image of the diagram, rather than a written copy.
2) Actions modify the present in its entirety. Specifically, they may even change knowledge of themselves. One can forget what one just did to get here, even departing from the original idea altogether. This has the consequence that actions need not be functionally reversible even if one has complete control of the system, regardless of thermodynamic considerations. The obvious example is erasing a white board that contained necessary information that no one present remembers.
3) Actions work analogously to mathematical functions. They transform what is present, and would allow for a reproduced state to transform the same way any number of times. I'll speak on this below.
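Point (1)'s flow-chart navigation can be sketched concretely. The chart, its questions, and the class below are invented purely for illustration; the point is that the traversal actions form a small, name-able collection.

```python
# A minimal navigable decision chart whose traversal actions -- answer,
# go back, reset -- form a small, structured collection of named actions.

class FlowChart:
    def __init__(self, questions):
        self.questions = questions   # list of yes/no questions
        self.history = []            # answers given so far

    def answer(self, yes):
        self.history.append(yes)

    def go_back(self):
        if self.history:
            self.history.pop()

    def reset(self):
        self.history.clear()

    def position(self):
        return len(self.history)     # which question we now stand at

chart = FlowChart(["Is the way blocked?", "Is there another route?"])
chart.answer(True)
chart.answer(False)
chart.go_back()          # "Go back and change answer" = go_back + answer
chart.answer(True)
print(chart.history)     # [True, True]
```

The same actions work whether the chart is written on paper or held only as a mental image; the class merely makes the collection explicit.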
A word is necessary on the characteristics of biological "states." I'd argue that organisms are necessarily uncertain of the configuration/motion of the material which constitutes them. It's conceivable that some microscopic part could be re-arranged independently of all sensory apparatus as well as of all mental computations. Either "bodily state" could correspond to the same quality of existence. This recalls an idea from classical thermodynamics: the principle that many microstates may correspond to the same macrostate.
It'd seem that tiny bits nevertheless add up to something. Just by experiencing the moment and performing measurements on the organism, it is impossible to detail its constitution in terms of imperceptible parts. (I could conjecture an information theoretic theorem applicable to arbitrary systems which can be considered "aware," or even "organized," but I'll leave that for another time.)
This uncertainty has the consequence that we're never sure exactly what we're acting on. So even though we suppose actions work essentially by transitioning where we are into where we're going, that principle isn't enough to control yourself with arbitrary accuracy, since you can't know your departure or destination precisely. I've made no mention of quantum uncertainty.
Even though actions can in theory be symbolized, I'm assuming they are known qualitatively. So, applying the last paragraph, of course we're never sure exactly "which" action we're performing; microscopic prescription eludes us as well as description. So we never know exactly where we are or what we should be doing, but if the phenomena of my "actions" are in any way tied to mechanics (and I think they must be), we can assume (hope?) that some mechanical theory could give a more precise account of a particular "action" (whether or not we know that theory now!).
So I think it's safe to assume that actions "do something," whether or not we know exactly what it is. That something may not be deterministic in the sense that an action produces the same outcome from a given "input" every time, but we can be sure to find a pattern in the outcomes. (Even if the pattern appears "random" and unordered, its disorder can be described.)
Recalling that mathematical functions serve to abstractly link two structures (domain and range), it seems that actions, at least at a theoretical level, play the same role as a function on the "state space" of the organism's computational workspace.
The epistemology of actions and their "mathematicity" should definitely be explored. But not now. Onward...!
Now you can see that actions can be allowed to play the role of acceptable input symbols to an automaton or Turing machine. A family of transition functions d(s,A) could equivalently describe all possible motions. The alphabet of actions represents abilities and limitations for computation.
Note that "what is acted on" also influences what can be computed. Suppose you have a large supply of pencils and paper. Now suppose instead you have only a beach of sand to write on with a stick, which can be counted on being erased by the waves "forever." Whatever could be computed by someone (e.g. solving a mathematical problem according to a precise theory) with the aid of these slates, the consistent loss of information on the beach would surely have some effect. On the other hand, if the paper or pencils run out, and one has no means to acquire more, there is an upper bound on how much info can be processed with it (assuming a minimum readable symbol size). It is not clear whether either has "greater" power a priori (remember, there is no general hierarchy of computational devices in terms of power). Locally, a computer sees no difference between the slates (factoring out bodily mechanics!); writing out a few lines of an equation for instance, before any waves come or supplies run out.
I'd rather express the above axioms more clearly, specifically in reference to the body of an organism and its living state.
1) Actions operate on what is present, to produce patterns of behavior. Actions on a bodily state are repeatable if the bodily state can be reproduced.
2) If the body were to return to a previous state and perform a known action repeatedly, a pattern would emerge in the resulting processions of states. I'm using the word "pattern" in a broad enough way that even "no pattern" or utter randomness counts. If it can be perceived, if it can be known, it can be studied and described, at the very least in one's own internal language. [Actions are "mathematic" in a broad sense]
3) This pattern is the basis for symbolizing an action by some letter, e.g. "A," or by words deemed appropriate (such as the Hamiltonian operator in quantum mechanics). Likewise, the ability to recognize states of self justifies symbolizing them. We can thus make formal statements like A(σ) = pattern. Here "A" acts on σ, an organismic state.
Formalism can at times be useful, and I propose it only as a potential tool. We can consider the algebraic properties as well as spelling and grammar of various alphabets.
A(B(σ)). This is an action applied to a pattern. We should interpret it as a pattern of patterns: the action A applied to any state which follows the pattern B(σ). Concatenating actions with complex behavior seems to exponentially increase the trajectories one must consider. Seems like fertile ground for mathematics. Beautiful harmonies may lie within even simple alphabets of actions.
For instance, any cycle would surely be of great interest, if its factors varied widely and unpredictably in their behavior: A(B(C(D(E(F(σ)))))) = F(σ). We can imagine information being created and destroyed arbitrarily, yet in some sense still being "preserved" by virtue of the structure of the system's behaviors. Is that crazy? Of course, this apparent preservation could be considered a coincidence. However, it does suggest a scheme in which actions for keeping a specific piece of information can be encoded concisely, if indeed such cycles happen at all.
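Such a cycle can be engineered in a toy model. The actions below are invented so that the composite A(B(C(D(E(.))))) is the identity on states 0..6; a real organismic cycle would presumably not be designed this way, but the "preservation" phenomenon is the same.

```python
# A toy cycle on states s in {0,...,6}: information is shifted and reflected
# along the way, yet the composite of A through E restores whatever F produced.

F = lambda s: (3 * s) % 7        # the action whose result is "preserved"
E = lambda s: (s + 4) % 7
D = lambda s: (s + 2) % 7        # E, then D, then C shift by 7 = 0 (mod 7)
C = lambda s: (s + 1) % 7
B = lambda s: (6 - s) % 7        # a reflection...
A = lambda s: (6 - s) % 7        # ...undone by a second reflection

for s in range(7):
    assert A(B(C(D(E(F(s)))))) == F(s)
print("cycle holds for every state")
```

Each intermediate state differs from F(σ), so the information is genuinely "destroyed" along the way and recovered only by the structure of the whole cycle.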
The "=" identifies results, not processes, so that the above equation cannot be interpreted to mean that doing F feels the same as doing F, then E then D and so on, just that doing those would result in the same long term pattern.
Another interesting possibility is that the action one performs changes imperceptibly. After a long enough time the result may be perceptible (like a big swimming pool with a slow leak.) Letting A^n = B, supposing the drift to be uniform, we can go back and differentiate the results of the various A^k's (provided we made some decent record of the throughput during the drift).
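A sketch of this drift, under my own assumption that each performance of an action intended as "add 1" picks up a uniform increment eps: any single use is imperceptibly off, but a record of the throughput makes the accumulated drift visible.

```python
# Imperceptible, uniform drift in a repeated action: each performance adds a
# tiny increment eps, invisible in one use but recoverable from records.

eps = 1e-3                       # per-use drift (an invented rate)

def drifted_action(x, k):
    """The k-th performance of an action intended to be x -> x + 1."""
    return x + 1 + k * eps

x, log = 0.0, []
for k in range(1000):            # apply the action a thousand times, keeping records
    x = drifted_action(x, k)
    log.append(x)

ideal = 1000.0                   # where the undrifted action would have left us
print(round(x - ideal, 3))       # accumulated drift, visible in the log
```

With records of the throughput, the results of the various A^k's can be differentiated after the fact, just as described above.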
The formalism allows a distinction to be drawn between actions which historically led to a similar outcome, and compositions of historical parts into historical wholes and decompositions of wholes into parts:
A(s) = B(s) for any particular s is distinguished from A(x) = B(x) for every possible input x. As you can see, this expresses neatly something that is awkward in ordinary English, so at the very least the formalism is efficient. In particular, a decomposition such as A(x) = C(B(x)) that holds for every x in some "problem space" denotes knowledge of how to stop and resume A. It would suffice to record B(x), then to reproduce this state later and enact C.
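Stop-and-resume via decomposition can be sketched with invented stages (sorting as B, summing adjacent pairs as C): record B(x) somewhere durable, then later reproduce that state and enact C.

```python
# Stop-and-resume reading of A(x) = C(B(x)): record the intermediate state
# B(x), then later reproduce it and enact C.  The stages are toy examples.

import json

def B(x):
    return sorted(x)                           # first stage of the whole action A

def C(x):
    return [a + b for a, b in zip(x, x[1:])]   # second stage

def A(x):
    return C(B(x))                             # the undecomposed whole

x = [3, 1, 2]
checkpoint = json.dumps(B(x))                  # stop: record B(x) durably
resumed = C(json.loads(checkpoint))            # later: reproduce the state, enact C
assert resumed == A(x)
print(resumed)                                 # [3, 5]
```

The equation holding for every x in the problem space is exactly what licenses trusting the checkpoint: any state matching the record resumes correctly.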
This gives us a nice way to think of actions as wholes constituted of parts. It's hard to say how finely we could in principle refine the resolution of a sequence... it seems this would serve as a measure of how "automatic" the action feels. If an action can't be decomposed, its inner workings must remain entirely mysterious. One simply does it. For me, adding 1 to a number is a bit like this. That automatic-ness can be taken as an axiom, as in Peano arithmetic.
Compared to machine-like computations, the most glaring characteristic of living computations is that they don't start and stop discretely. Life consists of unified, continuous operation. The closest analog to "halting" is an apparent lack of motion (along with some idea that the system won't spontaneously leap into motion). One must decide what to make of the throughput as one goes.
Organismic computation may depend on a living, thinking, feeling subject to "run," so long as it's aware of itself. This means that we can "call" functions involving intuition, preference, quality. Any intuition an animal has can in principle be relied on to influence a decision, whether or not it can be codified as a written rule or simulated on a deterministic machine. This should be developed much more.
"Programs" on such devices therefore can be written entirely in "pseudo-code," translated according to the organism's ability. Any confusion about how a program should run is somewhat analogous to an error in compiling.
The success of spoken language in teaching manufacturing and engineering processes is an example of this. Theories of art and music are another. "Play a low G" (in the context of some understood instrument) serves as pseudo-code whose actual "instructions" would ultimately be carried out physiologically. You don't need to go over and physically move a person's body into position to conduct an orchestra!
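A sketch of pseudo-code translated "according to the organism's ability": instruction names are mapped to whatever actions the performer actually commands. The vocabulary, program, and note names here are invented for illustration.

```python
# A "program" in pseudo-code: instructions are names, and the performer
# supplies the translation into actions it knows how to carry out.

def play(note):
    return f"sounding {note}"

def rest(beats):
    return f"silent for {beats} beats"

# the performer's "compiler": names -> actions it can actually enact
vocabulary = {"play": play, "rest": rest}

program = [("play", "low G"), ("rest", 2), ("play", "low G")]

output = [vocabulary[name](arg) for name, arg in program]
print(output)
```

An instruction missing from the vocabulary raises a KeyError, loosely analogous to the compiling error mentioned above: the program is well formed as text, but this performer cannot translate it.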
I want to emphasize again that I'm not claiming anything about the totality of an organism's processes. It is possible to think of many phenomena of the mind this way, and it seems to me powerful but in no way "ultimate."
Thus far, the narrative of a living computation is this:
Symbols or simulated objects are prepared in a mental context, or in an actual or simulated environment. One needs only some momentary evolution rule to "let the system fall" (flow) where it may. If one is trying to make an important decision, one likely has some idea of what acceptable or unacceptable results look like. Settling into one of these categories would have a clear implication for the decision.
The evolution may dictate that there is no "halting," i.e., no decision is reached. Or the device may halt, but its answer may be indistinct or ambiguous. A key feature of living computation is that an organism decides when to cease a computation.
Whatever the result, it is witnessed as living information. The quality of this info can be interpreted however is useful. For example, to work with a classical Turing machine, one would naturally translate the qualities and symbols apparent in the mind into bodily actions to influence the machine, and likewise the machine's behavior to influence further mental computations.
Thinking of an organism as a computational device ---
In the literature of physics, devices appear as abstract objects which "do things" at a theoretical level. One need not specify how to instantiate them in order to employ them in diagrams which indicate function. The theory of electronic circuits is a great example of this. It is also used in daily life. For writers, a pen is something that lays ink on a surface. Beyond that, the specifics of its operation need not concern you (unless you want to repair it or build a new one!)
Let's just say a computational device is any physical system that performs computations. Then if an organism can, by itself, set up any automaton, it may operate on that automaton by a set of rules to perform computations, and so is itself naturally a computational device.
You and I have many routines that we use to help make decisions involving various "inputs." Is the way blocked? But it's not clear if these are mathematical functions like those computable by Turing machines. We call a machine a "computer" because we suppose it correctly computes known values. But usually we're trying to find out something we don't know.
In saying this, I've left it as a matter of conjecture whether an organism's mind can be described solely in terms of computations (defined as they are here). Whether or not that's true, thinking organisms can employ computations in their lives, which is obviously the case for humans.
This is why I'm not sure it's appropriate to call an organism a "computer" or "machine" of any sort. It can compute functions, but it's not obvious whether everything it does can be considered a computation. If a computation "happens", but it didn't involve anyone "reading" or "writing," is anything performing a computation? I think it's a bit like asking about falling trees in deserted forests. It'd seem awareness makes some difference; natural processes become signs in the light of consciousness, and only in this light.
However, just about any well understood biomechanical aspect of an organism could in principle be used to encode, communicate, or compute something. I refer you to the literature on DNA computing. So it's also not insane to think perhaps organisms can be considered holistically as computers. If that's so, what is it that they compute?
Biological "state" and information must be explored in far greater detail. If we can consider a topology on all possible (and known/remembered) states, then organismic computations are almost like topological automata. Maybe this suggests a useful framework. It allows us to talk about how actions interact with the intrinsic structure of the "state space." My first idea for a topology is to define a closed set surrounding some (s as the of the region of states which are accessible from (s given some resource limitation. Considering all possible self-imposed resource limitations up until the limit of the actual pool, we generate the topology of states. If we allow time to be a resource as in traditional computer science, we get a time topology. Time is a bit odd, though. As in quantum mechanics, we shouldn't expect it to behave exactly like energy. Much more later.
It is fascinating to note that we can count using physical representers (such as stones), or entirely within the mind using symbolic means, or indeed with electronic systems like the one you have before you. Thus many computations seem to (spatially) exceed the organisms that use them. Cybernetic aspects of computation, as well as fundamental aspects of cooperative computation will be explored in my next essay.