Abstract or Die:
Life, Artificial Life and (v)organisms
Ron Cottam, Willy Ranson & Roger Vounckx
Abstract
What is “intelligence”?
David Fogel has suggested that it is “the ability of a system to adapt its
behaviour to meet its goals in a range of environments”. We agree with him that
this is the best currently available definition.
The difficulty with this and other “top-down” natural-language definitions,
however, appears when we try to identify the essentials which will enable us
to deconstruct it in the time-honored “reductive” manner and then build an
intelligent system “bottom-up”. What is “ability”? What is “a system”? What is
“behaviour”? What are “goals”? What are “environments”? Unfortunately, the
definition does not readily lend itself to reductive linguistic processing, as
the elements of the complete expression are inter-dependent: we cannot establish
definitions of the words ability, system, behaviour, goals and environments in
isolation and then extract the complete expression’s meaning by simply combining
them. Bruce Edmonds has presented a related argument as a tentative definition
of “complexity”, namely that it is “that property of a language expression which
makes it difficult to formulate its overall behavior even when given almost
complete information about its atomic components and their inter-relations”.
The situation is no better if we resort to formal language as our descriptive
mode: in fact, it is a good deal worse! Even a humble Boolean AND expression
suffers from this kind of irreversibility: we can derive the single output from
the multiple inputs, but we cannot recover the complete inputs from the output.
Science itself is similarly flawed in its relationship to nature, in its use of
rationalities which presuppose elemental interchangeability (e.g. the
presupposition that if A = B + C, then B + C = A).
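As a minimal illustration (our own Python sketch, not part of the original
argument), the truth table of AND makes this many-to-one collapse explicit: an
output of 0 has three distinct input pairs as its preimage, so the inputs
cannot be reconstructed from the output alone.

    from itertools import product

    # Group the four possible input pairs of Boolean AND by the output they produce.
    preimages = {0: [], 1: []}
    for a, b in product((0, 1), repeat=2):
        preimages[a & b].append((a, b))

    print(preimages[1])  # [(1, 1)]                 -> only one pair yields 1
    print(preimages[0])  # [(0, 0), (0, 1), (1, 0)] -> three pairs collapse onto 0

The mapping from inputs to output is well defined, but its inverse is not: the
expression discards exactly the information that would be needed to run it
backwards.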
Nature appears to use a different approach, in which relativity provides for
the possibility of contradiction between representations at different
localities, but even so global coherence is maintained. Local representations
which are at odds with the global “picture” are destroyed in favor of their
local-globally coherent companions: witness the quantum-mechanical collapse of
multiply-superimposed hypothetical representations into one “real” conclusion.
We would do well to take account of the difference between our abstract
formulations, which impose local-global separations, and the fundamentally
different formulations of nature. The present paper examines this difference as
part of a route towards building intelligent systems. We will refer to abstract
logic and natural logic to differentiate between our usual (artificial)
approaches and those taken by nature. The main target of the paper’s
considerations is the meaning of the word “goals” which appears in David Fogel’s
definition.
Our starting point is the observation that computers as we know them do not have
goals! Their formal style of logic precludes any integration of their data, and
even the individual bits which make up the representation of a single number are
formally separate and devoid of meaning in the global context. Computers as we
know them are not integrated systems on their own, although they usually appear
to be so because we inadvertently include ourselves in their operation! The
first criterion for a computer is that it must be capable of doing nothing:
otherwise, how would we know it is doing what we want it to do? Notice that this
removes all autonomy from a computer. Situations where a computer appears
autonomous are simply those in which the (formal) complication of its
operation is too great for us to comprehend in detail and our (formal)
simplified description of its operation is incomplete.
Let us look instead at biologically-derived intelligence as a prototype.
Biological information processing is integrated: note the singularity of our
individual consciousness. However, now we apparently find a contradiction. Our
bodies clearly operate under the constraints of natural logic, but our minds do
not appear to do so. Survival demands that we relate to our surroundings through
simplified representations, as a way of reducing information processing time,
and we consciously do so using abstract logic. But how does this come about?
The situation becomes a little clearer if we look at the development of both
biological organisms and computers from the bottom up.
Newly-born animals have instincts which enable them to survive and learn. These
are built up from conception to birth as a pre-structuring of neural
connections. A high degree of plasticity remains, however, enabling the animal
not only to build on these instincts, but to replace them in many cases with
environmentally-derived variants. Computers also have “instincts”. Their
pre-programming is at an abstract level when they leave the factory, but
initially it lies in the physics of device operation – in natural logic. The
precursor of future “goal” implementation in both cases lies in natural logic.
If both organism and computer start off from natural logic and develop towards
the use of abstract logic, where is the difference between them?
The developmental progression from natural to abstract logic in an organism is
continuous. Although there are strong environmental and external directive
effects, the development itself is wholly internal: an organism “does it
itself”. Natural and abstract logics are integrated within the organism, and
ultimately the abstract can only be divorced from the natural by internal
decision (quasi-autonomously or not). Within a computer there is no integration
of natural and abstract logics: their segregation is the first rule of computer
design, to take decision-making out of the computer’s “hands” and keep it for
ourselves.
So, if we are to build intelligent goal-driven systems, how should we do it? The
current approach of taking a system which by its very nature includes us in its
operation seems rather strange. Artificial life does not exist: it is nothing
other than a functional simulation of life, projected into an
abstractly-operating embodiment by us, the godlike designers. We must learn to
create virtual organisms, whose goals are generated internally through the
interplay of natural logic, abstract logic and environmental influence. But do
we really want to create autonomous (v)organisms, or would we prefer that their
goals remain, in any case, related to our own desires? Nature has developed the
quasi-integrated role of parent to its child. Maybe this is the position we
should seek if we wish to expand our capabilities but still retain control of
them.