Lifelike Robotic Collaboration Requires Lifelike Information Integration
Ron Cottam, Willy Ranson & Roger Vounckx
Abstract
As we were all taught in school, the mammalian eye works
to create an inverted image of the viewed scene at the retina (whose orientation
is then rectified by the brain). Not so. There is no integrative capability at the
retina to perform this function. Any “image” is generated much later, in the
various layers and centers of the brain: it only “exists” within the (abstract)
unification of high-level consciousness, and never in any “real” sense
describable by science. If you are viewing this text via a computer screen or
through the printed word, the same constraint holds: it does not exist at all as
a unified entity outside your brain or imagination, merely as a collection of
informational elements devoid of any implicit organization, which was
transmitted through the Internet by a means which has been formally
(scientifically) structured through the application of our imagination to
achieve our aim of reproducing patterns across space and time. The same argument
holds for the entirety of our environment: it is all beyond representation by
(current) science.

Not only does this argument apply to “objects”, it applies
equally well to any and every subject of discussion. Most particularly, in the
current context of interest, we should not expect to find that a robot is
capable of responding as a “black box” to external stimulus on the basis of an
internally integrated “motive”, except where that “motive” is completely
relatable to its formally unified degenerate representation – namely the binary
“it exists” or “it doesn’t”! Such a quasi-hierarchical relationship (along with
any “algorithmic” complexity it exhibits) is both nominally and functionally
trivial when compared to the styles of real complexly-hierarchical operation
which characterize living organisms. We should consequently beware of
attributing anthropomorphic integrative unification to the internal workings of
a “black box” robot unless it is entirely predictable (a character corresponding
exactly to the quasi-hierarchical condition referred to above), in which case
any resemblance of its actions to those of a human is far from likely, to say
the least!
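The contrast can be made concrete. The following deliberately trivial Python sketch (every name in it is our own illustrative invention, not a proposed architecture) exhibits the only kind of “black box” robot which the argument above licenses: its sole internally integrated “motive” is the binary “it exists / it doesn’t”, so its every response reduces to a fully predictable lookup.

    # A deliberately trivial "black box" robot: its sole integrated
    # "motive" is a binary flag, and every response is a predictable
    # function of (stimulus, motive). Nothing here resembles the
    # complexly-hierarchical operation of a living organism.
    RESPONSES = {
        ("obstacle", True): "stop",
        ("obstacle", False): "ignore",
        ("goal", True): "approach",
        ("goal", False): "ignore",
    }

    class BlackBoxRobot:
        def __init__(self):
            self.motive = True  # the degenerate "it exists / it doesn't" motive

        def respond(self, stimulus: str) -> str:
            # Entirely predictable: the quasi-hierarchical case in the text.
            return RESPONSES.get((stimulus, self.motive), "ignore")

    robot = BlackBoxRobot()
    assert robot.respond("obstacle") == "stop"  # always, on every run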
So, can we describe and develop robots “in our own image” by the application of scientific techniques, or not? Or does the problem which must be addressed reside elsewhere? Descriptions of the natural world and the placing of
robots within it which derive from Evolutionary Natural Semiotics (ENS) by way
of signs are untouched by this dilemma. In the context of ENS, any formalized
representation is derived pragmatically (but less-than-algorithmically) from its
own scale-local grounding, and the various scale-localizations are coupled
through and within the context of a global-to-and-from-local correlation which
mediates between the scale-local groundings of a global grounding which it also
creates. “Reality” (in a scientific reductionist sense) then refers to nothing
more than the lowest level of description which we can be bothered to deal with,
whether that be the atomic level, super-strings, membranes, … The descriptions
which we habitually employ for “systems” which are internally structured in a
network-like manner are suitable if, again, the network structure is amenable to
complete (formal) integration reductio ad absurdum, but for a “system” which
exhibits “useful” complexity, they are worthlessly simple or simplified. Within
ENS such representations (where we view the “system” as a whole and
simultaneously its network-like internal structure) have the character of
quasi-external representations, whose (cautious) applicability depends primarily
on their degree of representational equilibrium.
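As a structural caricature only, and with the caveat that the classes, weights and update rule below are our own hypothetical choices carrying no ENS authority, the coupling may be pictured as scale-local groundings from which a global grounding is repeatedly built and then fed back to every scale:

    # Caricature of the coupling: each scale keeps a scale-local
    # grounding; a global grounding is built from the locals and then
    # propagated back, so that it both mediates between the scales and
    # is created by them.
    from statistics import mean

    class ScaleLocalModel:
        def __init__(self, scale: str, grounding: float):
            self.scale = scale
            self.grounding = grounding  # scale-local grounding

        def absorb_global(self, global_grounding: float, weight: float = 0.5):
            # the local representation is re-derived against the global context
            self.grounding = (1 - weight) * self.grounding + weight * global_grounding

    scales = [ScaleLocalModel("atomic", 0.2),
              ScaleLocalModel("cellular", 0.7),
              ScaleLocalModel("organism", 0.4)]

    for _ in range(10):  # iterate the two-way correlation
        global_grounding = mean(m.grounding for m in scales)  # local-to-global
        for m in scales:
            m.absorb_global(global_grounding)                 # global-to-local

The caricature captures nothing of the less-than-algorithmic derivation itself; its only point is the two-way, mutually creating flow between the local and the global.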
Much effort is currently being expended in developing “internalist” models of operational situations, rather
than the “externalist” ones said to be characteristic of scientific endeavor. It
is difficult to imagine, however, how a uniquely internalist representation of a
“conscious” or aware state could be useful: its existence would imply not
only the usually-quoted criterion of lack of knowledge of the causes of received
stimuli, but also the complete absence of any attempt to investigate or imagine
the origins of those stimuli. To do so requires the construction of an
(imagined) externalist model of the situation: to not do so seems to imply
lifelessness! Consequently, it makes more sense to describe living interactions
as a negotiation between internalist and externalist representation, through a
process which mirrors the internal-external negotiations which lie at the roots
of human consciousness.
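Schematically, and with every function below our own invention rather than a claim about mechanism, such a negotiation may be pictured as a loop in which an internalist state is updated from raw stimuli, an externalist model of their imagined origins is constructed, and each is then revised against the other:

    # Schematic negotiation between an internalist representation (how
    # stimuli register from inside) and an externalist one (an imagined
    # model of where those stimuli come from). Neither is final; each
    # round revises the one against the other.
    def update_internal(internal: float, stimulus: float) -> float:
        # the stimulus registers internally, with no knowledge of its cause
        return 0.8 * internal + 0.2 * stimulus

    def imagine_external(internal: float) -> float:
        # hypothesize an external cause strong enough to explain the state
        return 1.1 * internal

    def negotiate(internal: float, imagined_cause: float) -> float:
        # the imagined origin feeds back into how the stimulus is registered
        return (internal + imagined_cause) / 2

    internal = 0.0
    for stimulus in (1.0, 0.9, 1.1, 0.0, 0.0):
        internal = update_internal(internal, stimulus)
        internal = negotiate(internal, imagine_external(internal))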
Human consciousness is “singular”, in that it only exists as an individual unified “entity”. It is within the “sufficient
interpretation” and correlation of a multiplicity of informational details that
this text becomes (nothing more… just “becomes” itself) within our
consciousness. Its existence emerges from the process of integrative
interpretation (or interpretive integration, if you prefer). This process of the
emergence of the informal from the formal (simplistically describable as
emergence of the analog from the digital) is the very nature of living entities.
It appears most obviously, but not uniquely, in the generation of analog protein
folding from the digital code of DNA. Science does not merely omit this
emergence from its confines; it expels it, as being too difficult to deal with.
A lifelike nature is by definition external to a scientific development!
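The digital half of that example can be written down exactly, which is precisely what makes the contrast instructive. In the Python sketch below, codon-to-amino-acid translation is a formal lookup (the table is deliberately truncated to the codons actually used); no comparable lookup exists for the folding of the resulting chain into an analog three-dimensional protein:

    # The formal, digital step: codon -> amino acid is a rule-based lookup.
    # (Table deliberately truncated to the codons used below.)
    CODON_TABLE = {
        "ATG": "Met", "TGG": "Trp", "TTT": "Phe",
        "AAA": "Lys", "GGC": "Gly", "TAA": "STOP",
    }

    def translate(dna: str) -> list[str]:
        protein = []
        for i in range(0, len(dna) - 2, 3):
            residue = CODON_TABLE[dna[i:i + 3]]
            if residue == "STOP":
                break
            protein.append(residue)
        return protein

    print(translate("ATGTTTAAAGGCTGGTAA"))  # ['Met', 'Phe', 'Lys', 'Gly', 'Trp']
    # The analog step, how this chain folds, has no such lookup: it
    # emerges from physical interaction, not from the code itself.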
So, how are we to develop “lifelike” robots? Ultimately, not uniquely by digital
computation, although this can provide effective interfacing between a central
information processor and the outside world. This is itself the nature of our
own brains: a central, genuinely parallel processing style, whose operation is most
closely related to the superposition-and-selection mechanisms of quantum
mechanical interaction, and integration and differentiation of the results of
this processing to serve localized output and input nodes. Currently this style
of integration and differentiation is far beyond our constructional
capabilities, and while a prime target must be to investigate and develop
lifelike information integration, we can nevertheless achieve useful preliminary
results if we couple our targets to the means which are available, so long as we
do not fool ourselves into thinking that this will be sufficient.
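The superposition-and-selection style referred to above can nevertheless be gestured at, if nothing more. In the toy sketch below (entirely our own framing, and emphatically not a quantum computation), many candidate interpretations are held and weighted in parallel before one is selected and routed to a single localized output node:

    # Toy "superposition-and-selection": many candidate interpretations
    # are held and weighted at once, then one is selected and routed to
    # a single localized output. A gesture at the style only.
    import random

    def weight(candidate: str, stimulus: str) -> float:
        # parallel-weighting stand-in: character overlap with the stimulus
        return len(set(candidate) & set(stimulus)) / len(set(candidate))

    def select(candidates: list[str], stimulus: str) -> str:
        weights = [weight(c, stimulus) for c in candidates]
        # "selection": sample in proportion to weight, collapsing the set
        return random.choices(candidates, weights=weights, k=1)[0]

    candidates = ["approach", "avoid", "inspect", "ignore"]
    print(select(candidates, "approach the object"))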