Evolutionary Partial Comprehension

Ron Cottam, Nils Langloh, Willy Ranson & Roger Vounckx

Abstract

            Survival requires not only that we are capable of reacting to external stimuli, but that we can do so within a realistic time-scale; it is of no advantage to develop perceptual awareness of the beautiful paint-job of an approaching vehicle if, during the development process, it squashes us! By their very data-representative nature, models are always simplified representations of received information, and as such are amenable to more rapid processing and hazard-avoidance. The recognition of danger depends not only on an evaluation of the current situation, but also on the prediction of future changes; a vehicle approaching us along a narrow winding country lane is not continuously pointed in our direction, but this does not mean it will miss us if we stand in the middle of the road.

            An externally conscious or receptive entity is presented with the continuous input of massive quantities of empirical data. In a causal environment exhibiting communication limitation it is impossible to react to this incoming stream within a reasonable time-scale if that necessitates processing all of the available data directly. More usually we would expect a rapid reaction to rely on calling up a previously generated or received rule, derived from prior processing of earlier collected data, which a superficial evaluation of the existing environment deems currently suitable. This may be more or less successful depending on the similarity of the current context to previously encountered ones, and a degree of "fail-safe" character may be built in, but success depends not only on correct identification of a required rule, but principally on the availability of the relevant rule itself. A primary function of computation is therefore the generation of required but as-yet unavailable simplified descriptions of large empirical datasets; but even if the reactional processing has simple relevant rules available, the first criterion is whether to react to the current context at all.
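The reaction scheme described above can be sketched in code. This is purely our own illustrative construction, not an implementation from the paper: the agent first decides whether to react at all, then attempts to retrieve a previously compiled rule; when no relevant rule is available it falls back on a generic fail-safe response and defers rule generation, which is too slow for the reaction path.

```python
# Illustrative sketch only: all names and structure here are our own
# assumptions, used to make the two-stage reaction scheme concrete.

def should_react(context):
    """Superficial evaluation: the first criterion is whether to react at all."""
    return context.get("threat_level", 0.0) > 0.5

class ReactiveAgent:
    def __init__(self):
        # Rules are simplified descriptions distilled from earlier data.
        self.rules = {}    # context signature -> reaction
        self.pending = []  # contexts queued for slower background rule-building

    def react(self, context):
        if not should_react(context):
            return "ignore"
        rule = self.rules.get(context.get("kind"))
        if rule is not None:
            return rule    # fast path: the relevant rule was already available
        # No relevant rule: use a generic fail-safe reaction, and queue the
        # context so that a rule can be generated outside the reaction path.
        self.pending.append(context)
        return "generic-avoidance"

agent = ReactiveAgent()
agent.rules["approaching-vehicle"] = "step-aside"
print(agent.react({"kind": "approaching-vehicle", "threat_level": 0.9}))  # step-aside
print(agent.react({"kind": "falling-branch", "threat_level": 0.8}))       # generic-avoidance
print(agent.react({"kind": "paint-job", "threat_level": 0.1}))            # ignore
```

The point of the sketch is that success hinges on the rule cache being populated in advance: the fast path never generates rules, it only retrieves them.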

            At the other end of the time-scale from that involved in the development of rapid hazard-reactions is the requirement for non-immediate but accurate representation of detailed aspects of the environment, which conversely necessitates dealing with as much of the incoming data-stream as is practicable. This can be likened to computer background processing, which can take place during periods of reduced real-time reactive activity so as not to compromise the accessibility of rapid-response rules. The major problem associated with data-destructive procedures, those which, like rule-generation, discard part of the original data, is that of re-evaluating relationships which have already been developed between different parts of a dataset when more data arrives or the context changes even marginally. The primary criterion here is for processing which is not only reversible by nature, corresponding to a data-conservative requirement, but also completely distributed over the global dataset, which to allow for all possible interactions implies independence of the time domain, or instantaneous inter-data communication.

            These constraints lead us towards a revised view of the nature and function of "neuronic" computation systems, particularly when integrated with the characteristics of transmission lines encountered in the development of ultra-compact microelectronic circuits and the results of research into the operation of synaptic messengers. The customary implementation of neural nets exhibits characteristics which differ from this reasoning in a number of significant ways, while the evaluation of mammalian distributed feature-recognition supports the application of a hierarchical approach based on the combination of data-destructive and data-conservative processes.

______________________________________________