
Introduction

The initial intention is to formulate premises for the realisation of massively scaled processing of empirical data by relating pure data to its model representations. What is proposed is a set of boundary conditions within which such an empirical machine could plausibly be constructed.

The most useful area of controlled electronic activity lies in the intermediate region between unrestricted transport in metals and completely restricted transport in insulators. Similarly, for optical devices the most interesting region lies between the two extremes of perfect transmission and perfect reflection. One of the primary techniques used in this work was to look for areas where a polarised idea is applied intuitively, for example the apparently obvious choice of Analog OR Digital, and to try to find other possibilities which lie between the two extremes. We require structures in which formalised logic is subordinated to more data-based ideas, and this leads in the direction of intuitive computing. A fundamental question is whether useful processing can be carried out without the imposition of externally defined models on empirical data. Avoiding such imposition implies a requirement for the automatic generation of hypothetical models or rule structures, and at that point we are talking about some kind of living computer.

The characteristics of measuring instruments are defined statistically by their own measurements in comparison with those of other instruments. Descriptive models should be evaluated in a similar manner. The classical answer to the question of what makes a good book is a set of rules which, if followed, lead to good literature. Unfortunately, these rules only describe books which already exist, and they will change if other universally acclaimed good books appear whose styles violate or extend the set of rules. The usual definition of living things is similar: a set of rules is derived which describes a subset of all things, and these rules are tailored to fit as closely as possible the set which we already know to be alive! These are POST-imposed rules which do not necessarily apply to future circumstances.

A normal sequential computer program is written in advance of being used, but it is only accepted for use after testing. The testing takes the form of running the program within a set of known boundary conditions to see whether it gives the correct results. This is analogous to checking whether the set of rules for good books holds for an already available set of books. Such a program is therefore itself a set of post-imposed rules, and has no predefined validity in as-yet unmet situations. In the sense that computing system rule-bases are set up by reference to a set of predetermined conditions, the same argument holds here too, and if we include the interchangeability of software and hardware structures, then the computer itself is subject to just the same restriction. The only way to avoid these problems is presumably to work in an environment where the only structures which exist are a function SOLELY of the data itself. This is, needless to say, not an easy task.

It is difficult to drastically improve the performance of a complex interlinked system by removing individual parts and replacing them with new ones, since the remaining, externally imposed interrelations will to a large degree define the function of each new component (Figure 1).

  
Figure 1: Externally imposed effects on an element of a complex interlinked system

This argument applies not only to hardware, but also to the ideas upon which a computer is founded. A possible solution to this dilemma is to remove simultaneously as many preconceptions about computer structure as possible. However, it then becomes extremely difficult to navigate towards a more effective structure by logical means. Instead of following logical paths to arrive at a logically processing computer, the analogous choice is to follow intuitive routes to arrive at intuitive structures. A consequence of adopting this approach is that it is not initially possible to justify the resulting conclusions on the basis of logical derivation from previously demonstrated bases.

