The evolution of information in man and machine

The PhD thesis ‘The informatical worldview, an inquiry into the methodology of computer science’ (Delft University of Technology, 1996) deals with the coherence between human and machine information processing. Below is a summary of the theory. A popularized Dutch version was published by Kluwer in 1998 under the title ‘De programmeur en de kangoeroedans’ (‘The programmer and the kangaroo dance’), with a chapter distributed separately as a gift under the title ‘De geprogrammeerde graal’ (‘The programmed grail’).

Three books

1 Introduction

These are the words of the Bhagavadgita, one of India’s most sacred books, an integral part of the majestic epic Mahabharata: ‘When one sees Eternity in things that pass away and Infinity in finite things, then one has pure knowledge. But if one merely sees the diversity of things, with their divisions and limitations, then one has impure knowledge. And if one selfishly sees a thing as if it were everything, independent of the One and the many, then one is in the darkness of ignorance.’

Thus an unknown scholar, ca 500 BC, summed up the way in which he thought entities of information should be connected to form an ideal constellation. Obviously, he prefers the oriental form of uniting paradoxes, dismissing the western view (at that point in time only just gaining a foothold in Persia) as ‘impure’ and the primitive one as ‘ignorance’. Apparently it is very attractive and perhaps even natural to distinguish three methods of dealing with information. The author of the Bhagavadgita liked it, western anthropologists liked it, and, indeed, even computer scientists liked it. The odd point is that this division has never been lifted to the abstract level of information, where only the formal relation between entities of information is taken into account, regardless of their specific content or meaning.

That this subject would be taken up by computer scientists is no coincidence, for only in the computer era has information itself become a subject of study. Until then, information was mainly a vehicle for describing studies in other fields, notwithstanding some very fundamental philosophical discussions relating to the nature of, for instance, language or mathematics. The latter, however, concentrated on specific appearances of information, not on the subject itself.

The idea of a unifying theory of information sprang up right after the Second World War, when the aftermath of Auschwitz and Hiroshima led to a worldwide urge to unify, integrate and, especially, ‘never let it happen again’. Under these circumstances the American mathematician Norbert Wiener launched his ‘cybernetics’ programme, in which a group of computer scientists, mathematicians and psychologists (and even the prominent anthropologist Margaret Mead) set out to understand human information exchange in its broadest sense. The actual research, however, centered on quantitative aspects, eventually leading the field to narrow down to control theory. Ever since, most theories of information have concentrated on the quantitative path, at times leading to interesting results, but just as often failing to address the broad range of subjects they promise. It is not unfair to say that this line has so far not delivered very satisfying, let alone broadly applicable, results.

This paper proposes to follow the cultural path, that is, to follow the memes rather than the genes. Beginning in prehistoric times, human attitudes towards information will be traced, showing how all attitudes have accumulated in modern man in the triplet of language, mathematics and physics. From this starting point the influence of the triplet on computer science is discussed, demonstrating how philosophical discussions in computer science relate to those in other branches of science.

The paper is organized as follows. In sections 2 through 4 the three notions of instructionism, inventionism and adaptivism are introduced, starting with their cultural roots and leading to their contemporary appearance in language, mathematics and physics. After a short interlude (5), sections 6, 7 and 8 investigate how the triplet has found its way into computer science. Section 9, finally, draws some conclusions. Given the breadth of the subjects covered, all of them are necessarily dealt with in the utmost brevity. More extensive coverage is provided elsewhere.

2 Human instructionism

The paleolithic caves of Lascaux are covered with paintings of animals and hunters. It is assumed that to ancient man these paintings were no mere paintings, that is, representations of objects, but that these paintings were actually identified with the subjects they represent. A drawn rhinoceros could actually be hunted and killed. This idea of identification is supported by practices in contemporary primitive societies, where people can actually be terrified at the sight of a mask representing an evil spirit, even though they know all too well who is behind the mask. Shamans may also be very much impressed by the feats of other shamans, even if they know the trick behind, for instance, the magic disappearance of an object.

Clearly, this is mainly a psychological feature, the ability to become enchanted by one’s own imagination, regardless of logic and reality, such as modern western humans may at most experience at the cinema, when strong stimuli awaken old instincts. The Dutch historian Johan Huizinga has suggested that, for an understanding of this mental quality, it might be wrong to distinguish the believed from the feigned. ‘By continuing to regard the whole atmosphere of so-called primitive culture as a playful atmosphere’, he argues, ‘the possibility is opened of a much more direct and general understanding of its nature than by a shrewd psychological or sociological analysis.’

Huizinga argues that the state of mind of primitive man must be regarded as a stronger version of what western adults still cherish as ‘play’. In other words, Huizinga suggests that primitive man knows very well that the things he is experiencing in some ritual are unreal, but that he is able to enter a state of compulsive identification that makes him forget reality. And the key to this state of pretension is the mental capability to forget other relevant information, to put aside circumstances, turning ‘play’ into something far more serious.

Little children, too, have this capacity. In a now famous experiment, repeated and varied many times, the pioneering psychologist Jean Piaget had two identical glasses filled with an equal number of beads. Then the contents of one glass were poured into a wider glass, so the level dropped. A four-year-old, asked which of the two now contained more beads, answered that it was the wide glass. The boy, who had filled the glasses himself by putting in turn one bead in one glass and another in the other, admitted that he knew the amounts to be equal before and that nothing was spilt. He even acknowledged that if the beads were poured back into their original glass, there would be the same levels again. But now, at this very moment, the wide glass appeared to have more content. Hence it actually had more content.

This is the core of instructionism. All pieces of information are taken as they are, completely unattached to other pieces of information. Everything is true in its own right. Coherence is simply no issue, nor is consistency, let alone mathematical logic. A voodoo priest may kill a person by sticking needles in a doll and yet show no surprise if the subject walks in alive and well the next minute. Those are two completely independent facts – which is difficult to see for people who have been steeped in western logic all their lives.

Western society owes an important inheritance to primitive times. Language not only originates in this period, it also bears all the characteristics of an instructionist tool for the representation of information. In the biblical book of Genesis it says: ‘And God said, Let there be light: and there was light.’ This gives the word a mythical power comparable to the drawings in the Lascaux caves, extended to humans a few paragraphs later in the book, as Adam gains control over the animals by naming them. Once again, it is not relevant whether it is actually possible to create or control by speech; the point is that people are able to imagine it could be, and may temporarily set aside the alternatives. In the course of time (western) man’s belief in the power of instructionism has narrowed down, but it would be a mistake to think that this logic-defying way of reasoning plays no role in contemporary science. On the contrary, it still is one of its driving forces. This paper, for instance, holds that we can gain control over a complex philosophical issue by giving names to abstract representations of informational phenomena. The analysis may be shrewd, respecting the laws of logic, but as with most research, especially fundamental research, there is also an ancient echo of instructionism at work: I say, let there be light: and there is light.

Not only in its use but in its very structure, language bears the characteristics of instructionism. All human language, or, if one wishes, Noam Chomsky’s universal grammar, is about defining correct pieces of information through their form, not through their meaning. Of course, languages have semantics alongside syntax, but these semantics do not exclude certain utterances because of their meaning. Put simply, the sentence ‘pigs can fly’ is a fine sentence, endorsed by syntax and explained by semantics, even though it is nonsense. Language, as a tool for the representation of information, is not equipped to judge this. One might call this a flaw, but it is also its strength, because it assures substantial flexibility and expressiveness.
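The point can be made concrete in a few lines of code. Below is a minimal sketch, with an invented three-word grammar and invented word lists, of a recognizer that accepts exactly the form NOUN AUX VERB: ‘pigs can fly’ passes the syntactic test as readily as any sensible sentence, because meaning never enters the check.

```python
# Toy grammar sketch (all word lists are invented for illustration):
# a sentence is NOUN AUX VERB, and only form is checked, never sense.
NOUNS, AUX, VERBS = {"pigs", "birds"}, {"can"}, {"fly", "swim"}

def is_sentence(words):
    """Accept exactly the form NOUN AUX VERB; meaning is ignored."""
    return (len(words) == 3
            and words[0] in NOUNS
            and words[1] in AUX
            and words[2] in VERBS)

print(is_sentence("pigs can fly".split()))   # True: well-formed nonsense
print(is_sentence("can pigs fly".split()))   # False: wrong word order
```

The recognizer happily endorses nonsense, which is precisely the flexibility (and the flaw) the paragraph above describes.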

So, the price to pay is ambiguity, the absence of certainty whether an utterance makes sense. Ludwig Wittgenstein was one of the first to put language, as a tool, to the test of mathematical logic. His central question was whether natural language could be extended in such a way that the flaws mentioned could be banished. In his own words: ‘What can be said at all can be said clearly; and whereof one cannot speak, thereof one must be silent.’ In its sternest form Wittgenstein’s philosophy holds that nothing can be said unless one is sure it is an unambiguous representation of facts. Consequently, it is unsure whether anything can be said at all. In any case, Wittgenstein holds, there certainly is a class of unspeakable things, which he calls the mystical – the point being that language can never be that perfect, unambiguous carrier of information. This could be called Wittgenstein’s theorem.

A curious parallel between Wittgenstein and Huizinga is that both refer to the concept of ‘game’ to explain a certain state of mind, in which it is not the logic of circumstances that drives human behaviour, but the desire to shape those circumstances after one’s own will. It would go too far here to dive deeply into psychology, but it should be clear that instructionism, with all its inconsistency, is deeply embedded in the human mind.

3 Human inventionism

Obviously, the primitive world view is a chaotic one, focussed on single moments with hardly any regard for the long term. This has proved to be unsatisfactory. In oriental culture the stress is on structure, though not necessarily a logical structure in the modern sense of the word. This structure could be called ‘cosmic order’, the idea that all things in space and time are interrelated. If one understands one single part of the cosmos, one essentially understands it all, because everything is connected to everything. Gaining knowledge, that is, forming a coherent body of information, hence becomes a process of uncovering, building on existing knowledge in order to extend it. Inventionism is the idea that all instances of information are interrelated and that once the relation between a new instance and the existing ones is established, this relation is fixed and cannot be changed anymore without endangering the whole structure.

Once again, too much psychology is beyond the current scope, but one suggestion, by the Dutch psychologist Piet Vroon, is too compelling not to mention. According to Vroon the discovery of structure should be linked to the development of human self-consciousness: the sharper the image a human has of himself, the more he is able to view his environment as an integrated whole. Vroon points out, among other things, that the earliest Greek texts show confusion on this point. In Homer, bodies seem to be collections of parts without a central coordinating unit – a soul, if one wishes. Also, as the Greeks became more conscious of their personal identities, the oracle of Dodona, where Zeus spoke through holy oaks, disappeared. Apparently the Greeks lost their ability to hear voices in the wind and turned to the oracle of Delphi, where a drugged priestess would catch the words of the gods. The silence of the oaks thus marked the transition of the Greeks, then a primitive people, towards oriental thinking.

The unique thing about the Greeks is that they documented their transition relatively well. At the time they migrated to the Aegean, supposedly from southern Russia somewhere around 2000 BC, oriental thinking was already firmly established. The cosmic order was discovered in Mesopotamia approximately two millennia earlier, giving rise to the first great civilizations, the cradle of such innovations as agriculture, organized religion and writing. All these civilizations share an ideology that pivots around the cosmic order, known as ‘me’ in Sumer, ‘maat’ in Egypt, ‘dharma’ in India and ’tao’ in China. The destiny of man is to live in harmony with this cosmic order. This idea not only encouraged man to bring abundant and bloody sacrifices to the mother goddess and her lover/son, but also to develop astrology in order to penetrate the cosmic order. Astronomy was introduced as a tool for this astrology. It led the ancient Sumerians to believe that some numbers (2, 5, 7, 12, 60) are holier than others, yielding, among other things, a system for the administration of time that is still in use today.

The idea of numerology was taken up by one of the four philosophers who were to systematize oriental thinking. Pythagoras was to include music in his theory of harmony, whereas the other three, Buddha, Confucius and Lao-Tse, stressed different aspects of the cosmic order. Nevertheless, all of them built on inventionism. The eternal, immanent cosmic order implies that there is a fixed body of true information, making the system axiomatic in nature. This makes paradoxes, apparently true pieces of information that do not fit in the system, a veritable threat to the whole building. In Zen Buddhism, therefore, one of the main aims is to conquer paradoxes by reaching a state of mind in which the contradiction is no longer seen. The same is true for other oriental systems of thought, though often more subtly. If a piece of information does not fit into the system it will be discarded or forcibly fitted, rather than the system being revised (which does not mean that oriental systems of thought are never revised, but that revisions cannot themselves be part of the system).

Mathematics, unsurprisingly, originates in Sumer. It is a system for the representation of information that is built on inventionism. For many centuries Euclid’s ‘Elements’ was the example after which mathematics modeled itself. The book opens with definitions, five postulates and five common notions that cannot be explained, only understood intuitively. Everything else follows from these starting points. Anything that does not follow from them cannot be held true. A formula may not have been explicitly known before it is constructed, but implicitly it has always been encompassed in the system, and it is from this fact that it derives its authority. Reasoning always follows the path from the existing towards the new.

Mathematics, the commonplace reads, is a form of language. This may be true, but it is, in a sense, a very poor language, because it excludes many statements that are possible in natural language. Of course, this poverty is at the same time its strength. ‘Two and two makes five’ is not a valid statement in the language of mathematics, because, unlike natural language, mathematics demands that each single statement conforms to the system as a whole. This limits the possibilities but enhances the internal structure, just as inventionism stipulates.

In view of the above, one very significant episode in the history of mathematics is the discovery of non-Euclidean geometry (four times before it caught on) and the events that followed. The fear of paradoxes, innate in inventionism, sprang up vividly and led a whole group of mathematicians, notably Bertrand Russell and Alfred North Whitehead, to wish to redefine mathematics in such a way that the absence of paradoxes in the system could be guaranteed. The enthusiasm about the Russell–Whitehead programme even went so far that a group of philosophers, known as the logical positivists, thought it would be a good idea to extend the striving for completeness and consistency to natural language. This would, for instance, mean that sentences like ‘pigs can fly’ had to become impossible. Curiously enough this striving drew much inspiration from Wittgenstein, who had unequivocally rejected the idea in his statement that unspeakable things do exist.

It took a stronger proof to show mathematics that it, too, is not the ultimate flawless method for the representation of information. Kurt Gödel’s theorem is the mathematical equivalent of Wittgenstein’s theorem (more precisely, Gödel’s theorem is a special case of proposition 6.522 from Wittgenstein’s Tractatus). Gödel showed that no sufficiently powerful axiomatic system can be both complete and consistent. In every such mathematical system there are true statements the truth of which cannot be proved by the means available within the system. In mathematics, as well, there are unspeakable things. This was a saddening experience, of course, but it did not destroy the confidence of mathematicians, just as the discovery of irrational numbers deeply disturbed the Pythagoreans without shattering their faith altogether. They had to live with the paradox – and they did.

4 Human adaptivism

The existence of the cosmic order is not doubted in western thought. What is questioned is the degree to which a human being can understand the cosmic order. Buddha could obtain the ultimate knowledge sitting under the bodhi tree, but Adam and Eve were evicted from paradise and its tree of knowledge, whereas in Plato’s cave the tree can only cast a shadow on the wall. The knowledge of the west is impure, as the Bhagavadgita says, but impure by conviction, because the west holds that ultimate knowledge or enlightenment is an illusion, a goal that must be striven for but that may well be unattainable.

Historically, western thought has developed through two lines, one philosophical and one ethical. The latter emerged first, in the teaching of the Persian prophet Zarathustra (a.k.a. Zoroaster, somewhere between 1200 and 500 BC, if he was a single person at all and not the personification of a tradition). In Zarathustra’s view the world, at its creation, was corrupted by the evil spirit Angra Mainyu, the enemy of the god of truth, Ahura Mazda, who will destroy him at the end of time. In this ethical view, separation is a key word. God creates the world by separating light and darkness, land and sea. Heaven and hell are different places. In eastern mythology, on the other hand, the world is usually created as a whole and there is only one underworld. Also, time is cyclic in nature, while in the western view it is linear.

Writes mythologist Joseph Campbell: ‘In the Far East, as well as in India, whether in the mythic fields of Shinto, Taoism, and Confucianism, or in the Mahayana, the world was not to be reformed, but only known, revered, and its laws obeyed. Personal and social disorder stemmed from departure from those cosmic laws, and reform could be achieved only by a return to the unchanging root. In Zoroaster’s new mythic view, on the other hand, the world, as it was, was corrupt – not by nature but by accident – and to be reformed by human action. Wisdom, virtue, and truth lay, therefore, in engagement, not in disengagement. And the crucial line of decision between ultimate being and non-being was ethical.’

The ethical line, represented in Judaism, Christianity and Islam, was predominantly religious in nature until the emergence of Marxism. The philosophical line has been human-centered from the very beginning. Its origin is, of course, in Greece, after the primitive and oriental stages reflected in its myths. Plato introduced the cave and, in the Symposium, his own variation on the creation myth: humans initially were two-headed, four-armed and four-legged until Zeus split them, and ever since each human, guided by love, has been looking for his other half. The significance of this is evident. As in the ethical view, the world is corrupted and humans must take action to regain it, returning it to its ideal form. The difference is that in philosophy the driving force of the reform is not an intangible god but human experience itself. While Plato still stressed the illusory nature of human experience, his student Aristotle, the godfather of empirical science, pushed it forward as the best way to attain knowledge. Truth became a preoccupation of the senses.

This, then, is adaptivism: the idea that all pieces of information are interrelated, but that it is difficult to find this relation and that it is therefore possible to continuously rearrange all these pieces as deemed necessary in order to approach the ultimate knowledge. Hence, a dynamic process of review, constant reflection on knowledge gained in the past, becomes an integral part of the informational model.

From the time Greeks and Persians battled over (western) world dominance, there has been tension between the ethical and philosophical lines, just as there have been truces. The crucial divide in ethics is good/evil, while philosophy sticks to true/false – two pairs that are easily held congruent, or at least complementary, as Thomas Aquinas postulated in the period when modern science was emerging, gratefully using the stern logical methods of reasoning developed in mediaeval theology. However different the divides may be, the underlying adaptivist model of how to acquire and arrange information is the same.

It is not necessary here to trace physics, the queen of empirical science, from Galilei to the quantum era. Unlike in the case of language and mathematics, its relation to the cultural substrate is acknowledged widely enough. Physics is a process of investigating and reinvestigating, of separating and classifying, and in the meantime reshaping the existing body of knowledge following the true/false divide as reported by the senses and their artificial extensions. The actual tool is a combination of language and mathematics to which certain restrictions apply, the exact nature of which is of less relevance here. Of more interest are a few observations that link physics to other methods of information representation.

First of all there is the ‘eastern’ heritage, most obviously present in the way the quantum paradox is dealt with: a Zen-like approach, simply trying to mentally surpass the contradiction. However, this is only a superficial element. More profound is the widespread belief in the existence of the cosmic order – after all, the psychologies behind adaptivism and inventionism share a common goal, enlightenment; only the path towards it differs. Even the idea that penetration into a single aspect of the cosmic order is enough to experience enlightenment is present – witness, for instance, Stephen Hawking’s famous statement that if we find a mathematical formula describing the origin of the universe we will ‘know the mind of God’.

Secondly there is the issue of completeness. Unlike language and mathematics, physics has no general (in)completeness theorem. Of course, Heisenberg’s principle implies limitations, as does the chaotic aspect of systems theory. But neither poses fundamental limitations on the descriptive powers of physics. So far there are no unspeakable things in science, although this is partly due to the fact that possibly unspeakable things (like the existence of God) are gladly declared to be outside the realm of science.

Finally, very curious and just as meaningful is the intrusion of an ethical component into science, not only in its application but in its method as well. This is the voice of Karl Popper: ‘The wrong view of science betrays itself in the craving to be right. For it is not his possession of knowledge, of irrefutable truth, that makes the man of science, but his persistent and recklessly critical quest for truth.’ In his treatment of scientific method Popper not only highlights the adaptivist nature of science, but also tries to pin down its dynamics. Some methods are good, some methods are evil – not a distinction that is characteristic of the philosophical line. Popper’s opponent is, of course, Thomas Kuhn, who empirically observes that some methods are used and others are not, while at the same time issuing his own vision of science as an adaptivist process. The latter is clearly a more scientific view, in the sense that it observes and describes rather than judges. It is also a less satisfying view, in that it does not correspond to the ideal view many scientists hold of their own profession.

5 Interlude

Once again, the above is a (perhaps inexcusably) fast run through history, addressing but a few highlights to support the central argument. Much more could be added to the argument, on the battling-twin theme in Indo-European mythology, on Immanuel Kant’s classification of information, on the instructionist nature of number in infants or on numerous topics from cognitive psychology. For the moment, however, it should suffice.

What, hopefully, has been made plausible is the idea that there are three historically rooted notions through which humans arrange information. Also, it should be clear that these three do not exclude one another. Western humans do not rely on adaptivism only, but carry with them the heritage of earlier ideologies. They do not just use tools from primitive and oriental times, in the form of language and mathematics, they also employ the underlying ideas of instructionism and inventionism. This, inevitably, applies to scientists as well.

So, there is an amalgamated scheme of three abstract notions used by scientists to describe the subjects they deal with. The scheme is never explicit; scientists pick out the method they consider best under the circumstances. Some things are described best using natural language, others require mathematics or a combination of both. The question of what psychology drives the choice is a wholly different subject. What matters presently is that all notions are present, ready to be called upon.

This is where computers come in. Computers are an invention of man. Hence, little imagination is needed to suppose that the representation of information in computers must, in some way, be modelled after the informational methods of humans. Of course, there were many reflections on the nature of knowledge and its acquisition before the computer era, but it was the digital machine that forced scientists to make things explicit. The next three sections argue that scientists, as they were looking for ways to programme their still unsophisticated computers, reverted to their own, partly unconscious model of information, as represented in the abstract scheme of instructionism, inventionism and adaptivism.

A final caveat before the actual discussion: when flaws in computer programmes are discussed in the following, only those are meant that are provoked by the features of the programming tools. Most ‘bugs’ are the result of programmers overlooking part of the reality they are trying to put in code. This is an entirely different issue.

6 Artificial instructionism

Computers are machines that perform mathematics. When Lady Ada Lovelace, 150 years ago, wrote the first ‘programmes’ for the apparatus her friend Charles Babbage would never build, she called them ‘formulas’. Yet the chief programming tools are nowadays known as ‘languages’. Apparently, at some point the computer community decided their tools were more instructionist in nature than inventionist.

The theoretical model for the computer is the Universal Turing Machine (UTM), introduced in 1936 by Alan Turing as a model of a calculating human and as a tool for settling some questions still left open after Gödel’s theorem. The abstract machine consists of a box that reads and writes symbols on an endless tape. Internally the box has a finite number of states, and the number of possible symbols is finite too. The machine reads a symbol from the tape and, depending on its internal state, writes one back. Then the internal state changes and the tape is shifted one place forwards or backwards. If a certain predefined state is reached, the operation stops. Evidently, at any moment the behaviour of the machine is determined by its internal state and the symbol at the current position of the tape. Depending on its collection of internal states and the contents of the tape, the machine will perform a certain calculation. A Turing machine can even be programmed to simulate any other Turing machine; such a machine is universal, and the UTM is thus capable of performing any calculation.
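The mechanics just described can be sketched in a few lines of code. The simulator below is an illustrative simplification, not Turing’s formalism: the rule table (a ‘binary increment’ program), the state names and the blank symbol ‘_’ are all invented for the example.

```python
# Minimal Turing-machine sketch (rule table and state names invented).
def run_turing_machine(rules, tape, state="start", halt="halt"):
    """Run a one-tape machine until the halt state is reached.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (tape left) or +1 (tape right).
    """
    tape = dict(enumerate(tape))        # sparse tape; blank cells read '_'
    pos = 0
    while state != halt:
        symbol = tape.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol          # write back, forgetting history
        pos += move                     # shift the tape one place
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Binary increment: walk to the rightmost bit, then carry leftwards.
rules = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),   # 1 plus carry: write 0, carry on
    ("carry", "0"): ("1", -1, "done"),    # absorb the carry
    ("carry", "_"): ("1", -1, "done"),    # overflow: new leading 1
    ("done",  "0"): ("0", -1, "done"),
    ("done",  "1"): ("1", -1, "done"),
    ("done",  "_"): ("_", +1, "halt"),
}

print(run_turing_machine(rules, "1011"))  # 1011 + 1 = 1100
```

Note how nothing in the simulator itself constrains the rule table: a table whose transitions never reach the halt state sends the machine into an endless loop, exactly the instructionist freedom discussed below.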

Although the UTM is a model from the world of mathematics, it is essentially instructionist in nature. The sequence of states a UTM goes through is not defined by mathematical necessity, but by the conviction of its programmer that this sequence conforms to the (mathematical) ideas in his mind. In other words, the Turing machine provides a grammar for mathematical statements, but it does not put any restriction on the relation between statements – the restrictions are brought in by the human ‘programming’ the UTM. The transformation on the tape depends only on the current state and symbol, not on the preceding history. As a consequence the UTM does not demand a consistent framework and allows ad hoc transitions (which may cause the UTM’s operation to go on indefinitely, or, in computer terms, to enter an endless loop). Exactly these properties are characteristic of instructionism.

The majority of computer applications are based on the UTM concept. All hardware is – the processor reads an instruction, performs an action, completely forgets everything and goes on to the next instruction, which may well contradict the previous one. Most programming languages (like Fortran, Cobol, C++, Pascal, Basic and Java) follow the same concept. Programmes in these so-called imperative languages are sequences of instructions, where the internal coherence is primarily brought in by the programmer, not demanded by the tools themselves. This is why they rightfully deserve the name ‘languages’.
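A trivial sketch of this imperative style (the variable names are invented for illustration): each instruction is executed and then forgotten, and a later instruction may freely overrule an earlier one. Whatever coherence the fragment has is supplied by the programmer’s intent, not demanded by the language.

```python
# Imperative instructionism in miniature (names invented).
x = 10        # declare x to be 10 ...
x = -3        # ... and a moment later declare it to be -3: no conflict

total = 0
for step in range(5):
    total = total + step   # coherence comes from the programmer's plan

print(x, total)  # prints: -3 10
```

A mathematical system would reject ‘x = 10’ followed by ‘x = -3’ as a contradiction; an imperative language simply executes both and forgets the first.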

When software pioneer Peter Naur, in the late 1950s, wrote the definition of Algol, a language meant to be more structured than its predecessors, he took Wittgenstein’s theorem as a motto. It was another sign that the dilemmas of programming resembled those of language rather than of mathematics. Prominent linguists, Noam Chomsky among them, were intrigued too and studied the correspondences between natural and computer languages. Though he found it impossible to fit universal grammar and the UTM into a single theory, Chomsky did make it likely that computer languages are inherently ambiguous. At least, there is no deterministic method to prove (un)ambiguity in specific cases, neither for languages nor for compilers. With the formal ideal out of sight, ‘damage control’ becomes the focus of interest.

Naur’s problem, in this respect, was essentially the same as Wittgenstein’s: how can we be sure that what is expressed in lines of code is actually what we mean to say by it? Naur sought the solution in limiting the possible expressions in programming languages, the same approach the logical positivists followed. In Naur’s time this led to structured programming, an approach still followed today. Attempts to prevent bugs by limiting the expressive powers of programming tools have continued since, and have enriched computer science with, among other things, object-oriented programming, specification languages and automatic proofs.

Applause for this development is not unanimous, however. For the more formal a language becomes, the more difficulty humans have using it. Wittgenstein therefore, in his later studies, abandoned the all too stern implications of his earlier work. Even so, in practical computer science a balance is sought between formal mechanisms that enhance certainty and intuitively understandable constructions that enhance transparency. After all, flexibility is not only a menace but also one of language’s most powerful qualities.

7 Artificial inventionism

At the time Turing came up with his UTM, another model for calculation already existed. It was Alonzo Church’s lambda calculus. The two are logically equivalent: things that can be expressed with the UTM can also be expressed in the lambda calculus, and the other way around. The essential difference, however, is in the way both achieve their results.

The lambda calculus is based on a limited set of axioms and production rules. From these all other lambda expressions are derived. Hence, all lambda expressions are an integral part of a consistent whole. It could hardly be more inventionist. In other words, the UTM calculates by transitions, the lambda calculus by substitutions. In practice the result may be the same, but conceptually the difference is more than significant. A transition means that something which formerly was not, now is, while a substitution implies a timeless equivalence between what was and what is. In yet other words, the UTM solves problems by defining a sequence of instructions that leads to the solution, while the lambda calculus does so by defining a space of correct solutions and then searching this space for a solution that meets certain criteria.
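The contrast between transitions and substitutions can be illustrated with a small, assumed example in Python: the factorial computed once as a sequence of state changes in time, and once as a timeless equation that evaluation merely rewrites.

```python
# Transition style (UTM-like): a sequence of state changes in time.
def fact_imperative(n):
    result = 1
    while n > 1:      # each pass is a transition: the state before
        result *= n   # the step is not the state after it
        n -= 1
    return result

# Substitution style (lambda-calculus-like): a timeless equation.
# fact(n) IS n * fact(n-1); evaluation merely substitutes until no
# further rewriting applies.
def fact_declarative(n):
    return 1 if n <= 1 else n * fact_declarative(n - 1)

print(fact_imperative(5), fact_declarative(5))  # -> 120 120
```

Both deliver the same number, but only the first version has a history: intermediate states that exist at one moment and are gone the next.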

This difference is reflected in many practical issues concerning the declarative programming languages, notably Lisp and Prolog, that are associated with the lambda calculus. Declarative programming languages are known to be intuitively more difficult to understand than imperative ones. On the other hand, their structuredness has been put forward as a great benefit, as opposed to the informal approach of imperative languages.

On a more fundamental level, it can be shown that unlike an imperative programme, a declarative programme runs no risk of entering an infinite loop. For any lambda expression has a so-called fixed point, where further substitution does not yield new expressions anymore and the process must stop. It is something completely different, however, to devise a general substitution process that is guaranteed to find the fixed point of any lambda expression or, for that matter, the desired solution of any programme. For instance, the desired solution to ‘1+1’ is ‘2’, but it is fairly easy to have a substitution process that generates ‘3-1’, ‘4-2’, ‘5-3’, et cetera – all true but not the purpose of the calculation. In practice, therefore, declarative programmes do run the risk of entering a loop and delivering no answer at all.
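The runaway substitution process described above can be mimicked with a toy rewriter in Python (a hypothetical illustration; the rewrite rule and the pair representation are invented for this sketch): every rewritten expression still has the correct value, yet the desired normal form ‘2’ is never reached.

```python
# Represent the expression 'a - b' as the pair (a, b).
# The rewrite rule (a, b) -> (a + 1, b + 1) preserves the value
# (every result is 'true'), but the search only wanders through
# '3-1', '4-2', '5-3', ... and never arrives at the answer '2'.
def bad_rewrite(a, b):
    return a + 1, b + 1

expr = (3, 1)  # start from '3-1', an expression equal to 1+1
for _ in range(4):
    print(f"{expr[0]}-{expr[1]} = {expr[0] - expr[1]}")
    expr = bad_rewrite(*expr)
```

Left running, the loop would go on indefinitely: correctness of each step is no guarantee that the process converges on the answer one wants.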

Also, their timelessness demands that in declarative languages a variable, once it has been assigned a value, cannot change anymore. Changing a variable would disturb the inventionist idea that information, once declared true, cannot be adapted anymore. However, only a select group of true believers still sticks to this concept. In practice most declarative programming tools provide workarounds for it, because it is unworkable and counterintuitive to humans, for whom time is such an important consideration. So, where the tendency in imperative programming is to tighten the demands to enhance structure, declarative programming tools tend to loosen the demands to keep them serviceable.
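The single-assignment discipline can be contrasted with ordinary reassignment in a short Python sketch (hypothetical; Python itself is imperative, but the recursive version respects the rule that no name, once bound, is ever rebound).

```python
# Imperative style: the variable 'total' is reassigned at every step,
# so what was true of it a moment ago is no longer true now.
def sum_imperative(xs):
    total = 0
    for x in xs:
        total = total + x
    return total

# Single-assignment style: no binding is ever changed. Each recursive
# call introduces a fresh 'acc', so every statement about a binding
# remains true forever, as inventionism demands.
def sum_single_assignment(xs, acc=0):
    if not xs:
        return acc
    return sum_single_assignment(xs[1:], acc + xs[0])
```

The price of timelessness is visible even in this toy: the passage of time must be smuggled back in, here as a chain of function calls.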

Perhaps it is a little far-fetched, but it is at least curious that both human and artificial inventionism have been associated with the awakening of consciousness. Traditionally declarative programming tools have been linked to ‘artificial intelligence’ and the question whether computers can ‘think’. To complicate things further, this question evidently is a matter of identification: is a human capable of viewing the computer as a coherent whole in the same way he sees himself as a coherent whole?

Hence, coherence becomes the focus of interest in this debate as well, just as it was during the reconstruction of mathematics following the discovery of non-Euclidean geometry. That discussion was ended by Gödel’s theorem, and not surprisingly the same theorem plays an important role in the question of computer consciousness. For the computer is a mathematical system and hence, following Gödel, cannot produce all true statements. A human, however, can, because of his meta-mathematical knowledge of the theorem. On the other hand, a computer provided with the theorem may also produce the meta-statements, resulting in a paradoxical, endless game of leapfrog that can be shown not to lead to anything specific.

As a result of the lack of mathematically convincing evidence, discussions in artificial intelligence are currently directed at the practical aspects of coherence, the possibility to prevent flaws in programmes, rather than at the more fundamental question of what this coherence may eventually lead to. With the current state of knowledge it is simply impossible to move the matter of identification between man and computer (which is just as instructionist in nature as the idea that someone wearing a mask can actually be an evil spirit) to a more rational level.

8 Artificial adaptivism

Just as there is no clear theory underlying physics, there is no mathematical model underlying artificial adaptivism. There is, however, a common factor, to wit the use of iterations to reach a goal. For the idea of adaptivism is that ultimate knowledge should be reached through a process of approximation (or iteration, or convergence). The subliminal presence of adaptivism has caused the computer community to discern, despite the lack of an underlying theory, a clearly distinct third group of programming tools apart from imperative and declarative programming languages.

This group may be named ‘dynamic programming’. In dynamic programming the programmer does not define a sequence of instructions or a structured complex of substitutions, but an initial set of information entities and a process that determines in what direction the set develops towards a solution of a problem. The adaptivist nature of this approach should be clear. Furthermore, because the programmer defines a process rather than a structure that must lead to results, it is less straightforward to retain control over these results.

Neural networks and genetic algorithms are the best known branches of dynamic programming. Neural networks use an iterative process to establish the general relation between selected pairs of input and desired output of a certain problem. Once this learning process is over, neural networks are able to give adequate, though hardly ever exact output to any input of the problem. They are popular in problems of pattern recognition that are too complex to allow an analytical approach. Genetic algorithms perform random mutations on a set of input strings, evaluate the resulting strings and breed further with the ones that are closest to a certain criterion. The process stops after a fixed number of iterations or once a string has been found that meets the criterion for an acceptable solution. Since their randomness may lead to surprising jumps, genetic algorithms are excellently suited for searching complex solution spaces in directions that might otherwise have been left uncovered.
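As a rough sketch of the genetic-algorithm loop just described, consider this minimal Python example (the ‘all ones’ target problem and every parameter are assumptions made purely for illustration): mutate, evaluate, and breed further with the fittest strings until the criterion is met or the iteration budget runs out.

```python
import random

def evolve(length=16, pop_size=20, generations=200, seed=0):
    """Toy genetic algorithm: evolve bit strings towards all ones."""
    rng = random.Random(seed)
    # Initial set of random candidate strings.
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)   # evaluate: fitness = ones count
        if sum(pop[0]) == length:         # acceptable solution found
            break
        parents = pop[: pop_size // 2]    # keep the fittest half
        children = []
        for p in parents:                 # breed mutated copies
            child = p[:]
            child[rng.randrange(length)] ^= 1   # flip one random bit
            children.append(child)
        pop = parents + children
    return pop[0]

best = evolve()
print(sum(best))  # fitness of the best string found
```

Note that no individual instruction says how to reach the goal; the programmer defines only an initial set and a process, and the direction of development emerges from repeated evaluation, exactly the adaptivist pattern.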

Because of their longer history, most theoretical research on adaptivist programming methods has centered on neural networks. They have been shown to be mathematically equivalent to the UTM and the lambda calculus. On a more abstract level it is possible to derive a general theory of adaptivist programming methods, based on iteration theory, which, however, is still in need of a thorough mathematical foundation.

When adaptivism becomes philosophical it tends to reflect on the approximation of the ultimate goal, rather than its existence (which is either taken for granted or regarded as irrelevant, because steps in the right direction are just as important). Hence neural networks, with their inspiration from neurophysiology, initially fuelled the old ‘can computers think’ question, but have also seen it fade away.

Of more general concern is a subject related to system theory, namely that the non-analytical methods by which dynamic programming tools arrive at their results make those results difficult to understand. The results are derived through an interplay of units which in isolation do not seem to be meaningful at all. A single neuron or genetic string has nothing to say about the whole it is part of. As in system theory, this lack of understanding makes it very difficult to investigate convergence, that is, the degree to which one may be sure that the approximation process is actually going in the right direction. Consequently, as in the general philosophy of science, the discussion tends to be more abstract than those in instructionism and inventionism.

9 Conclusions

The above is but a rough framework. Nevertheless, enough evidence is available to argue that the triplet of instructionism, inventionism and adaptivism is a useful tool to understand the implicit ways in which humans categorize systems for the representation of information. Also, it is possible to trace the evolution of this scheme, summarized in Table 1, from prehistory to modern days.

Table 1

                  Instructionism   Inventionism          Adaptivism
Structuredness    No structure     Permanent structure   Dynamic structure
Cultural origin   Primitive        Oriental              Western
Human tool        Language         Mathematics           Physics
Limiting theory   Wittgenstein     Gödel                 (Heisenberg)
Programming tool  Imperative       Declarative           Dynamic

Summing up quickly, instructionism is a method of information representation where the relation between the information and the part of the real world it echoes is not limited by formal demands. Its counterpart, inventionism, on the other hand demands a formal structure that is both all-encompassing and immanent (the fact that this is seldom achieved in practice is less relevant). Adaptivism, one could say, is a compromise between instructionism and inventionism. It shares the quest for structure with inventionism, while at the same time acknowledging the existence of practical limitations and hence contenting itself with imperfect structures for the time being.

It is interesting to see that several debates in the philosophy of science, although quite different in appearance, can be traced back to dilemmas concerning the demands on how information is represented. Examples given are the discussions on ambiguity versus structure on the border between linguistics and mathematics, the Popper/Kuhn debate on the ideal evolution of scientific knowledge, and the controversy around computer consciousness.

But most of all it is intriguing to be able to resolve the scheme at all, to find that with such a fairly simple categorisation it is possible to describe so many, widely different phenomena in a coherent way – while at the same time having to conclude that there is hardly anything new since the Bhagavadgita.

This article appeared in ‘Interdisciplinary Science Reviews’, December 1999. For references, see this publication.