Mathematics and Computers

One can often hear that the computer revolution at the turn of the XX and XXI centuries was due to the achievements of modern mathematics and, in particular, that the major trends of software development merely implement some abstract mathematical ideas. Despite all the apparent evidence, including the memoirs of great programmers, this viewpoint can hardly be accepted without reserve. There is no reason to trust the subjective accounts of IT gurus more than the feelings of any other person. People rarely pay attention to the hidden motives of their activity; all they can report is a number of transitory goals, a partial representation of the cultural background they normally do not observe (since any attempt to shift the focus of attention to the conditions of one's activity terminates that very activity and unfolds the activity of reflection instead). When a programmer insists that some product was inspired by a beautiful mathematical theory, this is but a superficial impression, and the real source of inspiration is to be sought elsewhere. Quite often, mathematical considerations are added in retrospect to an already available practical scheme, as a kind of justification or promotion; indeed, no computer program exactly follows its mathematical "prototype". Programming is a practical discipline that cannot be reduced to pure mathematics, even in the guise of "computer science".

The very words 'computation', 'operation' and 'algorithm' are said to come from mathematics, and this is admittedly an argument in favor of the primary role of mathematical science in computing. Yet even assuming the validity of such attributions, terminological history has nothing to do with the origin of the corresponding notions; the same notion can be expressed in quite different terms, or go without any verbalization at all. But actually, the tales about the mathematical roots of the fundamental concepts of computing are mere fantasy. On closer examination, one would rather suspect exactly the contrary: mathematicians invent symbolic notation for something that has already become a common practice, a part of our cultural experience and, in a way, of our everyday life. Long before any formal "proof", people used to convince each other by the very way of action, and it is only much later that the typical "proofs" were systematized and codified under the name of formal logic, which, in a couple of millennia, was further truncated to what we know as mathematical logic. But if you scoop up a little water from a well to slake your thirst, the well is still there, regardless of your scoop of water, and people may use it in many other ways. Some alternative notions of proof have already been assimilated by mathematical thought; many more are yet to be discovered, without in any way diminishing the importance of what is already known. Similarly, the idea of an algorithm, a formal prescription for the solution of a class of similar problems, has existed in human culture from the most ancient times, long before the first sprouts of mathematical science.

In human society (and among the higher animals), any successful action tends to promptly become a model for other actions. Primitive people did not clearly see the reasons for efficiency, and they had to stick to superficial details, fixing the form of an action regardless of its real content; this rigid construction was then imposed on all the members of the community by the authorities and sanctified by the priests, thus leading to a social norm, a common pattern later reflected in the arts, science and philosophy. Such formal prescriptions were absolutely necessary at the earlier stages of human development; they play an important role in modern culture as well, supporting its stability and the congruity of its evolution. However, a creative person would not overrate such formalities, however productive. They are always restricted to specific social conditions, a certain level of cultural development; the slightest novelty inevitably breaks the rules. That is why mathematics has never followed the algorithmic line in its own development, and still less can it recommend it to programming.

Step-by-step instructions exist for many practical activities far from mathematics. Take any drawing class, or a dance school, for examples from the arts; a quick user guide is available for any home appliance, and there is no professional training without learning a number of routine operation sequences. However, the algorithmic component can never prevail in ordinary life, which is full of unexpected turns demanding an immediate creative response. It is only simple artificial objects like mathematical constructs that admit extensive formal operation; any real object is much more complex than the most intricate mathematical phantasm. Still, in many practical cases, we can control the level of relevance, tracking just a few principal traits and treating the rest as compensable side effects. This is what we explicitly do in programming.

The power of mathematics comes from oversimplification. It prompts us to adjust our activities to match the level of simplicity marked by formal mathematical constructs, thus making the whole thing simpler and more tractable; of course, the actual complexity does not go away, but it can effectively be pushed into the background, to the lower levels of hierarchy. That is, the practical value of mathematics lies in hierarchically structuring our everyday life, which opens new perspectives for efficient algorithmic procedures, which suggest more abstractions, and so on ad infinitum. Here, we are facing the old question of the chicken and the egg. Cultural progress gets reflected in mathematical theories, which, in their turn, stimulate certain cultural changes. Such circularity is a characteristic feature of any development at all, and it may seem that the idea of a special role of mathematics thus gains additional support. Well, in a way, this is a valid description of an old-style theoretician, who would not care for anything beyond fundamental science (as long as there is no problem procuring the required material support). However, cultural progress is currently so fast that there is no time to develop a solid mathematical background for every operational regularity, and one has either to proceed on a trial-and-error basis, or to adapt some existing mathematics, remaining fully aware of the transient and approximate nature of such ad hoc schemes. In this mutable world, mathematics is expected to provide principles rather than solutions, thus coming much closer to social science, with practically nothing to compute. Similarly, computers have evolved from mere computing machines into universal control devices, with numerical calculations mainly reserved for the presentation level. In this respect, modern computers are much closer to the ancient mechanical and hydraulic toys, which later passed their experience on to industrial automated production lines.

The idea of a computing device came as a natural continuation of all the preceding technological development, as rational reasoning was getting ever more formalized within the first advances of institutionalized science of the modern type. This algorithmic approach to human creativity was promoted in the early Utopian writings, and it grew very popular in the arts long before any scientific applications. Just recall the well-known dice-driven musical composition algorithm attributed to Mozart, as well as similar schemes by his contemporaries and predecessors. However, the Pythagorean tradition of treating mathematics as a kind of art was later reversed, and many artists came to believe that the inner integrity and harmony of the arts is due to some mathematical laws built a priori into human nature.
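The dice-driven scheme just mentioned can be sketched in a few lines. The lookup table below is a made-up stand-in for the historical tables of pre-composed measures, not the actual data of any period dice game:

```python
import random

# A toy lookup table: for each of 8 positions in the piece, the sum of two
# dice (2..12) selects one of eleven pre-composed measure numbers.
# The measure numbers here are invented purely for illustration.
TABLE = [
    [i * 10 + pos for i in range(2, 13)]  # one row of options per position
    for pos in range(8)                   # 8 measures in the minuet
]

def compose(rng: random.Random) -> list[int]:
    """Roll two dice per position and pick the corresponding measure."""
    piece = []
    for pos in range(8):
        roll = rng.randint(1, 6) + rng.randint(1, 6)  # sum is 2..12
        piece.append(TABLE[pos][roll - 2])            # row index 0..10
    return piece
```

The point of the historical game is exactly what the sketch shows: the "composer" is a fixed table plus a source of chance, a formal prescription operating without any creative intervention.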

Anyway, automating the routine operations of a mathematically laden scientist of the XIX–XX centuries was quite an obvious suggestion, given the habitual presence of manual "computers" like the abacus and mechanical adding machines. All one needed to do was to supply a mechanical device with an independent source of power and feed it a program, in a manner already habitual in mechanical pianos or the textile industry. The very fact that the practical implementation came rather late indicates the auxiliary role of mathematics (and formal science in general) in the history of computers; the progress was primarily due to hardware development, to new processing technologies rather than processing rules. And it is these empirically found modes of operation that shaped mathematical programming, not the other way around.

Though modern computer science is a brainchild of the "digital revolution", analog computing has never lost its practical importance and conceptual significance. Discrete mathematics is a relatively narrow branch of mathematics in general, while the idea of approximation lies at the core of any science at all. Both ways are equally productive: discrete algorithms are often approximated by some smooth data flow, just as continuous processes get discretized in their digital models; computer modelling absolutely dominates over numerical calculation in modern science and industry. Neither of the opposites can live without the other. To listen to digital music, we need an analog device, while discretized music is much more tractable and safer to store. In this context, the traditional musical notation and the tradition of creative performance could be considered a prototype of modern information technologies. In the end, any computer is an analog device, albeit used in a digital manner.
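The two-way traffic between the continuous and the discrete can be illustrated by ordinary sampling: a smooth signal is reduced to a finite list of numbers, and an "analog" curve is then rebuilt from them by interpolation. A minimal sketch (the sample count and interpolation scheme are arbitrary choices for illustration):

```python
import math

def sample(f, t0: float, t1: float, n: int) -> list[float]:
    """Discretize a continuous signal f on [t0, t1] into n evenly spaced samples."""
    step = (t1 - t0) / (n - 1)
    return [f(t0 + k * step) for k in range(n)]

def reconstruct(samples: list[float], t0: float, t1: float, t: float) -> float:
    """Rebuild an 'analog' value at time t by linear interpolation."""
    n = len(samples)
    step = (t1 - t0) / (n - 1)
    k = min(int((t - t0) / step), n - 2)  # index of the sample to the left of t
    frac = (t - t0) / step - k            # fractional position between samples
    return samples[k] * (1 - frac) + samples[k + 1] * frac

# Digitize one period of a sine wave, then read it back between the samples.
digital = sample(math.sin, 0.0, 2 * math.pi, 64)
approx = reconstruct(digital, 0.0, 2 * math.pi, 1.0)  # close to sin(1.0)
```

Both directions are present here at once: the continuous sine is approximated by 64 numbers, and those numbers are in turn smoothed back into a continuous function of t.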

Here is where we come to the point. Mathematics is essentially static; it studies the structural aspects of human activity. Computing, on the contrary, is all about time; without an intentional ordering of events, one simply could not speak of computation (operation). To bridge the gap, one needs to somehow introduce time into mathematics and structure into programming. However, mathematical time cannot be anything but yet another structure; similarly, programmatic structures can only be meaningful in a dynamic sense, as operation types. That is, as soon as we have established any correspondence between mathematics and computing, it must inevitably be broken, to start a new cycle of reflection.

To illustrate this, let us turn to the procedures of measurement. There is an important difference between space and time. Spatial dimensions are static (structural), and we need to deliberately move in space to be able to evaluate any distances. Not so with time. On the contrary, to accurately measure time, we need to stay as still as possible, to avoid the influence of any spatial displacements on the readings of the clock (be it a gnomon, an hourglass, a clepsydra, or even a calendar, for longer periods). In a way, this is the very meaning of the words 'space' and 'time': space is what can be taken at the same time; time is what happens in the same place. No relativity considerations will cancel this fundamental distinction, as they can only refer to specific numerical representations (modes of measurement). Considering the relativistic interval, rather than the separate measures of space and time, is merely a reformulation of the same trivial statement: moving in space, we introduce certain errors into time measurement, while lack of simultaneity results in slightly distorted spatial relations.
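In standard notation (with c the speed of light), the interval just mentioned is

```latex
ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2
```

and it is this combined quantity that all inertial observers agree on, while dt and dx, dy, dz taken separately do not coincide between observers; the trade-off between temporal and spatial measurement errors is built into the formula itself.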

However, as soon as we choose a particular device to measure time, we introduce a specific structure, a temporal scale. All the local events can then be related to the marks on this scale, as if they were taken simultaneously, like the points of some space. We can formally combine this scale with the former spatial dimensions and study the geometry of the resulting space. But this immediately poses the problem of comparison, since the evolution of the geometrical structure thus obtained unfolds in some other time, which would produce a different time scale, and so on. Both space and time become hierarchical; this hierarchy can be represented by many hierarchical structures, depending on the chosen modes of measurement.

A margin note: in early science, any time scale implied a spatial implementation in the form of a dial; that is, the moments of time were labeled with spatial positions. The exact shape of the dial does not matter; for one example, take the usual watch face with a few turning hands related to different time scales (in this particular case, one time scale can, in a way, be reduced to another; this is not generally so). Today, time indication can be rather sophisticated: instead of single spatial positions, time marks may involve various spatial distributions (by definition, taken at the same time). Still, in any case, a time mark is a virtual activity developing "in no time" on the current scale; namely, the process of measurement (mark-up).
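The reduction of one time scale to another on a common dial can even be written out numerically: the minute hand's position is a pure angle on its own scale, while the hour hand's angle partly depends on the minutes as well. (The 12-hour dial and degree units are, of course, just the conventional choice.)

```python
def hand_angles(hour: int, minute: int) -> tuple[float, float]:
    """Angles (in degrees, clockwise from 12) of the hour and minute hands.

    The minute hand encodes one time scale directly; the hour hand encodes
    a coarser scale partly reduced to the first, advancing half a degree
    for every minute that passes.
    """
    minute_angle = minute * 6.0                      # 360 deg / 60 min
    hour_angle = (hour % 12) * 30.0 + minute * 0.5   # 360 deg / 12 h, plus drift
    return hour_angle, minute_angle

# At 3:30, the hour hand has drifted halfway between the 3 and the 4.
```

The coupling term (minute * 0.5) is exactly the "reduction" of the hour scale to the minute scale mentioned above; on two independent dials it would disappear.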

Now, with computing (understood in the most general sense, as a series of data control operations), the situation is exactly the same. At every step, one structure is to produce another; however, without a particular time scale, there is no way to define any structures at all, since the "simultaneous" events of one level may unfold into temporal sequences on another, and vice versa. To structure computing (for example, to distinguish the initial data, the processing and the result), one needs to compare it with some other process (benchmarking). Any structure is only meaningful within a higher-level structure; any sequencing implies folded lower-level activities. The hierarchical organization of human activity is reflected both in mathematics and in information technologies, as well as in any other cultural area.


[Mathematics] [Science] [Unism]