Modularity and Integration
In the world of market competition, big software companies are primarily preoccupied with their income, and any user convenience is evaluated in the millions of dollars it might bring. As a result, commercial software is updated in a rush, with new features introduced without any real need, just to prevent competitors from seizing the niche. From one version to the next, formerly friendly applications become increasingly cumbersome and heavy, and the few frequent operations that the majority of users had singled out from the whole store of options and valued so much become less and less accessible, as the interface is adapted to an entirely different technology.
One could object that complex tasks require equally complex software, and that the diversity of possible applications explains the clumsiness of modern products. This is the price to pay for flexibility and power. For instance, one user may treat MS Office as an enhanced typewriter, while somebody else needs it as a presentation tool, and yet another is happy with its Web integration tools. Most users will hardly ever touch the built-in scripting, while it is an absolute must for application developers.
Unfortunately, this road leads to a dead end. Any integrated environment is only integrated within itself, while different toolkits are poorly compatible with one another. This is a direct consequence of market competition, as a producer is obviously not interested in sharing proprietary technologies to ensure 100% compatibility. The objective necessity of unification forces the competitors to talk to each other; however, all they can do is set up a common communication framework incorporating standard protocols, open data formats, and basically compatible scripting tools. All that links the different products only externally, without much influence on their native ideology.
A dream environment would uniformly accommodate any specialized application, regardless of its maker, so that users could easily install and uninstall any modules whenever they need, dynamically configuring the system. Such a flexible system would offer only a limited number of the most frequently used tools, somewhat resembling the logic of smart menus, with only the recent items shown at any moment and the rest expandable on demand. That is, the user should not have to perform a voodoo dance every time a new feature is to be added, and then pray to keep it working ever afterwards; all one would need is to select the feature and confirm the installation. The new tools would be automatically integrated into the already existing interface.
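The install-on-demand and smart-menu logic described above can be sketched in a few lines. This is a minimal illustration, not a real system; all the names here (Toolkit, install, menu) are invented for the example.

```python
# A sketch of dynamically configurable tooling: features register themselves
# at runtime with no restart, and the interface surfaces only the most
# recently used ones, keeping the rest expandable on demand.
from collections import OrderedDict
from typing import Callable

class Toolkit:
    def __init__(self, visible_limit: int = 3):
        # Insertion/usage order doubles as the menu order.
        self._features: "OrderedDict[str, Callable[[], str]]" = OrderedDict()
        self.visible_limit = visible_limit

    def install(self, name: str, action: Callable[[], str]) -> None:
        """Add a feature at any time, with no manual configuration."""
        self._features[name] = action

    def uninstall(self, name: str) -> None:
        """Remove a feature just as easily."""
        self._features.pop(name, None)

    def use(self, name: str) -> str:
        """Run a feature and promote it to the front of the menu."""
        self._features.move_to_end(name, last=False)
        return self._features[name]()

    def menu(self) -> list:
        """Only the most recently used features are shown by default."""
        return list(self._features)[: self.visible_limit]

    def full_menu(self) -> list:
        """Everything else remains available on demand."""
        return list(self._features)
```

For example, after installing three features into a `Toolkit(visible_limit=2)` and invoking the last one, the default menu would show only that feature and one other, while `full_menu()` still lists all three.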
Yes, computers do develop in that direction, and many applications can be loaded from an online store and installed in the background, especially on mobile devices. But this is not true integration, since the same application has to be individually redesigned for each of the commonly used operating systems in order to become multiplatform. The integrity of the platform is preserved at all costs.
With a powerful enough computer, one can install several virtual machines interconnected by a kind of local network and let each platform run in its own virtual environment. However, this is an eclectic solution that cannot provide much flexibility. Moreover, one needs to learn at least the basics of several operating systems instead of talking to the computer in one's individual way. Within a single platform, some integration features are practically feasible. The Windows XP mode under Windows 7 is certainly the most impressive implementation so far; the Unity mode in VMware Workstation is yet another example, though with a much lower level of integration. However, data formats and referencing differ too much between operating systems, and such integration tools cannot be universal enough. Ideally, the user should be able to grow one's own operating system from independent building blocks, never bothering about their compatibility.
To put it bluntly, assume that I need one feature from Windows, another from Linux, some feature from VAX VMS, and a trick from AS/400, as well as a flavor of Macintosh and a tint of Android. In addition, I would like a number of tools from Photoshop combined with some GIMP behavior, and a text editor combining elements of MS Word and Scientific TeX, compatible with PDF in both directions. Can I install only the features I need? As of today, the answer is no. If each component of each operating system (or other software) were separable from the rest and seamlessly combinable with other modules, everybody could have the system of one's dreams, implementing only the required functionality in a uniform interface without any redundancy.
Of course, that would require an entirely new approach to software development. First, functionality is to be fully separated from implementation. This requires a universal language to describe the needs of the user. I mean truly universal, and not merely an arbitrarily imposed standard. That is, an entirely new paradigm that nobody could have imagined a decade earlier should be expressible in the same language, without any need to invent a better one. In this sense, the universal computer language must be like natural languages, which have adapted to any cultural changes for centuries. Implementing any new functionality will therefore mean translating the universal description into a lower-level language suited for specialized software design. Obviously, this choice is not unique, and it may lead to different implementations, provided the original user requirements are kept intact. Different companies can use their proprietary technologies as they like, provided that their products, equipped with all the necessary adapters, can talk to the user in the common language. This is basically equivalent to a family of dynamically configurable virtual machines using the same integration platform.
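The separation of a common description from its competing implementations can be sketched as an adapter pattern: the user states *what* is wanted in one shared vocabulary, and interchangeable backends translate it into their own lower-level operations. The description keys and backend names below are invented purely for illustration.

```python
# A sketch of functionality separated from implementation: one common
# request description, several vendor backends that each fulfil it with
# proprietary internals, as long as the user's requirement is kept intact.
from abc import ABC, abstractmethod

class Backend(ABC):
    """Any vendor may implement this adapter however it likes internally."""
    @abstractmethod
    def render(self, request: dict) -> str: ...

class PlainTextBackend(Backend):
    # This "vendor" expresses emphasis with capital letters.
    def render(self, request: dict) -> str:
        text = request["text"]
        return text.upper() if request.get("emphasis") else text

class MarkupBackend(Backend):
    # This "vendor" expresses the same requirement with markup tags.
    def render(self, request: dict) -> str:
        text = request["text"]
        return f"<em>{text}</em>" if request.get("emphasis") else text

def fulfil(request: dict, backend: Backend) -> str:
    """The description stays the same; only the implementation varies."""
    return backend.render(request)
```

The same request, say `{"text": "hello", "emphasis": True}`, is honored by every backend, each in its own technology; swapping one backend for another changes nothing in the user-facing description.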
The elements of this approach are gradually penetrating the minds of programmers, as the popularity of declarative languages and functional programming grows. One could also mention various Web development platforms and content management systems that significantly enhance the flexibility of industrial programming. The possibility of independent parallel usage is a prototype of the new style of computing. However, to be universal, such tools must also admit translation from one platform to another, as well as freedom of borrowing and combination.
Obviously, true universality will have to cover all practical areas, including unification at the hardware level. Today, it may be difficult to get drivers for an old video or sound card, or for a printer manufactured a few years ago. When Linux entered into competition with MS Windows, it tried to win over new adepts with proud declarations of broader hardware compatibility; unfortunately, such declarations had little to do with reality. There are people (like me) who do not care much for novelty, provided they have all the functionality they need; they love the old software they have used for decades, and they would like to keep on using it under all the newest operating systems to come. Similarly, decade-old equipment is not always utterly unusable; for various reasons, one may need to exploit old hardware along with newer computers. To ensure that, a universal store of hardware and software adapters must be created; a hierarchical organization could make it compatible with any future development, so that we do not lose a bit of our cultural heritage.
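The hierarchical organization suggested above can be sketched as a store that resolves a device identifier to the most specific adapter available, falling back to more generic family entries, so that even hardware with no dedicated driver keeps working through an ancestor adapter. The identifiers and driver names below are hypothetical.

```python
# A sketch of a hierarchical adapter store: drivers are looked up from the
# most specific device path upward, so old or obscure hardware falls back
# to a generic family adapter instead of becoming unusable.
class AdapterStore:
    def __init__(self):
        self._adapters: dict = {}

    def register(self, path: str, driver: str) -> None:
        """Register a driver under a hierarchical id, e.g. 'printer/acme/model42'."""
        self._adapters[path] = driver

    def resolve(self, path: str):
        """Walk up the hierarchy until some adapter matches; None if nothing does."""
        parts = path.split("/")
        while parts:
            key = "/".join(parts)
            if key in self._adapters:
                return self._adapters[key]
            parts.pop()  # drop the most specific component and retry
        return None
```

With a generic entry registered under `printer` and a specific one under `printer/acme/model42`, a request for an unlisted `printer/acme/model7` would still resolve to the generic printer adapter, which is exactly the property that keeps decade-old equipment serviceable.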
To conclude, the development of computing is to establish a universal communication platform allowing for absolute flexibility of individual computer systems. Anything will combine with anything in the world of computers, thus increasing our creativity beyond any limits. This is the ideal of modularity; but it is also the principal definition of consciousness. Who knows, maybe universality like that will mean computers developing to a level above mere intelligence, towards a kind of reason?