The Future in Java?
For a developer, the choice of a programming language does not make much difference: they are all based on the same logic, and code written in one language could just as well be expressed in any other. Any preference is dictated by the business environment and personal inclination rather than by any real advantage. There was a time when most Web sites were in English, simply because of the American origin of the technology and the burden of tradition; as soon as Unicode became widespread, localized content quickly forced English out of the national networks. Today, with free automatic translation tools (however imperfect), one can browse any Web resource without necessarily knowing its language; on the other hand, building a multilanguage site no longer involves much programming overhead.
In the 1990s, Java was pompously presented as the universal language of the future that would take the place of all the others, as computer processors were expected to implement Java byte code directly, getting rid of the traditional instruction sets. Luckily, this idea proved commercially impractical and was strongly opposed by the major hardware producers, and the future in Java became a mere recollection of a nightmare of the past.
Indeed, the language is in no way an attraction. It lacks elegance and economy of expression; it demands much work-around programming to overcome its inherent rigidity; it is not extensible enough to incorporate the expressiveness of other languages; it is not intended for any customization. When all the richness of the real world is reduced to a single construct (a class), what else could be expected?
Today, the idea of an intermediate level between machine code and programming languages is almost a commonplace. Most modern languages assume conversion to a kind of "byte code" that can be further compiled or interpreted, and the portability of this byte code determines the portability of applications and services. In the early 1960s, this idea was extensively studied in the USSR. A unified low-level syntax was proposed for the Soviet computers of the time, while applied programming was to use a family of specialized high-level languages designed for efficient coding in specific application areas. These languages were simply labeled with letters of the Greek alphabet (Alpha and Epsilon being the most popular). Much later, the same idea came back as Microsoft's .NET platform, with many programming languages (C#, Visual Basic, F#) being translated into the same intermediate language and sharing the same libraries.
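The byte-code idea can be illustrated by a minimal sketch: a tiny stack machine whose instruction set is independent of any source language, so that any front-end could emit the same intermediate code. The opcodes (PUSH, ADD, MUL) and the class name TinyVM are invented here for illustration; a real intermediate language such as JVM byte code or .NET CIL is, of course, far richer.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TinyVM {
    // Opcodes of our hypothetical intermediate language.
    public static final int PUSH = 0, ADD = 1, MUL = 2;

    // Interprets a byte-code sequence and returns the value
    // left on top of the evaluation stack.
    public static int run(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (int pc = 0; pc < code.length; pc++) {
            switch (code[pc]) {
                case PUSH: stack.push(code[++pc]); break;           // operand follows the opcode
                case ADD:  stack.push(stack.pop() + stack.pop()); break;
                case MUL:  stack.push(stack.pop() * stack.pop()); break;
                default: throw new IllegalArgumentException("bad opcode: " + code[pc]);
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        // (2 + 3) * 4 — any source language could compile to this sequence.
        int[] code = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL };
        System.out.println(TinyVM.run(code)); // prints 20
    }
}
```

The same code array could just as well be translated to native instructions ahead of time; whether the intermediate form is interpreted or compiled is an implementation choice, which is exactly what makes it portable.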
There is no such thing as perfect portability, just as there is no perfect environment or development platform. Good code can be produced in any language, while no environment can guarantee exceptional quality. Ideally, each developer would assemble an individual toolkit combining heterogeneous elements from any languages, platforms and environments, integrated through a hierarchy of interfaces to ensure concurrent operation and data exchange. The only objective basis for interoperability and portability lies in the common organization of practical tasks, the general principles of computer architecture, the purpose-oriented level of communication protocols, and the typical data formats.
The demand for flexible and personalized (customizable) programming gave rise to various language extenders (pre-compilers, embedded scripting etc.). Progress in automated compiler design will probably return us to the early days of VAX/VMS, with its highly customizable language definitions. The hierarchical organization of languages and scripts, and the ideology of middle code in particular, should simplify this task. Well, after all, Java was a step in the right direction, though its inherent commercial rigidity entirely spoiled the effect and hindered further development. In this sense, there was some future in Java too, along with the other contenders rather than against them.