Niklaus Wirth sent me this transcript of an interview he recently gave to a
European journal. Although you may disagree with some of Wirth's
conclusions, you might nevertheless find this interesting.
1) You are one of the most influential academics. Your work had a great impact both on academia and on the practice of software development. However, in many cases the universities and the "real world" are, indeed, two worlds. When I talk with a professor and with a programmer, I see far too often that they think about software in very different ways. A recent IEEE survey of "what lies ahead in software" revealed basically no intersection between the opinions of academics and practitioners. As one of the few whose work has been influential in both fields, what's your opinion (or perhaps your secret)?
> If there is a "secret" at all, then it is that of being both a programmer and a professor. The division is rather unfortunate and the source of many problems. Professors typically spend their time in meetings about planning, policy, proposals, fund raising, consulting, interviewing, travelling, etc. etc., but relatively seldom at their "drawing boards". As a result, they lose touch with the substance of their (rapidly developing) subject, they lose the ability to design, they lose sight of what is essential, and resign themselves to teaching academically challenging puzzles.
I have never designed a language for its own sake, but always because I had a practical need for it that was not satisfied by available languages. For example, Modula and Oberon were byproducts of the designs of the workstations Lilith (1979) and Ceres (1986). My being a teacher had a decisive influence on making language and systems "as simple as possible" in order to let my teaching concentrate on the essential issues of programming rather than on details of language and notation.
Yes, the drifting apart of practice and academia is unfortunate.
2) You probably know about the "good enough software" concept popularized by Yourdon. In many senses, it's just a rationalization of what's happening in the software world: the first company hitting the market with a feature-rich product is more likely to win the battle than the careful, quality-seeking company. Do you think there is anything developers and software factories can do about that? I guess many developers would be happy to be given more time to develop better software, but the company ought to survive too. "Educate the users" seems more a wild dream than a possibility...
> The fallacy is that "good enough software" is rarely good enough. It is a sad manifestation of the spirit of modern times, in which an individual's pride in his/her work has become rare. The idea that one might derive satisfaction from one's successful work, because that work is ingenious, beautiful, or just pleasing, has become ridiculed. Nothing but economic success, monetary reward is acceptable. Hence our occupations have become mere jobs.
But quality of work can be expected only through personal satisfaction, dedication and enjoyment. In our profession, precision and perfection are not a dispensable luxury, but a simple necessity.
3) As you know there is a large debate about "software engineering as a profession". In fact, many people working in software development never had a strong education, or any significant experience. Do you think software engineers should be licensed as other engineers? Should something be changed in the curricula of software engineers/computer scientists to make them more effective? What is in your opinion the "ideal" education a software engineer should be given?
> Recently I read a final report of a research project funded by the Swiss National Science Foundation. The project's naive goals were identified as follows:
1. How can easy programming be achieved (in particular, for non-experts)?
2. How can a mechanism be realized which allows hiding the difficult parts of parallel programming?
After more than 30 years of programming we ought to know that the design of complex software is inherently difficult. This in spite of the fact that for decades industry has been advertising programmers' positions by claiming that "programming is easy". Later on, when doubts arose even among the advertisers, they switched to promising a wide variety of tools to facilitate the arduous tasks. Tools became the slogan; the right tools, paired with clever tricks and serious management methods, would work wonders. Dijkstra then called Software Engineering "Programming in spite of the fact that you can't".
Indeed, the woes of Software Engineering are not due to lack of tools, nor of proper management, but largely due to lack of sufficient technical competence. A good designer needs to rely on experience, on precise, logical thinking, on pedantic exactness. No magic will do.
In the light of all this it is particularly sad that in many informatics curricula "Programming in the Large" is badly neglected. Design has become a non-topic. As a result, software engineering has become an El Dorado for hackers. The more chaotic a program looks, the smaller the danger that someone will take the trouble of inspecting and debunking it.
4) Speaking about education, many people think that it's easier to learn the object oriented paradigm if one had no previous exposure to another paradigm. This seems a big mistake to me, as an experienced developer should have a knowledge of other paradigms as well. In my opinion, early software engineering works (like Parnas's papers) and programming classics like your Systematisches Programmieren are as useful today as they were years ago. What's your opinion?
> Many people tend to look at programming styles and languages like religions: If you belong to one confession, you cannot belong to others too. But this analogy is another fallacy. It is maintained for commercial reasons only.
Object-oriented programming (OOP) solidly rests on the principles and concepts of traditional procedural programming (PP). OOP has not added a single novel concept, but it emphasizes two concepts much more strongly than was done in PP. The first such concept is that of the procedure bound to a composite variable called object. (The binding of the procedure is the justification for it being called a method). The means for this binding is the procedure variable (or record field), available in languages since the mid 1970s. The second concept is that of constructing a new data type (called subclass) by extending a given type (the superclass).
It is worthwhile to note that along with the OOP paradigm came an entirely new terminology with the purpose of mystifying the roots of OOP. Thus, whereas a procedure used to be activated by "calling" it, one now "sends a message to the method". A new type is no longer built by "extending" a given type, but by defining "a subclass which inherits its superclass".
An interesting phenomenon is that many people learned for the first time about the important notions of data type, of encapsulation, and (perhaps) of information hiding when introduced to OOP. This alone would have made the introduction to OOP worthwhile, even if one didn't actually make use of the essence of OOP later on.
In a way, OOP falls short of its promises. Our ultimate goal is extensible programming (EP). By this we mean the construction of hierarchies of modules, each module adding new functionality to the system. EP implies that the addition of a module is possible without any change in the existing modules. They need not even be recompiled. New modules not only add new procedures, but - more importantly - also new (extended) data types. We have demonstrated the practicality and economy of this approach with the design of the Oberon System [1].
5) Recently I came across an advertisement for Borland Delphi (which as you know is a sort of o.-o. Pascal with extensions for event handling) which said "Delphi 2.0 gives developers a language almost as readable as BASIC...". Apparently it was a quote from a PCWeek review of the product. But it sounded so terribly wrong to me. "Almost as readable" as a language without a sound notion of data type? On the other hand, we cannot hide the fact that to a large extent, Basic (Visual Basic) has won on the market, and is probably the first example of a commercial language with a huge market of components. As the father of Pascal, what is your opinion? Did Basic really win? If so, why?
> We must be careful with words like "readable", "user friendly", etc. They are vague at best, and often refer to taste and established habits. But what is conventional need not necessarily also be convenient. In the context of programming languages, perhaps "readable" should be replaced by "amenable to formal reasoning". For example, mathematical formulas are hardly what we might praise as easily readable, but they allow the formal derivation of properties that could not be obtained from a vague, fuzzy, informal, "user friendly" circumscription. The construct
WHILE B DO S END
has the remarkable property that you may rely on B being false after the statement's execution, independent of S. And if you find a property P that is left invariant by S, you may assume that P also holds upon termination. It is this kind of reasoning that helps in the reliable derivation of programs, and that dramatically reduces the time wasted on testing and debugging.
Good languages not only rest on mathematical concepts which make logical reasoning about programs possible, but they rest on a small number of concepts and rules that can freely be combined. If the definition of a language requires fat manuals of a hundred pages and more, and if the definition refers to a mechanical model of execution (i.e. to a computer), this must be taken as a sure symptom of inadequacy. But alas, in this respect Algol had in 1960 been far ahead of most of its successors, in particular of all those that are so popular today.
6) Another very popular language is C++. I know you are not particularly fond of it, and that in many cases, a safer language could be better. However, sometimes I wonder if it wouldn't be wiser to help programmers instead of battling them. For instance, in many cases C++ programmers would be happy to use a safer version of the language: not all of them are so concerned with 100% compatibility with C. A version of C++ where pointers and arrays are clearly separated, and where you get a warning when (e.g.) you assign a float to a long, and so on, would help them write better programs without requiring them to learn a completely new language. I understand that it is more pleasing to design a pure language than to try to make a fragile one safer, but then, if just a handful of people use this pure language, are we really advancing the state of software development?
> My duty as a teacher is to train, educate future programmers. In trying to do this as well as possible, I present fundamental notions as clearly and succinctly as possible. I certainly do not let an inadequate notation hinder me in this task. If students have grasped the important ideas and have gained a certain versatility and familiarity with the subject, they find no difficulty in adapting to other languages if required (although they typically complain about the new inconveniences). I do not see why anyone would call this "battling programmers".
One may indeed wonder why nobody in the vast software industry has undertaken the task proposed by you: Defining a safe subset of C++. I can figure out two reasons: (1) The software world is eager for "more powerful" languages, but not for restrictive subsets. And (2), such attempts are doomed to fail just like attempts to strengthen the structure of a house built on sand. There are things that you simply cannot add as an afterthought.
7) To conclude: isn't the software development community focusing too much on technical issues, forgetting that software development is mostly a human activity? For instance, I think that one of the reasons for the popularity of BASIC and C is that their relative lack of constraints allows for some "local solutions" (euphemism for patch :-) to be introduced late in the development cycle. We all know that we should carefully design software before going to code. But we also know very well that (in most cases) management does not want to pay _now_ for _long term_ benefits (this is one of the reasons some people are not so happy with OOP). Hence, software is routinely patched when already in an advanced state of development, and a language which allows that to be done without too many worries will be more widely used than one which requires a large investment in upfront design. But then we continue to bash programmers for being "dirty" when in fact they are just playing in the real world, not in an ideal world. Wouldn't it be better to think about programming languages with more consideration for human issues?
> I remember a long discussion in an academic seminar in the mid 1970s, when the word "software crisis" was in full swing and the notion of correctness proofs of programs was put forward as a possible remedy. Professor C.A.R. Hoare, the speaker, had eloquently presented the principles and the advantages of correctness proofs replacing testing. After a long discussion about the pros and cons Jim Morris got up and disarmingly asked: But Tony, what is your answer if we frankly confess that we dearly love debugging? You want us to abandon our most cherished enjoyment!
20 years later we know that correctness proofs haven't much influenced programming practice at large. Workers still happily enjoy debugging, armed with ever more sophisticated (and undebugged) tools. Is this what you call "consideration of human issues"? Everybody wants progress all right, but only if it allows them to retain acquired habits.
1. N. Wirth and J. Gutknecht. Project Oberon. Addison-Wesley, 1992, ISBN 0-201-54428-8.
dated 31st January 1997