Month: March 2015
To the best of my knowledge it was not industry that invented the term “computer literacy”, but society. Quite early in the game it was firmly implanted in the people’s minds that computers were there to save our economies (and later also the economies of the countries of the third world). For a while, no one knew how, but fortunately someone hit upon the bright idea that information was “a resource”, so that solved that problem, the more so since, for the first time in the history of mankind, we had now a valuable resource of infinite supply. Then people remembered how oil had created immense riches for a happy few; this time that should not happen again, so “computer literacy” for the millions was invented to solve that problem. What that term should mean is still an open question, but that did not prevent our visionaries from happily proceeding to invent the post-industrial society in which people, if not happily teleconferencing, would devote their ample leisure time to creative video-games.
In Lisp’s quest for industrial respect, it abandoned its traditional customers. Symbolic algebra developers (e.g., Maple, Mathematica) forsook Lisp for the more portable and efficient C. Logicians gave up Lisp for Prolog and ML. Vision researchers, seeking the highest performance from parallel computers, dropped Lisp in favor of C, Fortran (!) and various fringe parallel languages. AI companies were forced to move their expert systems into C, C++, Ada and Cobol (!) because of the large size, poor performance and poor integration of Common Lisp systems. Academicians teaching undergraduates chose Scheme and ML for their small size, portability and clean semantics.
The Multinet would permeate society, Lick wrote, thus achieving the old MIT dream of an information utility, as updated for the decentralized network age: “many people work at home, interacting with coworkers and clients through the Multinet, and many business offices (and some classrooms) are little more than organized interconnections of such home workers and their computers. People shop through the Multinet, using its cable television and electronic funds transfer functions, and a few receive delivery of small items through adjacent pneumatic tube networks … Routine shopping and appointment scheduling are generally handled by private-secretary-like programs called OLIVERs which know their masters’ needs. Indeed, the Multinet handles scheduling of almost everything schedulable. For example, it eliminates waiting to be seated at restaurants.” Thanks to ironclad guarantees of privacy and security, Lick added, the Multinet would likewise offer on-line banking, on-line stock-market trading, on-line tax payment–the works.
In short, Lick wrote, the Multinet would encompass essentially everything having to do with information. It would function as a network of networks that embraced every method of digital communication imaginable, from packet radio to fiber optics–and then bound them all together through the magic of the Kahn-Cerf internetworking protocol, or something very much like it.
Lick predicted its mode of operation would be “one featuring cooperation, sharing, meetings of minds across space and time in a context of responsive programs and readily available information.” The Multinet would be the worldwide embodiment of equality, community, and freedom.
If, that is, the Multinet ever came to be.
– M. Mitchell Waldrop, “The Dream Machine”
AK You have to be a different kind of person to love C++. It is a really interesting example of how a well-meant idea went wrong, because [C++ creator] Bjarne Stroustrup was not trying to do what he has been criticized for. His idea was that first, it might be useful if you did to C what Simula did to Algol, which is basically act as a preprocessor for a different kind of architectural template for programming. It was basically for super-good programmers who are supposed to subclass everything, including the storage allocator, before they did anything serious. The result, of course, was that most programmers did not subclass much. So the people I know who like C++ and have done good things in C++ have been serious iron-men who have basically taken it for what it is, which is a kind of macroprocessor. I grew up with macro systems in the early ’60s, and you have to do a lot of work to make them work for you—otherwise, they kill you.
What I saw in the Xerox PARC technology was the caveman interface, you point and you grunt. A massive winding down, regressing away from language, in order to address the technological nervousness of the user. Users wanted to be infantilized, to return to a pre-linguistic condition in the using of computers, and the Xerox PARC technology's primary advantage was that it allowed users to address computers in a pre-linguistic way. This was to my mind a terribly socially retrograde thing to do, and I have not changed my mind about that.
An even harder problem is to convince the software industry to build software out of carefully designed components. What I see, however, is movement in the opposite direction. Hand-crafted, one-off, undisciplined code is impossible to replicate. Adobe did a fabulous job specifying PostScript; that allowed Peter Deutsch to single-handedly produce Ghostscript. Now Adobe is not going to specify Photoshop’s behavior. Let the Gimp guys try to replicate it. While Linus Torvalds was able to replicate Unix from the carefully written System V interface definitions, no one could replicate Windows: being nonstandard creates barriers to entry. There are grave economic reasons making any progress unlikely while undisciplined programmers generate huge amounts of capital. It’s analogous to the programmer whose terrible spaghetti code gives him job security, since no one else can understand it.
It’s interesting to see what music the baby sleeps to. Various successes so far include the Coraline soundtrack and the Ride of the Valkyries (!).
I’ve known for quite some time that I suffer from a habit that is common to many programmers. When I finish a challenging part of a program – after having spent an awful lot of time looking at incorrect output of one form or another – I find myself so surprised that things are working that I then spend a silly amount of time rerunning the program and admiring the correct output. It’s almost like I don’t believe that it’s really correct. When I eventually pull myself away from the programming equivalent of navel gazing, I find myself in a familiar, but unappealing, situation: I have a functioning, but incomplete system, and I need to add the next feature on the list to it. But adding another feature means that everything is going to stop working properly, and it’s going to take an awful lot of work to get back to another point where the system will be working properly. And so I stare and stare at the screen, semi-paralyzed by the thought of breaking my perfectly functioning (if incomplete) system. Of course, eventually I pull myself together, make the first stages of the necessary change for the new feature and then I’m away, only for the cycle to repeat itself when the system reaches its next stable state. I’ve come to realise that all I’m going through in such instances is the programming equivalent of writer’s block: the fear of setting pen to paper lest I start down the wrong track.