Smalltalk is powerful because all Smalltalk data are programs: all information is embodied by running, living objects. Class programming in Smalltalk is simply data manipulation, carried out by programs that are themselves data. It’s the inverse of the Lisp philosophy, but the end result is the same. It’s what enables the Smalltalk debugger to freeze, dissect, modify, and resume programs mid-execution. It’s what enables the browser to instantly find all objects that respond to a given message, all superclasses and subclasses of a given class, or every running instance of a given class. It’s why the Smalltalk IDE isn’t just written in the language; it quite literally is the language.
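Smalltalk isn’t required to get a feel for this. A rough analogue of those browser queries, sketched in Python using its own reflection facilities (the class names here are invented for illustration; in Smalltalk these are ordinary messages sent to live objects):

```python
# A rough Python analogue of Smalltalk-style browser queries over
# live objects: who responds to a message, what the class chain is,
# and which instances of a class currently exist.

import gc

class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

def responders(objs, message):
    """All objects (from objs) that respond to a given message."""
    return [o for o in objs if callable(getattr(o, message, None))]

def all_instances(cls):
    """Every running instance of a given class, found via the GC."""
    return [o for o in gc.get_objects() if isinstance(o, cls)]

c = Circle(2)
print(responders([c, "hi", 42], "area"))  # just the Circle
print(Circle.__mro__)                     # the superclass chain
print(len(all_instances(Circle)) >= 1)    # True: c is alive
```

The difference, of course, is that in Smalltalk these queries are how the whole environment is built, not a library bolted on the side.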

But files aren’t “Web Scale”!

Is that really true? And do you really care? Should you really care?

The answer to all of these questions is “No”. Files can easily be “web scale”. As of 2013, Hacker News is still running as a single process, on a single core, of a single server, backed by a directory structure of simple data files. Nearly 2 million page views are served daily.
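HN’s actual layout isn’t documented here, but the general pattern — the filesystem as the database, one small file per item — is easy to sketch. Paths and the JSON format below are assumptions, not HN’s real format:

```python
# A minimal sketch of file-backed storage: one small file per item.
# The directory layout and JSON encoding are invented for illustration.

import json
import os
import tempfile

DATA_DIR = tempfile.mkdtemp()  # stand-in for a persistent data directory

def save_item(item_id, item):
    path = os.path.join(DATA_DIR, f"{item_id}.json")
    tmp = path + ".tmp"
    with open(tmp, "w") as f:   # write-then-rename makes each item
        json.dump(item, f)      # update atomic on POSIX filesystems
    os.replace(tmp, path)

def load_item(item_id):
    with open(os.path.join(DATA_DIR, f"{item_id}.json")) as f:
        return json.load(f)

save_item(42, {"title": "Files are fine", "score": 1})
print(load_item(42)["title"])  # Files are fine
```

With the OS page cache keeping hot files in memory, a scheme like this serves a surprising amount of read-heavy traffic before anything fancier is needed.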

Oddly enough, it is much easier to explain Lisp macros to those who have experience with assembly-language macros on such “antiques” as the DEC PDP-10 or the IBM 360/370 mainframes. Those macro assemblers supported looping, deconstruction and construction of symbol names (down to individual characters), definition of new macros by macros, access to the assembler’s symbol table at compile time (reading and mutating the values of symbols and tags), “pass 1” vs. “pass 2” conditionals (collect data from the whole program in pass 1 and drop it into instruction and/or data locations in pass 2), etc., etc.

But for those whose first or only experience of “macros” came from the crippled incarnation of them in C, well, you are quite correct.
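Python has no macros, but the flavor of those assembler tricks — looping at expansion time, building symbol names, generating definitions before anything runs — has a crude analogue in code that writes code. Everything below is invented for illustration:

```python
# A crude Python analogue of expansion-time macro loops: generate a
# family of accessor methods by constructing their names from strings,
# before any of them is ever called.

def define_accessors(cls, fields):
    # The expansion-time loop: one pass over the field list stamps out
    # one method per field, its name assembled from pieces.
    for f in fields:
        def getter(self, _f=f):          # default arg pins f per method
            return getattr(self, "_" + _f)
        setattr(cls, "get_" + f, getter)

class Point:
    def __init__(self, x, y):
        self._x, self._y = x, y

define_accessors(Point, ["x", "y"])

p = Point(3, 4)
print(p.get_x(), p.get_y())  # 3 4
```

It is a pale shadow of what a real macro assembler (or Lisp) offers — there is no second pass, no symbol table access — but it conveys the idea of computing the program before running it.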

The ironic poetry of The Grand Budapest Hotel

I saw the movie when it came out, and then for a second time last month, with my parents. If you’ve seen it, you might have found yourself wondering, “What’s up with those poem fragments? Are they real?” Well, wonder no further, and thank the random person on the internet who collected them (and perhaps also (cough!) your humble curator, who stumbled across them).

P.S. It turns out they were all written by Wes Anderson himself

P.P.S. Yes, “Boy with Apple” is obviously not real either

That absolutely terrifies the herd-following, lockstep-marching, mainstream-saluting cowards who obediently dash out to scoop up books on The Latest Thing. They learn and use atrocities like Java, C++, XML, and even Python for the security it gives them and then sit there slaving away miserably, tediously, joylessly paying off mortgages and supporting ungrateful teenagers who despise them, only to look out the double-sealed thermo-pane windows of their central-heated, sound-proofed, dead-bolted, suffocating little nests into the howling gale thinking “what do they know that I do not know?” when they see us under a lean-to hunched over our laptops to shield them from the rain laughing our asses off as we write great code between bong hits …. what was the question?


Another big difference is that Lisp and Haskell gurus have different programming philosophies, and the difference doesn’t get articulated very often in a way that newcomers will grasp. Lisp gurus stress metaprogramming of a very syntactic flavor (sexps, and the big emphasis on interpreters and macros), while Haskell gurus stress algebraic laws and denotational semantics. The crown jewels of Lisp hacking tend to be sexp-based EDSLs that get macro-expanded into efficient code; in Haskell the crown jewels are denotational EDSLs built around opaque combinators stated in terms of the semantics, and equational rewrite rules for turning them into efficient code. Put very coarsely, in Lisp you metaprogram with expressions, in Haskell you metaprogram with meanings. (This is exaggerated because both styles exist in both communities; it’s a matter of emphasis.)
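The contrast can be rendered as a toy, here in Python (neither community’s actual idiom, and every name below is invented). Lisp flavor: the program is an expression (nested tuples standing in for sexps) that an expander rewrites before evaluation. Haskell flavor: combinators whose denotation is simply a function, with derived combinators defined by their meaning:

```python
# -- Lisp flavor: metaprogramming with expressions --------------------
def expand(sexp):
    """Macro-expand ("square", x) into ("*", x, x), recursively."""
    if isinstance(sexp, tuple):
        if sexp[0] == "square":
            x = expand(sexp[1])
            return ("*", x, x)
        return tuple(expand(s) for s in sexp)
    return sexp

def evaluate(sexp, env):
    """A tiny evaluator for the expanded expression language."""
    if isinstance(sexp, tuple):
        op, *args = sexp
        vals = [evaluate(a, env) for a in args]
        return {"*": lambda a, b: a * b, "+": lambda a, b: a + b}[op](*vals)
    return env.get(sexp, sexp)   # symbol lookup, or a literal number

prog = ("+", ("square", "x"), 1)
print(evaluate(expand(prog), {"x": 3}))  # 10

# -- Haskell flavor: metaprogramming with meanings --------------------
# A combinator's denotation IS a function from environments to values;
# new combinators are defined directly in terms of that meaning.
def lift(n):   return lambda env: n
def var(name): return lambda env: env[name]
def add(f, g): return lambda env: f(env) + g(env)
def mul(f, g): return lambda env: f(env) * g(env)
def square(f): return mul(f, f)   # defined by its semantics, no syntax

denot = add(square(var("x")), lift(1))
print(denot({"x": 3}))  # 10
```

Same program, same answer; but in the first half you manipulate the syntax tree, and in the second there is no syntax tree to manipulate at all — only meanings composed from meanings.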

Success in computation depends partly on the proper choice of a formula and partly on a neat and methodical arrangement of the work. For the latter, computing paper is essential. A convenient size for such a paper is 26” by 16”; this should be divided by faint ruling into 1/4” squares… Every computation should be performed with ink in preference to pencil; this not only ensures a much more lasting record of the work but also prevents eye-strain and fatigue.

D. Gibb, “A Course in Interpolation and Numerical Integration”, 1915

Media Diet

I’ve always been a media glutton, over-dosing on blogs, movies, videos, and such. As of a few weeks ago, I sort of quit cold turkey1. No news, no blogs, nothing. Yes, even the New York Times, which is the one thing I read daily, is no longer a real habit2.

A good side effect of this is that while I read less stuff, I read it more thoroughly and I have more fun reading it.

So here’s a plan: I’ll post a list of ten things I read every month (yes, the things that are read are so few I can count them on the fingers of both hands! No? Not funny? Ok).

  1. Almost: I do try to limit myself to a bit of Reddit now and then. 
  2. I still occasionally browse it for recommended or staff-picked reader comments (highly underrated, IMHO, and usually better than the Op-Ed pieces themselves). 

Kay and Ingalls et al. had a very specific vision for personal computing as liberating, and the keys to that vision were full access, full comprehensibility and making no distinction between users and programmers. The market has clearly shown that people would rather not have full access, don’t care about comprehending the system and desperately want a firmly regimented distinction between users and programmers.

When people who can’t think logically design large systems, those systems become incomprehensible. And we start thinking of them as biological systems. And since biological systems are too complex to understand, it seems perfectly natural that computer programs should be too complex to understand.

We should not accept this. That means all of us, computer professionals as well as those of us who just use computers. If we don’t, then the future of computing will belong to biology, not logic. We will continue having to use computer programs that we don’t understand, and trying to coax them to do what we want. Instead of a sensible world of computing, we will live in a world of homeopathy and faith healing.