On Urbit

(content here copy-pasted from this HN thread, because I thought it was a good summary)

What is Urbit?

Urbit is a virtual machine OS for server-side applications. If you imagine a future in which it is common for non-technical people to rent cloud server space on which to host server-side applications (say, a small blog, a mastodon node, a minecraft server, etc), Urbit aspires to be a good platform on which to host them.

Urbit features:

  1. All input events (http request to an urbit-based api, signed message from another urbit, keystroke from console, etc) are transactions which change the OS state (or don’t, if they fail). As a result, it should be impossible for a transaction to fail halfway through and leave the urbit instance hosed.
  2. Exactly-once messaging between nodes. This is possible because nodes have persistent connections; disconnection is indistinguishable from long latency. This may sound minor, but it is a huge part of what makes urbit novel and (theoretically) stable and secure.
  3. Built-in identity and auth. An urbit instance can’t boot without an identity, which serves as username, network-routing address, and the anchor for the keys used to encrypt and sign its messages. In practical terms, this means no urbit app or service needs to deal with logins or passwords or crypto.
  4. There are only 2^32 first-class identities, which makes urbit a de facto reputation network. The scarcity is meant to minimize malicious behavior: if an identity costs $5, and you can only make $2 from spamming before that identity is blacklisted, no one will spam.
  5. The urbit network is hierarchically federated, and hence resistant to censorship. (Of course, this is a misfeature if you want to be able to censor people off of the networks you participate in.)
  6. It’s not there yet, but the urbit kernel aspires to be so small and simple and formally-provably-correct that at some point it’s done. As in, done done: no features to add, no bugs to fix, done. A lot of the design decisions (some of which are wildly unperformant) make no sense unless you take this goal into account. More on that here: https://urbit.org/blog/toward-a-frozen-operating-system
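Feature #1 (the transactional event log) is easy to sketch. Below is a toy Python model of the idea only; it is not Urbit’s actual architecture (which is built on Nock and Hoon), and every name in it is invented for illustration:

```python
from copy import deepcopy

class EventLogOS:
    """Toy model of a transactional, event-sourced OS state.

    Each input event is applied as an all-or-nothing transaction:
    the handler runs against a copy of the state, and only a fully
    successful run is committed (and logged, so replaying the log
    rebuilds the state).
    """

    def __init__(self):
        self.state = {"files": {}, "counter": 0}
        self.log = []  # append-only event log

    def poke(self, event, handler):
        scratch = deepcopy(self.state)   # work on a copy of the state
        try:
            handler(scratch, event)      # may fail halfway through...
        except Exception:
            return False                 # ...but then nothing is committed
        self.state = scratch             # atomic commit
        self.log.append(event)
        return True

def write_twice(state, event):
    state["files"]["a"] = event["data"]
    state["counter"] += 1
    if event.get("boom"):
        raise RuntimeError("crash mid-transaction")
    state["files"]["b"] = event["data"]

os_ = EventLogOS()
os_.poke({"data": "hello"}, write_twice)               # commits
os_.poke({"data": "oops", "boom": True}, write_twice)  # rolls back entirely
```

After the failed second poke, the state still reflects only the first event; there is no half-written “a” without its matching “b”, which is the point of feature #1.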

A painting

Came across this painting through Twitter.

Something about it appealed to me.

The colors and shape of the arches reminded me of Bruegel’s Tower of Babel.

The title of the painting is “Das Begräbnis eines Kreuzritters”, by Franz Ludwig Catel.

Or, translated into English, “The Burial of a Crusader”.

Costs of war

Amid the recent spate of numbers being thrown around (a few hundred million dollars for these weapons, forty billion dollars for some more stuff), I was casting around for “how to put these numbers into perspective“.

I found this great article, from the Project on Government Oversight, that covered a lot of bases, and led me to look for more.

This is the first resource, from the Pentagon itself.

This is pretty terrible in itself; if you multiply the two numbers on the bottom line, the DoD estimate comes to $1,497,006,000,000, which is a large amount of money.

This is the second resource, which indicates that the DoD estimate is about a third of the total cost.

Additional factors, such as interest on the money borrowed and veteran care, bring the total to $6.4 trillion over 19 years.

Even if we take the “lower” estimate of $5.4 trillion, it’s still hard to viscerally make sense of it.

One way is to divide it by the number of days: 19 * 365 = 6935.

$5.4T / 6935d = $778.659M/d

Let’s be generous and round down.

The “burn rate” comes to $750 million, every single day.

Now that’s a lot of money.
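The arithmetic above, spelled out as a quick sanity check (a throwaway Python sketch; the variable names are mine):

```python
total_cost = 5.4e12  # the "lower" estimate of the total, in dollars
days = 19 * 365      # 19 years, ignoring leap days

burn_rate = total_cost / days
print(f"{days} days, ${burn_rate / 1e6:.2f}M per day")
# 6935 days, roughly $778.66M per day; rounded down generously, ~$750M/day
```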

As a fun (well, darkly fun) aside, you probably can’t even burn money that fast.

  • Caveat #1: literally burning money is illegal, don’t try this.
  • Caveat #2: there is a way of destroying large amounts of cash, but only the Fed gets to do it, by shredding it (Again, not as fast as $750M a day)

This does make for a good unit of comparison (“DWOTS”, or “Daily War On Terror Spend“).

Here are some things worth “1 DWOTS”:

Resisting the virtual life

A book from the 1990s on “resisting the virtual life”.

An endorsement of sorts:

“At last, a defiant radical critique of the information millennium… A burning barricade across the highway to the total surveillance society.”

A review from the turn of the millennium gleefully putting the book down as a party-pooper.

But this is the best part (IMO): an article about it from two years further on, written in 2002 (twenty years ago!), when the book was still in print; excerpts below, with my remarks in parentheses.

No one can deny that our lives have been changed in just a few, short years. Only seven years after this book was published, the Internet has become commonplace in industrialized countries, and is making inroads into developing countries as well.

(This is almost cute in its naïveté … “our lives” were going to change far, far more)

This book is an interesting snapshot of the way people thought in 1995. Some of what the authors discuss and predict has come true, and some has not. 

(and these articles are interesting snapshots of how people thought they were “done changing” back then, that the “impact of the internet had been absorbed”, and so on)

Technologies engender new values, and lead to shifts in existing value systems, causing instability and a risk of societal implosion. The oft-cited example of the Luddites, English weavers who destroyed the machines that would replace them, is used as a metaphor for those who question these new values. But the Luddites acted out of corporatist, economic fears – they saw a technology that was going to cut them out of the system of production, and eliminate their gainful employment. Today’s Luddites are different – they try to raise awareness of the hypocrisy and complications that may arise from these new technologies.  

(twenty years later, “today’s luddites” would be right once again to worry about being “cut out of the system of production”)

… sometimes, the authors are way off the mark. Herbert I. Schiller equates the NII with a system designed for “none other than transnational corporations.” But, while the Internet has become a marketplace, at least in part, its greatest influence has been on individuals. E-mail remains the killer app of the Internet, peer-to-peer has usurped traditional distribution models, and instant messaging (and its cell-phone sibling, SMS) have surprised even those companies who have developed these applications. 

(Written before e-mail and instant messaging consolidated into a few centralized providers, and before the quality of the “marketplace” became less of a charming bazaar and more of … something else)

Well, so what?

If nothing else, it shows how cyclical these trends can be, and how it can take time, sometimes a good deal of time, before the full implications of a given technological change are known.

License to Share

Yes, tacky. Her Majesty’s GPL-enforcer.

I came across Drew DeVault’s essay on source code licenses, from a few years ago, and agreed with the first few lines:

I generally preferred the MIT license. I actually made fun of the “copyleft” GPL licenses, on the grounds that they are less free

Yes, so far so good.

… once I started using Linux as my daily driver, however, it took a while still for the importance of free software to set in. But this realization is inevitable, for a programmer immersed in Linux. It radically changes your perspective when all of the software you use guarantees these four freedoms. If I’m curious about how something works, I can usually be reading the code within a few seconds. I can find the author’s name and email in the git blame and shoot them some questions. And when I find a bug, I can fix it and send them a patch.

Okay, sure, don’t disagree. But then, the lede:

These days, on the rare occasion that I run into some proprietary software, this all grinds to a halt. It’s like miscounting the number of steps on your staircase in the dark. These moments drive the truth home: Free software is good. It’s starkly better than the alternative. And copyleft defends it.

Making the point clearer:

I’ve learned that the effort I sink into my work far outstrips the effort required to reuse my work. The collective effort of the free software community amounts to tens of millions of hours of work, which you can download at touch of a button, for free. If the people with their fingers on that button held these same ideals, we wouldn’t need the GPL.

The case being that we need the GPL because of human nature … or at least human society as it exists right now.

The GPL is the legal embodiment of this Golden Rule: in exchange for benefiting from my hard work, you just have to extend me the same courtesy. It’s the unfortunate acknowledgement that we’ve created a society that incentivises people to forget the Golden Rule. I give people free software because I want them to reciprocate with the same. That’s really all the GPL does. Its restrictions just protect the four freedoms in derivative works. Anyone who can’t agree to this is looking to exploit your work for their gain – and definitely not yours.

As someone who’s oscillated between GPL & FSF and MIT/BSD, this is definitely something to chew on.

Lispworks from SublimeText

Create a “build script” to dump a headless image:

(in-package "CL-USER")
(save-image "~/lw-console"
            :console t
            :environment nil
            :multiprocessing t)

Run it

~ /Applications/LispWorks\ 8.0\ \(64-bit\)/LispWorks\ \(64-bit\).app/Contents/MacOS/lispworks-8-0-0-macos64-universal -build ~/tmp/resave.lisp
; Loading text file /Applications/LispWorks 8.0 (64-bit)/Library/lib/8-0-0-0/private-patches/load.lisp
LispWorks(R): The Common Lisp Programming Environment
Copyright (C) 1987-2021 LispWorks Ltd.  All rights reserved.
Version 8.0.0
Saved by LispWorks as lispworks-8-0-0-arm64-darwin, at 06 Dec 2021 17:56
User agambrahma on agams-mbp.lan
; Loading text file /Users/agambrahma/tmp/resave.lisp
;  Loading text file /Applications/LispWorks 8.0 (64-bit)/Library/lib/8-0-0-0/private-patches/load.lisp
Build saving image: /Users/agambrahma/lw-console
Build saved image: /Users/agambrahma/lw-console
Build split image: /Users/agambrahma/lw-console.lwheap
Build executable: /Users/agambrahma/lw-console

Install SublimeREPL and Slyblime

Modify settings

Within sly.sublime-settings, you should have something like:

  "inferior_lisp_process": {
    "command": ["/Users/agambrahma/lw-console"],
    "autoclose": true,
    "loading_time": 2,
    "setup_time": 1
  }

(where command points to the executable generated above)

Kick start SLY

Run “SLY: Start and connect to an inferior lisp instance”

Enjoy REPLing!

On writing and sharing

I found this prologue from a recent newsletter by Justin Murphy inspiring:

If you can figure out the truth, you should share it. You might help someone.

But you’re unlikely to figure out the truth because you want to help people.

You’re unlikely to figure out the truth because you want to “join a conversation.”

You’re unlikely to figure out the truth because you want to build an audience.

You’re only going to figure out truths if you’re passionate about knowing the truth, if you find exhilarating the work of trying to figure out the truth.

Mustering the discipline to write on a regular basis is a battle against yourself, against your own feeling that it doesn’t matter.

“Writing is a Single-Player Game”, from “Other Life”

There’s a very real kind of procrastination hinted at here, one that I can unfortunately relate to.

Cycles of threading

From this HN thread

I gotta say, as a programmer “of a certain age”, a lot of recent advances have an everything-old-is-new-again feel to me. When I first heard of “threads” in the 90s, the term specifically referred to a mechanism for intra-process concurrency that was “lightweight”, i.e. it didn’t have the heavy overhead of forking a new OS process. IIRC they weren’t even preemptively multitasked. Then preemptive “OS threads” became the new hotness and everything using cooperative threads was old and busted. Now a generation has passed and we have “virtual threads” or things like asyncio to solve the problems created by the thing that solved the problems of the thing before.

So now we’re in the position where it’s perfectly normal to run a program with several virtual threads running in a “real” thread in a Java virtual machine running inside a Docker container running on a VM instance in a hypervisor on some giant box somewhere. And if we’re all living in a simulation, then it’s probably turtles all the way down.
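The everything-old-is-new-again point can be seen in a few lines of Python: asyncio tasks are cooperatively scheduled on a single OS thread, much like the “lightweight” threads of the 90s, in that each task keeps the CPU until it voluntarily yields. A minimal sketch (the names `worker` and `main` are mine):

```python
import asyncio

async def worker(name, results):
    # Cooperative concurrency: this "thread" holds the CPU until it awaits.
    for i in range(3):
        results.append(f"{name}:{i}")
        await asyncio.sleep(0)  # voluntary yield point, like a 90s green thread

async def main():
    results = []
    # Two tasks interleave on one OS thread; no preemption is involved.
    await asyncio.gather(worker("a", results), worker("b", results))
    return results

print(asyncio.run(main()))
# The tasks take turns at each yield point: a:0, b:0, a:1, b:1, a:2, b:2
```

Remove the `await asyncio.sleep(0)` and each worker runs to completion before the other starts, which is exactly the failure mode preemptive OS threads were invented to avoid.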

Makers of Pizza

I went to pick up a pizza yesterday, which we sometimes order from this place that’s about a ten-minute walk away.

As I was waiting to pick it up and pay for it, I was able to see how the pizza was made in the brick-lined wood-fired oven.

There was a very competent woman who was operating this one-person army of pizza making.

Taking the dough, rolling it, then spinning it into that big flat shape. Placing it on the big flat metal board with the long handle, and using that to push it all the way in. And later taking it out. All of this was happening extremely efficiently.

I had the sudden feeling that this was a person who was actually making something.

We have a lot of this glib talk about “makers” — but then frequently ignore people like this.

I paid for my pizza and left.