Came across this pros/cons table recently (from some unnamed company deciding on its microservice language of choice):
Mostly agree, except that
We do have generics in Go (as of 1.18), and “… unfamiliar with the language” is true for any language
The Python cons will remain as they are (though they should matter much less when not doing CPU-intensive tasks)
The Rust cons will reduce as (1) it becomes less ‘new’, (2) the ecosystem becomes ‘stronger’, and (3) async/await has now been standardized; but yes, the memory model will always take time to learn.
Given this, and admitting a lack of Kotlin knowledge, IMO Go seems like a safe default choice (or Rust/Python where appropriate).
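Since the generics point above is load-bearing for the Go column, here is a minimal sketch of what Go 1.18 type parameters look like (the `Map` helper is my own illustration, not from any standard library):

```go
package main

import "fmt"

// Map applies fn to every element of a slice. T and U are type
// parameters, available since Go 1.18; "any" is the unconstrained
// interface.
func Map[T, U any](xs []T, fn func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, fn(x))
	}
	return out
}

func main() {
	doubled := Map([]int{1, 2, 3}, func(x int) int { return x * 2 })
	fmt.Println(doubled) // [2 4 6]
}
```

Before 1.18 this kind of helper needed `interface{}` plus type assertions, or per-type copies; that's the con the table was presumably written against.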
Clear, straightforward chapters discuss a broad range of questions using principles of computer science, such as why we should teach students to code, and whether coding is a science, engineering, technology, mathematics, or language.
(content here copy-pasted from this HN thread, because I thought it was a good summary)
What is Urbit?
Urbit is a virtual machine OS for server-side applications. If you imagine a future in which it is common for non-technical people to rent cloud server space on which to host server-side applications (say, a small blog, a mastodon node, a minecraft server, etc), Urbit aspires to be a good platform on which to host them.
All input events (HTTP request to an urbit-based API, signed message from another urbit, keystroke from console, etc) are transactions which change the OS state (or don’t, if they fail). As a result, it should be impossible for a transaction to fail halfway through and leave the urbit instance hosed.
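That “event = transaction” idea can be sketched as a pure state-transition function that either returns a whole new state or an error; this is my own illustration of the pattern, not Urbit code:

```go
package main

import (
	"errors"
	"fmt"
)

// State stands in for the entire OS state. Each event either yields a
// complete new state or fails, leaving the old state untouched.
type State struct {
	Log []string
}

type Event struct {
	Kind string
}

// step is the transition function. It never mutates its input, so a
// handler that fails partway through cannot leave the system corrupted.
func step(s State, ev Event) (State, error) {
	if ev.Kind == "bad" {
		return State{}, errors.New("event rejected")
	}
	// Copy-then-append so the old state's slice is never shared.
	next := State{Log: append(append([]string{}, s.Log...), ev.Kind)}
	return next, nil
}

func main() {
	s := State{}
	for _, ev := range []Event{{"keystroke"}, {"bad"}, {"http"}} {
		if next, err := step(s, ev); err == nil {
			s = next // commit only on success
		}
	}
	fmt.Println(s.Log) // [keystroke http]
}
```

The commit-only-on-success loop is what makes a half-failed event impossible by construction: the old state is only replaced once the handler has fully succeeded.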
Exactly-once messaging between nodes. This is possible because nodes have persistent connections; disconnection is indistinguishable from long latency. This may sound minor, but it is a huge part of what makes urbit novel and (theoretically) stable and secure.
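Exactly-once delivery is classically built from at-least-once retransmission plus receiver-side deduplication by sequence number. A toy sketch of the receiver half (my own illustration, not Urbit's actual wire protocol):

```go
package main

import "fmt"

// Msg carries a monotonically increasing per-sender sequence number.
type Msg struct {
	Seq     uint64
	Payload string
}

// Receiver applies each message at most once, in order. Paired with a
// sender that retransmits until acknowledged (at-least-once), the net
// effect is exactly-once application.
type Receiver struct {
	next    uint64 // sequence number expected next
	Applied []string
}

// Deliver returns the ack (the next expected seq). Duplicates and
// replays (m.Seq != next) are acknowledged but not re-applied.
func (r *Receiver) Deliver(m Msg) uint64 {
	if m.Seq == r.next {
		r.Applied = append(r.Applied, m.Payload)
		r.next++
	}
	return r.next
}

func main() {
	r := &Receiver{}
	r.Deliver(Msg{0, "hello"})
	r.Deliver(Msg{0, "hello"}) // retransmitted duplicate: ignored
	r.Deliver(Msg{1, "world"})
	fmt.Println(r.Applied) // [hello world]
}
```

The "persistent connection" point above is what lets the sequence counters live forever: neither side ever resets its state, so disconnection really is just long latency rather than a new session.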
Built-in identity and auth. An urbit instance can’t boot without an identity, which serves as username, network-routing address, and also as the public key with which all outgoing messages are encrypted. In practical terms, this means no urbit app or service needs to deal with logins or passwords or crypto.
There are only 2^32 first-class identities, which makes urbit a de facto reputation network. This is to minimize malicious behavior; if an identity costs $5, and you can only make $2 from spamming before that identity is blacklisted, no one will spam.
The urbit network is hierarchically federated, and hence resistant to censorship. (Of course, this is a misfeature if you want to be able to censor people off of the networks you participate in)
It’s not there yet, but the urbit kernel aspires to be so small and simple and formally-provably-correct that at some point it’s done. As in, done done – no features to add, no bugs to fix, done. A lot of the design decisions (some of which are wildly unperformant) make no sense unless you take this goal into account. More on that here: https://urbit.org/blog/toward-a-frozen-operating-system
I generally preferred the MIT license. I actually made fun of the “copyleft” GPL licenses, on the grounds that they are less free.
Yes, so far so good.
… once I started using Linux as my daily driver, however, it took a while still for the importance of free software to set in. But this realization is inevitable, for a programmer immersed in Linux. It radically changes your perspective when all of the software you use guarantees these four freedoms. If I’m curious about how something works, I can usually be reading the code within a few seconds. I can find the author’s name and email in the git blame and shoot them some questions. And when I find a bug, I can fix it and send them a patch.
Okay, sure, don’t disagree. But then, the lede:
These days, on the rare occasion that I run into some proprietary software, this all grinds to a halt. It’s like miscounting the number of steps on your staircase in the dark. These moments drive the truth home: Free software is good. It’s starkly better than the alternative. And copyleft defends it.
Making the point clearer:
I’ve learned that the effort I sink into my work far outstrips the effort required to reuse my work. The collective effort of the free software community amounts to tens of millions of hours of work, which you can download at the touch of a button, for free. If the people with their fingers on that button held these same ideals, we wouldn’t need the GPL.
The case being that we need the GPL because of human nature … or at least human society as it exists right now.
The GPL is the legal embodiment of this Golden Rule: in exchange for benefiting from my hard work, you just have to extend me the same courtesy. It’s the unfortunate acknowledgement that we’ve created a society that incentivises people to forget the Golden Rule. I give people free software because I want them to reciprocate with the same. That’s really all the GPL does. Its restrictions just protect the four freedoms in derivative works. Anyone who can’t agree to this is looking to exploit your work for their gain – and definitely not yours.
As someone who’s oscillated between GPL & FSF and MIT/BSD, this is definitely something to chew on.
I gotta say, as a programmer “of a certain age”, a lot of recent advances have an everything-old-is-new-again feel to me. When I first heard of “threads” in the 90s, the term specifically referred to a mechanism for intra-process concurrency that was “lightweight”, i.e. it didn’t have the heavy overhead of forking a new OS process. IIRC they weren’t even preemptively multitasked. Then preemptive “OS threads” became the new hotness and everything using cooperative threads was old and busted. Now a generation has passed and we have “virtual threads” or things like asyncio to solve the problems created by the thing that solved the problems of the thing before.
So now we’re in the position where it’s perfectly normal to run a program with several virtual threads running in a “real” thread in a Java virtual machine running inside a Docker container running on a VM instance in a hypervisor on some giant box somewhere. And if we’re all living in a simulation, then it’s probably turtles all the way down.
I still remember a wonderful presentation by Damian Conway a number of years ago about all of the great ways Perl 6 could turn into whatever domain specific language you needed it to be. It was beautiful, I was awestruck. I’ve always enjoyed Perl as a language. But I walked out of that presentation thinking “That was so beautiful, and I don’t want it anywhere near my business.” Because the last thing I need is software written in a language I can’t hire anyone else to maintain.
At some level that’s what I think has happened to Perl in general. I never liked Python much, until I got forced to use it on a new team. Now I’m convinced that it has a really distinct advantage — there aren’t too many ways to write Python, so an experienced Python developer can figure out code pretty quickly. But Perl programs are frequently art pieces that take a lot of effort to truly grok.
From the foreword to “The Unix Haters Handbook” by Donald Norman, author of previous works like “The Trouble with Unix: The User Interface is Horrid“, and “The Design of Everyday Things“, and a fellow at Apple and IDEO.
A surprising find, in the comments section of a YouTube video: Alan Kay clarifying, in response to “he doesn’t advocate using Smalltalk today, but didn’t say what he would recommend“:
Smalltalk was “the subset we could fit on the Xerox Parc computers” of ideas about designing and making systems, partly catalyzed by Sketchpad, Simula, a few of the late 60s operating systems, Lisp, and the ARPAnet.
There were many things we thought would be good to do that we didn’t do. Many of these could be done today after 50 more years of Moore’s Law.
The Smalltalk system is an extensible system made from a few simple fundamental ideas about form and transaction plus a library of definitions at every level (we got this deep idea from Lisp, especially Lisp 1.85 at BBN).
This means that a lot could be accomplished just by completely redesigning and rewriting the library without having to do a lot with the kernel.
The form part could be improved a little, but it can handle much of the 50 years later.
The kernel part could be much improved by doing some of the things we chose not to tackle. For example, the kind of “OOP” we thought was OOP back then was basically a module scheme made from dynamic virtual machines/servers as though on a network with messages as requests rather than as commands.
What we did was enough to be able to pretend that we had the real thing and be able to get away with the pretensions. For example, [the] way we used Lisp pointers and sharing is anti-module, and makes it difficult for the interior of an object to really be encapsulated, etc. This could be pretty easily redesigned and fixed, to make really good modules with much more controlled dependencies on the exterior environment.
We knew that we didn’t really want to send messages to objects — this doesn’t scale well — we really wanted to just receive. And we especially liked Gelernter’s LINDA scheme as a gesture to a style to do publish and subscribe in a really nice way.
But because Smalltalk was extremely powerful and expressive in its day, and the Parc computers were very small, we were able to do pretty much everything from bottom to top in about 10,000 lines of code, and what was not complete in Smalltalk did not get in the way, whereas what was good about Smalltalk made it possible to quickly turn careful designs into real-time working systems.
And there are many more avenues — for example for development — that should be in any system done today.