By a process of elimination, I've arrived at the conclusion that I should write Rust, or at least give it a rigorous try.
Let us say I want to write a "native" program.
This train of thought started with wanting to write a program, an xfdesktop replacement, that can serve as my desktop background, slowly meandering through a pastel game of life, or floating through a Mandelbrot set. But the specifics are not relevant, because I find myself on the same train when thinking of other native programs, CLI tools, and the like that I want to write.
My weapon of choice is TypeScript, a sword mighty yet light to wield, cutting through problems like butter. TypeScript also compiles to JavaScript, so it runs everywhere. Or does it?
While I can jump through hoops to compile JavaScript into a binary, the result wouldn't feel "solid". And the very point of writing a native program in the first place is to make it feel solid.
Maybe this is a preconception on my part. Maybe one day the
TypeScript -> JavaScript -> WASM -> binary
pipeline will be straightforward, or maybe it already is and I'm just not aware of it.
That leaves me with the following options — C, C++, Go, Rust.
Technically, there are a lot more options, and I wrote a long section here about eliminating them piecewise, but after writing it I felt like it was just noise.
Of these, C++ is the easiest to eliminate. I once spent an entire year in the heaven of C++, walking around in a glorious daze of std::vector and RAII, before one day snapping out of it and realizing that I was just spawning complexity that is unrelated to the problem at hand. The experience was so vivid that I've never felt the urge to partake in C++ ever again.
So C, Go and Rust.
There are two dimensions at play here: "simplicity" and memory management.
I put simplicity in quotes because there is more I need to say about that word.
C is a simple language. This is a fact I agree with and appreciate. It is the reason for C's endurance. If someone posts a patch or submits a PR to a codebase written in C, it is easier to review than in any other mainstream language. There is no spooky action at a distance.
This allows code to evolve line by line, across many casual contributors who might not be steeped in its lore, making a drive-by bug fix or enhancement. Changes are local. Of course, it is possible to make global changes by redefining the behaviour of an often-used function, but such a change cannot happen accidentally, and is easy to spot when reviewing. Patches are just that — evolutions of lines.
In contrast, Haskell is not a simple language. The non-simplicity is at play both in the language itself, as evidenced by its intimidating syntax, and in the source code artifacts written in it. Changes are not localized; the entire Haskell program is one whole — a giant equation that will spit out the answer you want, unlike a C program, which is asked to plod there step by step.
There is an apocryphal story about Gauss in elementary school solving all the math problems that the teacher gave to the class in a jiffy, so the teacher tells him to sum the numbers up to a thousand to get him to stop pestering for more. The expectation was that Gauss would go through the numbers "imperatively", like C, summing them up. Instead, what Gauss did was discover the summation formula and solve it "declaratively" like Haskell, in one go, as an equation.
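To make the contrast concrete, here is a small illustrative sketch in C (the function names are mine, purely for illustration): the first function takes the imperative route and plods to the answer step by step; the second takes the declarative route, Gauss's closed-form equation n(n+1)/2.

```c
#include <stdio.h>

/* The "imperative" route: walk through the numbers one by one. */
static long sum_upto_loop(long n) {
    long sum = 0;
    for (long k = 1; k <= n; k++)
        sum += k;
    return sum;
}

/* The "declarative" route: Gauss's closed-form formula, one equation. */
static long sum_upto_formula(long n) {
    return n * (n + 1) / 2;
}

int main(void) {
    printf("%ld\n", sum_upto_loop(1000));    /* 500500 */
    printf("%ld\n", sum_upto_formula(1000)); /* 500500 */
    return 0;
}
```

Both print 500500; one arrives in a thousand steps, the other in one.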
This is the tradeoff between simplicity and abstraction. At a high level of abstraction, things solve themselves as if by magic. But not everyone is Gauss (I'm certainly not), and too high a level of abstraction just makes my head hurt.
A tradeoff means there is no one right or wrong answer. It depends on the circumstance. Personally, and for the type of applications I have worked on recently, I've found TypeScript to be a cosy promontory on Mount Abstraction; not too low, not too high. And after having lived here for a while, I don't feel that I'd like the view from any lower.
Enough on simplicity, let's visit the other dimension. Memory management.
The hearts of computers beat in the nanosecond range; one human heartbeat spans decades of CPU time. There's plenty of room at the bottom, as Feynman said.
Not only are the timescales alien, there are also huge differences between the speed of CPU execution and that of memory access. To paraphrase Norvig's latency numbers that a programmer should know: if we imagine a computer that executes one CPU instruction every second, it would take it minutes to read from RAM (a main-memory reference costs on the order of a hundred nanoseconds against a nanosecond-scale instruction — a hundred-odd seconds in our scaled world).
C is not fast because it runs "native instructions", which in our adjusted timescale are mere seconds; it is fast because an expert C programmer can eliminate entire day-equivalents from the program's runtime by optimizing memory access.
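To sketch what "optimizing memory access" means in practice, here is a classic illustration (hypothetical function names, assuming a typical cache hierarchy): both functions below compute the same sum over the same matrix and run the same kind of native instructions, but the traversal order decides how often the CPU must make that expensive trip to RAM.

```c
#include <stddef.h>

#define N 4096

/* Row-major traversal: adjacent iterations touch adjacent memory,
   so most reads are served from cache, not RAM. */
long sum_row_major(const long (*m)[N]) {
    long sum = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += m[i][j];
    return sum;
}

/* Column-major traversal: the same arithmetic, but each read jumps
   N * sizeof(long) bytes ahead, defeating the cache. On typical
   hardware this runs several times slower. */
long sum_col_major(const long (*m)[N]) {
    long sum = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum += m[i][j];
    return sum;
}
```

Nothing about the language changed between the two versions; only the order in which memory is touched.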
Given these facts, it is enticing to build a narrative that goes like this:

> C is fast because the programmer manages memory by hand. Go is a C v2 that swapped manual memory management for a garbage collector, and so swapped away the speed. Rust keeps the manual control but takes away the footguns. Therefore, to write fast native code, write Rust.

But such a narrative would be false in more ways than one.
There is some truth there — Go indeed is an attempt at a C v2, but one of reverence. It retains the same "line level" simplicity of C. The impact of changes is local only, sometimes tediously so, but the result is a codebase that behaves predictably, and evolves predictably.
The main falsehood in that strawman narrative is the presumption that one necessarily needs manual memory management to be fast. That is just not true.
Memory management was indeed the sore point, the reason Rust hadn't appealed to me earlier.
Pulling off a language with automated memory management that runs as fast as manually managed ones is not easy, but it can be done, and both Go and Haskell are proof.
There are two aspects of speed — practical, and absolute.
Practically speaking, Go is fast, or fast enough for any further speed not to be noticeable. We at Ente run Go on our servers, and it is ridiculous how little CPU and memory they use. Optimizing, say, SQL queries, or S3 object placement, by even a small delta will overshadow order of magnitude speed improvements in the Go code.
As another example, esbuild surprised the JavaScript ecosystem half a decade ago by demonstrating that tooling could be made orders of magnitude faster by writing it in Go, triggering a "rewrite everything in Go / Rust" frenzy that is still ongoing (TypeScript itself is being rewritten in Go!).
That's all right, one might say, but it wouldn't hurt to go even faster, right? The absolute speed aspect, that is.
Haskell proves that even in absolute terms, a smart compiler is all it takes. GHC is the closest thing I've seen to magic in programming language tooling. During one Advent of Code, I'd write a Haskell solution at what seemed like a very high level of abstraction, closer to category theory than to von Neumann machines, with no speed optimizations or other tuning. GHC would take that code, strip away all the layers of abstraction, and compile it down to a single static binary that ran in the same ballpark as the times the Rust people posted!
Lots of words. So what do we have:
|             | C   | Go  | TS  | Rust |
| ----------- | --- | --- | --- | ---- |
| Native      | Y   | Y   |     | Y    |
| Abstraction |     |     | Y   | Y    |
| Managed Mem |     | Y   | Y   |      |

One of the three has to give.
Since I want native code, there is a hole, and I can't think of a better alternative than Rust to fill it, so I am going to give it a try.
I've never written a line of Rust in my life. I have decent experience in all the other languages I talked about, so my beliefs, while possibly wrong, are founded in some empiricism. With Rust, it has all been hearsay, and I'm excited to see what the reality will look like.