Programming language experts told Andrew Kelley, the creator of the Zig
    programming language, that having code which could run at compile time was a
    really dumb idea. But he went ahead and implemented it anyway. Years later,
    this has proven to be one of the killer features of Zig. In the Zig world, we
    call it comptime, from the keyword used to mark code required to run at compile
    time or variables to be known at compile time.
Which experts? "comptime" is just macro expansion from Scheme/Lisp, which has been around for a long time. Aren't C++ templates also "code that runs at compile time"?
I wrote a Sudoku solver (my go-to "learn a new language" project) in Zig and the compile-time features were extremely useful.

Entirely at compile time, I was able to generate lookup tables for "given this cell index, give me the indexes of the cells in its row, column, and box", and I simply looped over that for each board size I wanted to support, from the standard 9x9 all the way up to 64x64.
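For readers unfamiliar with the pattern, here is a minimal sketch of such a comptime-generated lookup table. The names and layout are my own invention, not the commenter's actual code; it only builds the row portion, since columns and boxes work the same way.

```zig
const std = @import("std");

// Build, at compile time, a table mapping each cell index to the
// indexes of all cells in its row.
fn rowPeers(comptime size: usize) [size * size][size]usize {
    @setEvalBranchQuota(100_000); // the nested loop exceeds the default quota
    var table: [size * size][size]usize = undefined;
    for (0..size * size) |cell| {
        const row = cell / size;
        for (0..size) |i| {
            table[cell][i] = row * size + i;
        }
    }
    return table;
}

// Evaluated entirely at compile time; the table is baked into the binary.
const peers9 = rowPeers(9);

test "cell 0 shares a row with cells 0..8" {
    try std.testing.expectEqual(@as(usize, 8), peers9[0][8]);
}
```

The same `rowPeers(64)` call would stamp out the 64x64 variant from identical source.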

Another useful feature was support for arbitrary-width ints [0] and packed structs [1]. For a 9x9 board, a cell can be represented with a 9-bit bitfield for possible values, a u4 for the actual value, and a single bit indicating whether the value is known. Zig will happily pack that into a struct that occupies only 16 bits, and handle all the bit shifting and masking for you when you access the struct fields.

This meant I was able to represent the state of a 9x9 board in just 162 bytes: a flat array of those packed structs. And the exact same code could operate on the ~28kb of state needed for 64x64.
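A sketch of the cell layout described above, with my own guessed field names (the 2 padding bits round the 14 used bits up to 16, so 81 cells come out to the 162 bytes mentioned):

```zig
const std = @import("std");

// 9 candidate bits + 4-bit value + 1-bit known flag + 2 padding bits = 16 bits.
const Cell = packed struct {
    candidates: u9 = 0x1FF, // bitmask of still-possible values 1-9
    value: u4 = 0,          // solved value, meaningful only when `known` is set
    known: bool = false,
    _pad: u2 = 0,
};

test "81 cells fit in 162 bytes" {
    try std.testing.expectEqual(16, @bitSizeOf(Cell));
    try std.testing.expectEqual(162, @sizeOf([81]Cell));
}
```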



- dealing with C level abstractions in a very ergonomic way

- ability to import and use C libraries, no FFI or custom bindings required

- clear & logical distinctions between different pointer types and arrays

- allocators as a first-class concept. This makes Zig really safe because you can pass in test allocators that will report leaks, etc.
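To illustrate the test-allocator point, a minimal sketch (`makeGreeting` is a made-up function): `std.testing.allocator` fails the test if anything allocated during it is never freed.

```zig
const std = @import("std");

fn makeGreeting(allocator: std.mem.Allocator) ![]u8 {
    return std.fmt.allocPrint(allocator, "hello, {s}", .{"zig"});
}

test "no leaks" {
    const msg = try makeGreeting(std.testing.allocator);
    // Without this free, `zig test` reports the leak and fails.
    defer std.testing.allocator.free(msg);
    try std.testing.expectEqualStrings("hello, zig", msg);
}
```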

If you enjoy hot takes and ancient memes, I made a talk about zig a couple of months ago where I lay out why I think it's The Chosen One to succeed C.

"In which people who have never used a language with compile-time expressions try Zig and think it's novel"

Not knocking Zig, I think it's swell, but it was far from the first language with this feature. D comes to mind, and C++ has it now with "constexpr" and "consteval".

> Programming language experts told Andrew Kelley, the creator of the Zig programming language, that having code which could run at compile time was a really dumb idea.

Really? I can't believe that! Running code at compile time is as old as Lisp! And it is present in some form or other in some other popular programming languages too. Like constexpr and templates in C++.

Zig's tutorials, learning materials, and onboarding experience need a lot of work. Compare with something like Kotlin: its introductory material is so well designed that one could start doing useful things with Kotlin within a few hours.

Also, I find Zig's choice of abbreviated keywords rather cryptic; using 'fn' instead of 'function' only hurts readability, I think.

Here is at least one thing that Andrew said:

> There are a lot of people in this thread saying that I said stuff I didn't say. It's maddening.


Compile-time code execution is implemented in a really neat way in Zig, enabling anything from generics to metaprogramming.
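A minimal sketch of what such a generic function looks like, modelled on the `maximum` example mentioned elsewhere in the thread (not the article's exact code). Each distinct argument type gets its own compiled variant:

```zig
const std = @import("std");

// `anytype` defers type checking to each call site; the compiler
// monomorphizes a separate `maximum` for every concrete argument type.
fn maximum(a: anytype, b: @TypeOf(a)) @TypeOf(a) {
    return if (a > b) a else b;
}

test "one source, many instantiations" {
    try std.testing.expectEqual(@as(i32, 7), maximum(@as(i32, 3), 7));
    try std.testing.expectEqual(@as(f64, 2.5), maximum(@as(f64, 2.5), 1.0));
}
```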
Saying your product was first to do X, when by all measures it wasn't, is the oldest play in the marketing book.
The whole "Zig doesn't have warnings" thing didn't go over well last month.

I bought into the focus on the build toolchain, until I found out my old Mac wasn't supported.

I'd be perfectly happy if the answer to TFA's question was "nothing". Zig has the minimalism of something like C89 or Lua. It has a syntax like C. It has compile time execution (as per TFA) like all sorts of languages mentioned in the comments. Etc. There isn't really anything particularly unique about it. Who cares? It feels to me like zig is an example of something that is greater than the sum of its parts.

Really the only thing I don't think I've ever seen before is how zig passes around memory allocators. And that's probably because I'm not a systems programmer by trade, so I'm less familiar with that sort of thing.

> Zig will compile different variants of maximum for each case, where maximum is called with a different set of argument types

How does Zig handle the monomorphization problem for generics in library APIs? In other words, given the `maximum` function from the blog post, if I want to distribute a binary but make that function available to clients of my binary to call with different types, what does Zig do?

Its lacklustre Unicode string support. Pretty unique for a modern language!
The "code run at compile time" feature has been around for ages.

Template Haskell is itself written in Haskell, so the "compile-time and run-time languages are the same" feature is not unique either.

Terra has had this for a long time. Anything you wrote in Terra could be invoked at compile time just the same using LLVM's JIT stuff. Made writing game tooling a breeze.
I think Zig is pretty cool. I have played with it a bit. My main complaint is that the toolchain is pretty hard to get working right on Windows, though it's a breeze on Linux. The only reason I care about Windows is that Zig looks like a great language for game development, since it's fast, pretty easy to write, and interops with C/C++ pretty easily.
A sufficiently expressive (and safe) type system is Turing complete, and many languages have that. They don’t all feel ergonomic, but Haskell and Rust are pretty good. If you’re into it, you should check out Idris (lang) and Idris2. Both very cool languages that support dependent types (types that depend on values).
It's the only way to cross-compile C and C++ code without losing hair follicles
So how are runaway loops at compile time avoided? Is there a timeout in the compiler, or a limit of N steps/instructions before bailing out?
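For what it's worth, Zig's documented mechanism is a branch quota rather than a timeout: comptime evaluation aborts with a compile error after a fixed number of backward branches (1000 by default), and code can raise the limit explicitly with `@setEvalBranchQuota`:

```zig
// This block only compiles because the quota is raised above the default
// of 1000 backward branches; without the call, the compiler stops with
// an "evaluation exceeded 1000 backwards branches" error.
comptime {
    @setEvalBranchQuota(10_000);
    var i: usize = 0;
    while (i < 5000) : (i += 1) {}
    if (i != 5000) unreachable;
}
```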
I can't speak to the quality of Zig's compile-time execution, other than to note that the article takes liberties with its use of the word "unique". As other commenters have observed, many languages have the feature. Instead I'm going to talk about a bunch of cool and weird compile-time code implemented by people much smarter than I.

If anyone reading this is interested in compile-time computation, you owe it to yourself to read Paul Graham (of HN)'s own work on compile time programming, "On Lisp"[0], or to take compile-time ideas to the next level, Doug Hoyte's "Let Over Lambda"[1] (LOL). Lisp languages have a long and interesting history of compile-time computation via their various macro systems, and given the first-class inclusion in most Lisps, a greater variety of ideas have been explored regarding compile-time computation.

A few interesting examples:

LOL's "Pandoric macro"[2] lets you monkey-patch closure values. It's absolutely bonkers and would almost certainly be pathological to introduce to a codebase, but it's an example of pushing the boundaries of what's possible.

Common Lisp's object system, implemented via macros[3]. To be honest, the entire Common Lisp language is a masterclass when it comes to learning about macros (I can't avoid mentioning Scheme's hygienic macro system and the Lisp-1/Lisp-2 distinction.)

A special non-Lisp shout-out goes to Rust's Diesel library for embedding SQL DDL queries into macros[4] which is not something I've personally seen before.

Clojure has a few interesting (and practical) macro libs, particularly core.async, which is an implementation of CSP (similar to Golang's channels AFAIK), it embeds perfectly into the existing language and extends its capabilities even though it's merely a library. Another interesting lib which comes to mind is Meander[5], which uses term-rewriting to provide transparent data transformations. Think declaratively interacting with data structures at compile time, and the library figuring out the best way of turning it into imperative value manipulation code.

[0] [1] [2] [3] [4] [5]