As kryps pointed out on reddit, I believe at some point there was a change to improve compilation times by making more effective use of parallelism. So forcing the build to use a single thread produces more sobering results, but still a huge win:
    $ git clone https://github.com/BurntSushi/ripgrep
    $ cd ripgrep
    $ git checkout 0.8.0
    $ time cargo +1.20.0 build --release
    real 34.367  user 1:07.36  sys 1.568  maxmem 520 MB  faults 1575
    $ time cargo +1.67.0 build --release
    [... snip sooooo many warnings, lol ...]
    real 7.761  user 1:32.29  sys 4.489  maxmem 609 MB  faults 7503
(My CPU is an i9-12900K.)
    $ time cargo +1.20.0 build -j1 --release
    real 1:03.11  user 1:01.90  sys 1.156  maxmem 518 MB  faults 0
    $ time cargo +1.67.0 build -j1 --release
    real 46.112  user 44.259  sys 1.930  maxmem 344 MB  faults 0
These are from-scratch release builds, which probably matter less than incremental builds. But they still matter. This is just one barometer of many.
For example, Java has a perfectly serviceable TLS stack written entirely in a memory-safe language. You could try to make OpenSSL memory safe by rewriting it in Rust - which realistically means yet another fork not many people use - but you could also do the same thing by implementing the OpenSSL API on top of JSSE and Bouncy Castle. The GraalVM native image project lets you export Java symbols as C APIs and compile libraries to standalone native code, so this is technically feasible now.
There are also other approaches. GraalVM can run many C/C++ programs in a way that makes them automatically memory safe, by JIT compiling LLVM bitcode and replacing allocation/free calls with garbage-collected allocations. Pointer dereferences are also replaced with safe member accesses. It works as long as the C is fairly strictly standards-compliant and doesn't rely on undefined behavior. This functionality is unfortunately an enterprise feature, but the core LLVM execution engine is open source, so if you're contemplating effort on the scale of major upgrades to Rust, you could reimplement the memory safety aspect on top of the open source code. You can then compile the result down to a shared native library that doesn't rely on any external JVM.
Don't get me wrong, I'm not saying don't improve Rust compile times. Faster Rust compiles would be great. I'm just pointing out that, well, it's not the only memory safe language in the world, and actually using a GC isn't a major problem these days for many real world tasks that are still done with C.
I don't understand this. The vast majority (I would guess 95%+) of people using Rust have CPUs with AVX2 or NEON. Why is that a good reason? Why can't there be a fast path and slow path as a failover?
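A minimal sketch of the fast-path/slow-path idea (all names here are mine, not from the thread, and the "fast" function just relies on the compiler auto-vectorizing under AVX2): detect the CPU feature at runtime and fall back to a portable implementation when it's absent.

```rust
// Hypothetical runtime dispatch between a SIMD fast path and a portable
// fallback; this is the general pattern, not any specific library's code.

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_bytes_avx2(xs: &[u8]) -> u64 {
    // With AVX2 enabled, the compiler is free to vectorize this loop.
    xs.iter().map(|&b| b as u64).sum()
}

fn sum_bytes_portable(xs: &[u8]) -> u64 {
    // Slow path: works on every CPU.
    xs.iter().map(|&b| b as u64).sum()
}

pub fn sum_bytes(xs: &[u8]) -> u64 {
    #[cfg(target_arch = "x86_64")]
    if is_x86_feature_detected!("avx2") {
        // Safe: we just verified the CPU actually supports AVX2.
        return unsafe { sum_bytes_avx2(xs) };
    }
    sum_bytes_portable(xs)
}

fn main() {
    assert_eq!(sum_bytes(&[1, 2, 3, 4]), 10);
}
```

The check happens once per call here; real code would typically cache the dispatch decision, but the failover structure is the same.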
Without that you can't really do distributed and cached compilation 100% reliably.
I wish there were some efforts at dramatically different approaches like this because there’s all this work going into compilation but it’s unlikely to make the development cycle twice as fast in most cases.
I can see how someone would come to Rust, type `cargo run`, wait 3-5 minutes while cargo downloads all the dependencies and compiles them along with the main package, and then say, "well, that took a while, it kinda sucks". But if they change a few lines in the actual project and compile again, it would be near instant.
The fair comparison would be something akin to deleting your node or go modules and running a cold build. I am slightly suspicious - not in a deliberate-foul-play way, but more in a messy-semantics-and-ad-hoc-anecdotes way - that many of these compile-time discrepancies boil down less to "rustc is slow" and more to differences in how the cargo tooling handles dependencies: what it decides to include in the compile phase, where it stores caches and what that means for `clean`, etc., compared to similar package management tooling from other languages. But I could be wrong.
Package management, one of Rust’s biggest strengths, is one of its biggest weaknesses here. It’s so easy to pull in another crate to do almost anything you want. How many of them are well-written, optimized, trustworthy, etc.? My guess is, not that many. That leads to applications that use them being bloated and inefficient. Hopefully, as the ecosystem matures, people will pay better attention to this.
The extract-function tool is very buggy. As I spend a lot of time refactoring, maybe putting time into those tools would have a better ROI than so much work on making the compiler faster.
Or a mode where it compiles automatically every time you change a line? (With absolutely no optimization, like inlining, to make it fast.) Kind of like compiling just the new line from Rust to its ASM equivalent and adding that to the rest of the compiled code. Like a big fatjar type of way, if that makes sense.
If the concern is that I could change something in a crate, then couldn't a checksum be created on the first compilation and checked on future compilations? If it matches, the crate doesn't need to be recompiled.
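The idea can be sketched in a few lines (names and structure are hypothetical; Cargo's real fingerprinting is more involved, tracking mtimes, compiler flags, and feature sets, but the concept is the same): hash the source on the first build, store the hash, and skip recompilation when it matches.

```rust
// Hypothetical checksum-based rebuild check, not Cargo's actual code.
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Fingerprint a crate's source text.
fn checksum(source: &str) -> u64 {
    let mut h = DefaultHasher::new();
    source.hash(&mut h);
    h.finish()
}

struct BuildCache {
    // crate name -> checksum recorded at the last build
    fingerprints: HashMap<String, u64>,
}

impl BuildCache {
    /// Record the current checksum and report whether a rebuild is needed.
    fn needs_rebuild(&mut self, krate: &str, source: &str) -> bool {
        let sum = checksum(source);
        match self.fingerprints.insert(krate.to_string(), sum) {
            Some(old) if old == sum => false, // unchanged: reuse artifact
            _ => true,                        // new or changed: recompile
        }
    }
}

fn main() {
    let mut cache = BuildCache { fingerprints: HashMap::new() };
    assert!(cache.needs_rebuild("somecrate", "fn f() {}")); // first build
    assert!(!cache.needs_rebuild("somecrate", "fn f() {}")); // unchanged
    assert!(cache.needs_rebuild("somecrate", "fn f() { 1; }")); // edited
}
```

In practice this is roughly what Cargo's fingerprint files in `target/` do, which is why the earlier comments distinguish cold builds from incremental ones.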
The Rust dev team can't win!