pornel
Another build time improvement coming, especially for fresh CI builds, is a new registry protocol. Instead of git-cloning metadata for 100,000+ packages, it can download only the data for your dependencies.

https://blog.rust-lang.org/inside-rust/2023/01/30/cargo-spar...

burntsushi
I originally posted this on reddit[1], but figured I'd share this here. I checked out ripgrep 0.8.0 and compiled it with both Rust 1.20 (from ~5.5 years ago) and Rust 1.67 (just released):

    $ git clone https://github.com/BurntSushi/ripgrep
    $ cd ripgrep
    $ git checkout 0.8.0
    $ time cargo +1.20.0 build --release
    real    34.367
    user    1:07.36
    sys     1.568
    maxmem  520 MB
    faults  1575
    
    $ time cargo +1.67.0 build --release
    [... snip sooooo many warnings, lol ...]
    real    7.761
    user    1:32.29
    sys     4.489
    maxmem  609 MB
    faults  7503
As kryps pointed out on reddit, I believe at some point there was a change that improved compile times by making more effective use of parallelism. So forcing the build to use a single thread produces more sobering results, but still a huge win:

    $ time cargo +1.20.0 build -j1 --release
    real    1:03.11
    user    1:01.90
    sys     1.156
    maxmem  518 MB
    faults  0

    $ time cargo +1.67.0 build -j1 --release
    real    46.112
    user    44.259
    sys     1.930
    maxmem  344 MB
    faults  0
(My CPU is an i9-12900K.)

These are from-scratch release builds, which probably matter less than incremental builds. But they still matter. This is just one barometer of many.

[1]: https://old.reddit.com/r/rust/comments/10s5nkq/improving_rus...

fidgewidge
I wonder about the framing of the title here. Rust is great but realistically a lot of software with memory safety bugs doesn't need to be written in C in the first place.

For example Java has a perfectly serviceable TLS stack written entirely in a memory safe language. Although you could try to make OpenSSL memory safe by rewriting it in Rust - which realistically means yet another fork not many people use - you could also do the same thing by implementing the OpenSSL API on top of JSSE and Bouncy Castle. The GraalVM native image project allows you to export Java symbols as C APIs and to compile libraries to standalone native code, so this is technically feasible now.

There are also some other approaches. GraalVM can also run many C/C++ programs in a way that makes them automatically memory safe, by JIT-compiling LLVM bitcode and replacing allocation/free calls with garbage-collected allocations. Pointer dereferences are also replaced with safe member accesses. It works as long as the C is fairly strictly standards-compliant and doesn't rely on undefined behavior. This functionality is unfortunately an enterprise feature, but the core LLVM execution engine is open source, so if you're at the level of making major upgrades to Rust you could also reimplement the memory safety aspect on top of the open source code. Here again, you can compile the result down to a shared native library that doesn't rely on any external JVM.

Don't get me wrong, I'm not saying don't improve Rust compile times. Faster Rust compiles would be great. I'm just pointing out that, well, it's not the only memory safe language in the world, and actually using a GC isn't a major problem these days for many real world tasks that are still done with C.

bufo
"There are possible improvements still to be made on bigger buffers for example, where we could make better use of SIMD, but at the moment rustc still targets baseline x86-64 CPUs (SSE2) so that's a work item left for the future."

I don't understand this. The vast majority (I would guess 95%+) of people using Rust have CPUs with AVX2 or NEON. Why is that a good reason to hold back? Why can't there be a fast path, with the slow path as a fallback?
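For what it's worth, stable Rust already has the pieces for exactly that kind of split: compile the hot routine with AVX2 enabled and pick it at runtime with `is_x86_feature_detected!`. A rough sketch of the pattern (the `sum` functions are just placeholders for illustration, not anything rustc actually does):

    // Fast-path/slow-path dispatch via runtime CPU feature detection.
    // `sum_avx2`/`sum_baseline` are placeholder routines, not rustc internals.

    #[cfg(target_arch = "x86_64")]
    #[target_feature(enable = "avx2")]
    unsafe fn sum_avx2(xs: &[u64]) -> u64 {
        // Same code, but the compiler may auto-vectorize it with AVX2
        // enabled for this one function only.
        xs.iter().sum()
    }

    fn sum_baseline(xs: &[u64]) -> u64 {
        xs.iter().sum()
    }

    pub fn sum(xs: &[u64]) -> u64 {
        #[cfg(target_arch = "x86_64")]
        {
            if is_x86_feature_detected!("avx2") {
                // Sound: we just verified the CPU supports AVX2.
                return unsafe { sum_avx2(xs) };
            }
        }
        sum_baseline(xs)
    }

The cost is carrying two code paths plus the unsafe dispatch, which is presumably why the article leaves it as future work.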

IshKebab
I really wish there were some work on hermetic compilation of crates. Ideally crates would be able to opt in to (and eventually have to opt out of) a "pure" mode, which would mean they can't use `build.rs`, proc macros are fully sandboxed, there's no `env!()`, and so on.

Without that you can't really do distributed and cached compilation 100% reliably.
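To make it concrete, these are the kinds of impure inputs I mean. A contrived sketch (the `BUILD_USER` variable and `built_by` function are made up):

    // build.rs: free to read the environment, touch the filesystem, or run
    // arbitrary commands, so the output isn't a pure function of the sources.
    fn main() {
        let user = std::env::var("USER").unwrap_or_default();
        // Exposes the value to env!() in the crate's own source below.
        println!("cargo:rustc-env=BUILD_USER={}", user);
    }

    // src/lib.rs: env!() bakes a compile-time environment variable into the
    // artifact, so identical sources can produce different builds on
    // different machines, which is exactly what breaks caching.
    pub fn built_by() -> &'static str {
        env!("BUILD_USER")
    }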

gregwebs
Haskell is one of the few languages that can compile slower than Rust. But it has a REPL, GHCi, that can be used to fairly quickly reload code changes.

I wish there were some efforts at dramatically different approaches like this, because all this work going into compilation speed is unlikely to make the development cycle twice as fast in most cases.

dcow
When people complain about Rust compile times, are they complaining about cold/clean compiles or warm/cached compiles? I can never really tell, because people just gripe about "compile times".

I can see how someone would come to Rust, type `cargo run`, wait 3-5 minutes while cargo downloads all the dependencies and compiles them along with the main package, and then say, "well, that took a while, it kinda sucks". But if they change a few lines in the actual project and compile again, it would be near instant.

The fair comparison would be something akin to deleting your node or go modules and running a cold build. I am slightly suspicious, not in a deliberate-foul-play way but more in a messy-semantics and ad-hoc-anecdotes way, that many of these compile time discrepancies boil down to differences in how the cargo tooling handles dependencies, what it decides to include in the compile phase, and where it stores caches (and what that means for `clean`), compared to similar package management tooling from other languages, rather than to "rustc is slow". But I could be wrong.

errantmind
I write a lot of Rust code and, outside of performance optimization with release builds, I've had next to no issues with iterative compile times, even on fairly large projects. Honestly, it feels like a bit of a meme at this point. My CPU is also 5 years old (4 cores at 5 GHz), so it isn't like I have a super beefy setup either.

lumb63
I love to see work being done to improve Rust compile times. It’s one of the biggest barriers to adoption today, IMO.

Package management, one of Rust’s biggest strengths, is one of its biggest weaknesses here. It’s so easy to pull in another crate to do almost anything you want. How many of them are well-written, optimized, trustworthy, etc.? My guess is, not that many. That leads to applications that use them being bloated and inefficient. Hopefully, as the ecosystem matures, people will pay better attention to this.

xiphias2
I see a lot of work going into making the compiler faster (which looks hard at this point), but I wish I could at least make correct changes without needing to recompile the code.

The extract-function tool is very buggy. Since I spend a lot of time refactoring, maybe putting time into those tools would have a better ROI than putting so much work into making the compiler faster.
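For reference, by the extract-function tool I mean the IDE assist that turns a selected expression into a new function. A toy before/after (names invented for illustration):

    // Before: select `p * (1.0 + tax)` and run "Extract function".
    fn total(prices: &[f64], tax: f64) -> f64 {
        let mut sum = 0.0;
        for p in prices {
            sum += p * (1.0 + tax);
        }
        sum
    }

    // After: the tool has to infer the parameters and return type correctly,
    // which is where it tends to go wrong for me.
    fn total_refactored(prices: &[f64], tax: f64) -> f64 {
        let mut sum = 0.0;
        for p in prices {
            sum += with_tax(*p, tax);
        }
        sum
    }

    fn with_tax(price: f64, tax: f64) -> f64 {
        price * (1.0 + tax)
    }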

nazka
Maybe it's a dumb idea, but what about a mode where it doesn't check types and stuff? Then we could compile super fast when we're just tinkering around.

Or a mode where it compiles automatically every time you change a line? (With absolutely no optimization, like inlining, to make it fast.) Kind of like just compiling the new line from Rust to its ASM equivalent and adding that to the rest of the compiled code. Like a big fat-jar kind of thing, if that makes sense.

mcdonje
I don't know much about how the compiler works, so the answer here is probably that I should read a book, but can external crates from crates.io be precompiled? Or could my use of a part of an external crate be compiled once, so it doesn't need to be redone on future compilations?

If the concern is that I could change something in a crate, could a checksum be created on the first compilation and checked on future compilations, so that if it matches the crate doesn't need to be recompiled?

rurban
I don't understand these overhyping folks. Instead of fixing the thousands of memory safety bugs on their tracker, they'd rather ignore them and go for more compiler performance. Well, why not, it's insanely slow. But then I wouldn't dare mention their memory safety story, which would need a better compiler, not a faster one.

Bjartr
It's funny: every other post on HN about improvements to Rust that I've seen is chock full of comments to the effect of "I guess that feature is nice, but when will they improve the compile times?" And now many of the replies to this post are "Faster compiles are nice, but when will they improve/implement important features?"

The Rust dev team can't win!
