1098 Commits

Author SHA1 Message Date
bors 30837cb66d Auto merge of #155550 - zetanumbers:cache_insert_unique, r=oli-obk
Replace `ShardedHashMap` method `insert` with debug-checked `insert_unique`

Currently, every use of `ShardedHashMap::insert` checks that it did not evict an old value, i.e. that the key was actually unique. I haven't found any issue where that condition actually failed, so I propose replacing `insert` with `ShardedHashMap::insert_unique`, which only performs the check when `debug_assertions` are enabled. This might improve performance.

r? @petrochenkov
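A minimal sketch of the pattern described above, assuming a simplified single-shard wrapper (`SimpleMap` is hypothetical, not the real `ShardedHashMap`): the uniqueness check compiles away in release builds.

```rust
use std::collections::HashMap;

// Hypothetical simplified sketch (not the real ShardedHashMap):
// insert_unique asserts key uniqueness only when debug assertions are
// enabled, so release builds skip the check entirely.
struct SimpleMap<K, V> {
    inner: HashMap<K, V>,
}

impl<K: std::hash::Hash + Eq, V> SimpleMap<K, V> {
    fn new() -> Self {
        SimpleMap { inner: HashMap::new() }
    }

    // Caller promises the key is not already present.
    fn insert_unique(&mut self, key: K, value: V) {
        let evicted = self.inner.insert(key, value);
        // Only checked in debug builds, mirroring `debug_assertions` gating.
        debug_assert!(evicted.is_none(), "insert_unique evicted an existing value");
    }

    fn get(&self, key: &K) -> Option<&V> {
        self.inner.get(key)
    }
}
```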
2026-04-22 22:28:22 +00:00
Daria Sukhonina fa4d2ad671 Clarify behavior of ShardedHashMap::insert_unique 2026-04-22 12:39:19 +03:00
Nicholas Nethercote 21cd762cd1 Rewrite FlatMapInPlace.
Replace the hacky macro with a generic function and a new
`FlatMapInPlaceVec` trait. More verbose but more readable and typical.

LLM disclosure: I asked Claude Code to critique this file and it
suggested the generic function + trait idea. I implemented the idea
entirely by hand.
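The "generic function + trait" shape might look roughly like the following sketch. The trait and method names mirror the commit, but the body here is a simple non-in-place reference implementation that only shows the API shape, not rustc's allocation-reusing version.

```rust
// Hypothetical sketch: the trait abstracts over vector-like containers,
// and a single generic implementation does the flat-map work.
trait FlatMapInPlaceVec<T> {
    fn flat_map_in_place<F, I>(&mut self, f: F)
    where
        F: FnMut(T) -> I,
        I: IntoIterator<Item = T>;
}

impl<T> FlatMapInPlaceVec<T> for Vec<T> {
    fn flat_map_in_place<F, I>(&mut self, f: F)
    where
        F: FnMut(T) -> I,
        I: IntoIterator<Item = T>,
    {
        // Reference implementation: drain and rebuild. The real version
        // reuses the existing allocation in place.
        let drained: Vec<T> = self.drain(..).collect();
        self.extend(drained.into_iter().flat_map(f));
    }
}
```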
2026-04-21 15:52:55 +10:00
Daria Sukhonina 93d0186957 Fix typo 2026-04-20 13:51:01 +03:00
Daria Sukhonina 72f25eea33 Replace ShardedHashMap method insert with debug-checked insert_unique 2026-04-20 13:22:22 +03:00
Stuart Cook 6951f83914 Rollup merge of #155335 - Mark-Simulacrum:bootstrap-bump, r=jieyouxu
Bump bootstrap to 1.96 beta

See https://forge.rust-lang.org/release/process.html#default-branch-bootstrap-update-tuesday

I think this will wind up needing another PR in a week or so, when we pick up `assert_matches` getting destabilized in beta, but that can be split into its own PR.
2026-04-17 16:17:52 +10:00
Mark Rousskov a64c420c70 Bump stage0 to 1.96 beta 2026-04-16 19:30:56 -04:00
bors 32e4c0ecbd Auto merge of #155256 - cuviper:hashbrown-0.17, r=Mark-Simulacrum
compiler: update hashbrown to 0.17

See library's rust-lang/rust#155154 for the bug fixes this brings.
This PR also updates `indexmap` in the compiler as a direct dependent.
2026-04-15 11:01:30 +00:00
Josh Stone 6e618bea47 compiler: update hashbrown to 0.17
See library's rust-lang/rust#155154 for the bug fixes this brings.
This PR also updates `indexmap` in the compiler as a direct dependent.
2026-04-13 11:11:16 -07:00
malezjaa 39f7cdb8e6 update thin-vec 2026-04-08 21:09:07 +02:00
Nicholas Nethercote ff1795fe49 Remove Clone impl for StableHashingContext.
`HashStable::hash_stable` takes a `&mut Hcx`. In contrast,
`ToStableHashKey::to_stable_hash_key` takes a `&Hcx`. But there are
some places where the latter calls the former, and due to the mismatch a
`clone` call is required to get a mutable `StableHashingContext`.

This commit changes `to_stable_hash_key` to instead take a `&mut Hcx`.
This eliminates the mismatch, the need for the clones, and the need for
the `Clone` impls.
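A sketch of the signature change (names simplified; `Hcx` stands in for `StableHashingContext`): once both traits take `&mut Hcx`, the borrows line up and no clone is needed.

```rust
// Hypothetical stand-in for StableHashingContext.
struct Hcx {
    calls: u32, // stand-in for real hashing state
}

trait HashStable {
    fn hash_stable(&self, hcx: &mut Hcx);
}

trait ToStableHashKey {
    type KeyType;
    // Previously took `&Hcx`; now `&mut Hcx`, matching `hash_stable`.
    fn to_stable_hash_key(&self, hcx: &mut Hcx) -> Self::KeyType;
}

impl HashStable for u32 {
    fn hash_stable(&self, hcx: &mut Hcx) {
        hcx.calls += 1;
    }
}

impl ToStableHashKey for u32 {
    type KeyType = u32;
    fn to_stable_hash_key(&self, hcx: &mut Hcx) -> u32 {
        // No `hcx.clone()` needed: the mutable borrow can be passed along.
        self.hash_stable(hcx);
        *self
    }
}
```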
2026-04-03 15:34:33 +11:00
bors 009a6c1e8b Auto merge of #154308 - ShoyuVanilla:undo-fudge-iv, r=jieyouxu
Revert #151380 and #153869

cc https://rust-lang.zulipchat.com/#narrow/channel/474880-t-compiler.2Fbackports/topic/.23153869.3A.20beta-nominated/with/581306395

r? ghost
2026-04-02 00:09:59 +00:00
bors 48cc71ee88 Auto merge of #154638 - JonathanBrouwer:rollup-oLBZ6Tr, r=JonathanBrouwer
Rollup of 4 pull requests

Successful merges:

 - rust-lang/rust#150752 (Update libc to v0.2.183)
 - rust-lang/rust#152432 (add rustc option -Zpacked-stack)
 - rust-lang/rust#154634 (Use `Hcx`/`hcx` consistently for `StableHashingContext`.)
 - rust-lang/rust#154635 (./x run miri: default to edition 2021)
2026-03-31 20:16:05 +00:00
Jonathan Brouwer 8427445bde Rollup merge of #154419 - zetanumbers:take-first-group, r=nnethercote
Take first task group for further execution

Continuing from https://github.com/rust-lang/rust/pull/153768#discussion_r2938828363.

I thought that keeping the first group of tasks for immediate execution, instead of pushing it onto rayon's local task queue and immediately popping it in `par_slice`, would avoid excessive work stealing that could block the original thread. So I've implemented this change.

8 threads benchmarks:

| Benchmark | baseline~~9 Time | new~take-first-group~1 Time | % |
|---|---:|---:|---:|
| hyper:check | 0.1110s | 0.1086s | 💚 -2.13% |
| hyper:check:initial | 0.1314s | 0.1298s | 💚 -1.23% |
| hyper:check:unchanged | 0.0771s | 0.0755s | 💚 -2.14% |
| clap:check | 0.3787s | 0.3757s | -0.80% |
| clap:check:initial | 0.4680s | 0.4564s | 💚 -2.48% |
| clap:check:unchanged | 0.2337s | 0.2301s | 💚 -1.52% |
| syn:check | 0.4321s | 0.4265s | 💚 -1.31% |
| syn:check:initial | 0.5586s | 0.5401s | 💚 -3.31% |
| syn:check:unchanged | 0.3434s | 0.3429s | -0.14% |
| regex:check | 0.2755s | 0.2661s | 💚 -3.40% |
| regex:check:initial | 0.3350s | 0.3347s | -0.11% |
| regex:check:unchanged | 0.1851s | 0.1832s | 💚 -1.01% |
| Total | 3.5296s | 3.4695s | 💚 -1.70% |
| Summary | 1.0000s | 0.9837s | 💚 -1.63% |
2026-03-31 13:58:38 +02:00
Nicholas Nethercote ccc3c01162 Use Hcx/hcx consistently for StableHashingContext.
The `HashStable` and `ToStableHashKey` traits both have a type parameter
that is sometimes called `CTX` and sometimes called `HCX`. (In practice
this type parameter is always instantiated as `StableHashingContext`.)
Similarly, variables with these types are sometimes called `ctx` and
sometimes called `hcx`. This inconsistency has bugged me for some time.

The `HCX`/`hcx` form is more informative (the `H`/`h` indicates what
type of context it is) and it matches other cases like `tcx`, `dcx`,
`icx`.

Also, RFC 430 says that type parameters should have names that are
"concise UpperCamelCase, usually single uppercase letter: T". In this
case `H` feels insufficient, and `Hcx` feels better.

Therefore, this commit changes the code to use `Hcx`/`hcx` everywhere.
2026-03-31 20:16:57 +11:00
Ralf Jung d289cc4a38 Rollup merge of #154029 - xtqqczze:clear, r=joboet
Replace `truncate(0)` with `clear()`
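The two calls are behaviorally equivalent for `Vec` (both drop all elements and keep the allocation); `clear()` simply states the intent directly. A small demonstration (the `demo` helper is illustrative):

```rust
// truncate(0) and clear() both empty the Vec without freeing its buffer.
fn demo() -> (usize, usize, bool) {
    let mut a = vec![1, 2, 3];
    let mut b = vec![1, 2, 3];
    a.truncate(0); // old spelling
    b.clear();     // clearer spelling, same behavior
    (a.len(), b.len(), b.capacity() >= 3) // allocation retained
}
```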
2026-03-28 13:15:50 +01:00
Daria Sukhonina 576a7277cb Take first task group for further execution 2026-03-27 13:38:31 +03:00
Pavel Grigorenko ba1e06a722 core::mem::Alignment: rename as_nonzero to as_nonzero_usize 2026-03-25 16:18:33 +03:00
Pavel Grigorenko f8d658ab81 core: move Alignment from ptr to mem 2026-03-25 16:18:33 +03:00
Jonathan Brouwer 65097900fd Rollup merge of #152710 - Zalathar:unalign-fingerprint, r=fmease
Unalign `PackedFingerprint` on all hosts, not just x86 and x86-64

Back in https://github.com/rust-lang/rust/pull/78646, `DepNode` was modified to store an unaligned `PackedFingerprint` instead of an 8-byte-aligned `Fingerprint`. That reduced the size of DepNode from 24 bytes to 17 bytes (nowadays 18 bytes), resulting in considerable memory savings in incremental builds.

See https://github.com/rust-lang/rust/pull/152695#issuecomment-3907091509 for a benchmark demonstrating the impact of *removing* that optimization.

At the time (and today), the unaligning was only performed on x86 and x86-64 hosts, because those CPUs are known to generally have low overhead for unaligned memory accesses. Hosts with other CPU architectures would continue to use an 8-byte-aligned fingerprint and a 24-byte DepNode.

Given the subsequent rise of aarch64 (especially on macOS) and other architectures, it's a shame that some commonly-used builds of rustc don't get those memory-size benefits, based on a decision made several years ago under different circumstances.

We don't have benchmarks to show the actual effect of unaligning DepNode fingerprints on various non-x86 hosts, but it seems very likely to be a good idea on Apple chips, and I have no particular reason to think that it will be catastrophically bad on other hosts. And we don't typically perform this kind of speculative pessimization in other parts of the compiler.
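The size arithmetic above can be checked with a simplified sketch (names mirror the real types but the definitions here are illustrative): packing the 16-byte fingerprint drops its 8-byte alignment, so pairing it with a 2-byte kind tag yields an 18-byte node instead of 24.

```rust
// 16 bytes, align 8: pairing this with a u16 kind costs padding (24 bytes).
#[allow(dead_code)]
#[derive(Clone, Copy)]
struct Fingerprint(u64, u64);

// Same 16 bytes, but align 1: no padding needed next to a u16 (18 bytes).
#[allow(dead_code)]
#[derive(Clone, Copy)]
#[repr(packed)]
struct PackedFingerprint(Fingerprint);

#[allow(dead_code)]
struct DepNodeAligned {
    kind: u16,
    fingerprint: Fingerprint,
}

#[allow(dead_code)]
struct DepNodePacked {
    kind: u16,
    fingerprint: PackedFingerprint,
}
```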
2026-03-24 18:14:14 +01:00
Shoyu Vanilla d5bbeb978e Revert "Auto merge of #151380 - ShoyuVanilla:shallow-resolve-to-root-var, r=lcnr"
This reverts commit 75b9d89c68, reversing
changes made to 7bee525095.
2026-03-24 22:31:37 +09:00
Zalathar fd3b22a836 Use a safe BucketIndex abstraction in VecCache
The current code for indexing into bucket arrays is quite tricky and unsafe,
partly because it has to keep manually assuring the compiler that a bucket
index is always less than 21.

By encapsulating that knowledge in a 21-value enum, we can make the code
clearer and safer, without giving up performance.

Having a dedicated `BucketIndex` type could also help with further cleanups of
`VecCache` indexing.
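The idea can be sketched with a smaller example, using 4 buckets instead of 21 (all names here are illustrative): because the enum has exactly one variant per bucket, indexing the bucket array through it can never go out of bounds, so no `unsafe` bound-dodging is needed.

```rust
// One variant per bucket: an in-range index by construction.
#[derive(Clone, Copy, PartialEq, Debug)]
enum BucketIndex {
    B0,
    B1,
    B2,
    B3,
}

const NUM_BUCKETS: usize = 4;

impl BucketIndex {
    const ALL: [BucketIndex; NUM_BUCKETS] =
        [BucketIndex::B0, BucketIndex::B1, BucketIndex::B2, BucketIndex::B3];

    // The only place a range check happens; everything downstream is safe.
    fn from_usize(i: usize) -> Option<BucketIndex> {
        Self::ALL.get(i).copied()
    }

    fn as_usize(self) -> usize {
        self as usize
    }
}

struct Buckets<T> {
    buckets: [Option<T>; NUM_BUCKETS],
}

impl<T> Buckets<T> {
    // Safe indexing: the type guarantees `idx` is in range, so the compiler
    // needs no manual reassurance.
    fn get(&self, idx: BucketIndex) -> Option<&T> {
        self.buckets[idx.as_usize()].as_ref()
    }
}
```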
2026-03-24 17:52:09 +11:00
Daria Sukhonina 50c564e679 Make par_slice consistent with single-threaded execution 2026-03-21 08:23:42 +03:00
Jonathan Brouwer 8a30071e7c Rollup merge of #153776 - zetanumbers:curry-howard-dyn-thread-safe, r=JonathanBrouwer
Remove redundant `is_dyn_thread_safe` checks

Refactor uses of `FromDyn` to reduce the number of redundant `is_dyn_thread_safe` checks, replacing `FromDyn::from` with `check_dyn_thread_safe` in tandem with the existing `FromDyn::derive`, so that users avoid this redundancy in the future.

The PR is split into multiple commits for easier review.
2026-03-20 13:24:24 +01:00
xtqqczze b9cd5923ae Replace truncate(0) with clear() 2026-03-19 13:55:02 +00:00
Zalathar fb850aebcd Remove unused types UnusedGenericParams and FiniteBitSet
These types have been unused since polymorphization was removed in
<https://github.com/rust-lang/rust/pull/133883>.
2026-03-18 10:36:19 +11:00
Daria Sukhonina 3973d9f558 Remove FromDyn::from and add check_dyn_thread_safe 2026-03-17 14:46:12 +03:00
Daria Sukhonina 15f42e2613 Move FromDyn closer to is_dyn_thread_safe 2026-03-17 14:33:31 +03:00
Stuart Cook 76b769c8ad Rollup merge of #152621 - petrochenkov:graph2, r=Zalathar
LinkedGraph: support adding nodes and edges in arbitrary order

If an edge uses a not-yet-known node, we just leave the node's data empty; that data can be added later.

Use this support to avoid skipping edges in RetainedDepGraph.

This is a continuation of https://github.com/rust-lang/rust/pull/152590; that PR just fixes the ICE, while this PR also preserves all the edges in debug dumps.
This is also a minimized version of https://github.com/rust-lang/rust/pull/151821 with fewer data-structure hacks.
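A minimal sketch of the "arbitrary order" idea (illustrative, not the real `LinkedGraph` API): adding an edge that mentions a not-yet-known node just grows the node table with empty placeholder data, which a later `add_node` call can fill in.

```rust
#[derive(Default)]
struct Graph {
    node_data: Vec<Option<String>>, // None = placeholder, data not yet known
    edges: Vec<(usize, usize)>,
}

impl Graph {
    // Grow the node table so index `n` exists, with empty data.
    fn ensure_node(&mut self, n: usize) {
        if n >= self.node_data.len() {
            self.node_data.resize(n + 1, None);
        }
    }

    // Node data may arrive after edges that reference the node.
    fn add_node(&mut self, n: usize, data: &str) {
        self.ensure_node(n);
        self.node_data[n] = Some(data.to_string());
    }

    // Edges can be added in any order; unknown endpoints get placeholders.
    fn add_edge(&mut self, from: usize, to: usize) {
        self.ensure_node(from);
        self.ensure_node(to);
        self.edges.push((from, to));
    }
}
```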
2026-03-14 17:30:22 +11:00
John Kåre Alsaker bee28ef7b2 Remove MTLock 2026-03-12 14:37:51 +01:00
Jonathan Brouwer 707bf0961a Rollup merge of #152569 - oli-obk:rustc_layout_scalar_valid_range_end_end, r=davidtwco
Stop using rustc_layout_scalar_valid_range_* in rustc


Another step towards rust-lang/rust#135996

Required some manual impls, but we already do many manual impls for the newtype_index types, so it's not really a new maintenance burden.
2026-03-11 22:05:41 +01:00
Zalathar 985b41d387 Remove the rustc_data_structures::assert_matches! re-exports 2026-03-08 22:02:23 +11:00
Josh Stone 32bae1353e Update cfg(bootstrap) 2026-03-07 10:42:02 -08:00
Oli Scherer ff7e604154 Stop using rustc_layout_scalar_valid_range_* in rustc 2026-03-07 15:36:56 +00:00
Zalathar 743d442845 Rename QueryCache::iter to for_each
Methods that perform internal iteration are typically named `for_each`.
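The naming convention can be illustrated with a hypothetical simplified cache: a method that drives the loop itself (internal iteration) is conventionally called `for_each`, whereas `iter` suggests returning an `Iterator`.

```rust
// Hypothetical simplified cache; the real QueryCache is more involved.
struct Cache {
    entries: Vec<(u32, u32)>,
}

impl Cache {
    // Internal iteration: the cache walks its own entries and invokes `f`,
    // so callers never see an iterator type (or the cache's internals).
    fn for_each(&self, mut f: impl FnMut(u32, u32)) {
        for &(k, v) in &self.entries {
            f(k, v);
        }
    }
}
```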
2026-03-05 23:39:59 +11:00
Jonathan Brouwer ef4cff2ea3 Rollup merge of #153015 - joboet:atomic_alias_generic, r=jhpratt
core: make atomic primitives type aliases of `Atomic<T>`

Tracking issue: https://github.com/rust-lang/rust/issues/130539

This makes `AtomicI32` and friends type aliases of `Atomic<T>` by encoding their alignment requirements via an internal `Storage` associated type. This is also used to encode that `AtomicBool` stores a `u8` internally.

Modulo the `Send`/`Sync` implementations, this PR does not move any trait implementations, methods or associated functions – I'll leave that for another PR.
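The shape described above might be sketched as follows (a hypothetical simplification: the real std implementation also encodes alignment and the actual atomic operations): a per-primitive trait carries a `Storage` associated type, `Atomic<T>` wraps that storage, and the familiar names become plain type aliases.

```rust
use std::cell::UnsafeCell;

trait AtomicPrimitive {
    type Storage;
}

impl AtomicPrimitive for i32 {
    type Storage = i32;
}

// bool is stored as a u8 internally, as the commit message notes.
impl AtomicPrimitive for bool {
    type Storage = u8;
}

#[allow(dead_code)]
struct Atomic<T: AtomicPrimitive> {
    storage: UnsafeCell<T::Storage>,
}

// The familiar names become aliases of the one generic type.
type AtomicI32 = Atomic<i32>;
type AtomicBool = Atomic<bool>;
```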
2026-03-02 20:10:34 +01:00
joboet 4b86b1a11a compiler: manually implement DynSend for atomic primitives
The manual `DynSend` implementation for `AtomicPtr` blocks the
auto-implementation for other atomic primitives since they forward to the same
`Atomic<T>` type now. This breakage cannot occur in user code as it depends on
`DynSend` being a custom auto-trait.
2026-03-02 00:23:23 +01:00
Josh Stone 696a7a1f04 Simplify AppendOnlyVec iterators 2026-02-25 08:09:43 -08:00
Stuart Cook 8c96521c99 Rollup merge of #152987 - Zoxc:hashstable-derives, r=JonathanBrouwer
Use `HashStable` derive in more places

This applies `HashStable` derive in a couple more places. Also `stable_hasher` is declared for `HashStable_NoContext`.
2026-02-23 13:32:01 +11:00
John Kåre Alsaker 912c7d31d2 Use HashStable derive in more places 2026-02-22 21:01:27 +01:00
Folkert de Vries 14d29f9ae2 Stabilize cfg_select 2026-02-22 19:59:25 +01:00
Vadim Petrochenkov 96697d41ed LinkedGraph: support adding nodes and edges in arbitrary order
If an edge uses a not-yet-known node, we just leave the node's data empty; that data can be added later.

Use this support to avoid skipping edges in DepGraphQuery
2026-02-22 10:53:56 +03:00
Vadim Petrochenkov 71d656edde LinkedGraph: Use IndexVec instead of Vec in LinkedGraph nodes 2026-02-22 10:46:40 +03:00
Daria Sukhonina 33a8dc10cf Fix wrong par_slice implementation 2026-02-16 17:49:42 +03:00
Zalathar 65443dc707 Unalign PackedFingerprint on all hosts, not just x86 and x86-64 2026-02-16 22:10:26 +11:00
bors 4c37f6d78c Auto merge of #152375 - Zoxc:rayon-scope-loops, r=jieyouxu,lqd
Use `scope` for `par_slice` instead of `join`

This uses `scope` instead of nested `join`s in `par_slice` so that each group of items is independent and does not end up blocking on another.
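The scheduling difference can be sketched with `std::thread::scope` as a stand-in for rayon's `scope` (an assumption; rustc uses rayon, and `par_slice_sum` is a hypothetical helper): each chunk becomes its own independent task rather than being paired off with nested joins where one half can end up waiting on the other.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

fn par_slice_sum(items: &[usize], chunk_size: usize) -> usize {
    let total = AtomicUsize::new(0);
    thread::scope(|s| {
        let total = &total;
        for chunk in items.chunks(chunk_size) {
            // Independent tasks: no chunk blocks on a sibling finishing.
            s.spawn(move || {
                let sum: usize = chunk.iter().sum();
                total.fetch_add(sum, Ordering::Relaxed);
            });
        }
    }); // scope joins all spawned tasks here
    total.into_inner()
}
```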
2026-02-15 09:55:40 +00:00
bors 75b9d89c68 Auto merge of #151380 - ShoyuVanilla:shallow-resolve-to-root-var, r=lcnr
Shallow resolve ty and const vars to their root vars

Continuation of https://github.com/rust-lang/rust/pull/147193
2026-02-15 03:04:28 +00:00
Jonathan Brouwer 65d982abd8 Rollup merge of #152469 - mu001999-contrib:cleanup/unused-features, r=nadrieril,jdonszelmann
Remove unused features

Detected by https://github.com/rust-lang/rust/pull/152164.

~~Only allow `unused_features` if there are complex platform-specific features enabled.~~
2026-02-13 13:34:58 +01:00
Stuart Cook 09720ec3d0 Rollup merge of #152329 - Zoxc:simple-parallel-macro, r=nnethercote
Simplify parallel! macro

This replaces the `parallel!` macro with a `par_fns` function.
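A function taking a slice of closures might look like the following sketch (illustrative: shown running sequentially, whereas the real version would dispatch to a thread pool when parallelism is enabled):

```rust
// Hypothetical shape of a `par_fns`-style function: callers pass closures
// as ordinary values instead of invoking a macro.
fn par_fns(fns: &mut [&mut dyn FnMut()]) {
    for f in fns.iter_mut() {
        f(); // sequential fallback; a parallel build would spawn these
    }
}
```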
2026-02-13 15:19:12 +11:00
mu001999 a07f837491 Remove unused features in compiler 2026-02-13 09:25:39 +08:00