Commit Graph

129 Commits

Author SHA1 Message Date
Ralf Jung 986a280644 simd_fmin/fmax: make semantics and name consistent with scalar intrinsics 2026-03-18 15:17:56 +01:00
Redddy 50db919f5d Change TODO in compiler to FIXME 2026-03-07 12:12:33 +00:00
Jonathan Brouwer ad4b2c01a1 Rollup merge of #153046 - bjorn3:cg_ssa_cleanups, r=TaKO8Ki
Couple of cg_ssa refactorings

These should help a bit with using cg_ssa in cg_clif at some point in the future.
2026-03-02 09:49:22 +01:00
Jacob Pratt cb78bc4dd4 Rollup merge of #151771 - hoodmane:wasm-double-panic, r=bjorn3
Fix: On wasm targets, call `panic_in_cleanup` if panic occurs in cleanup

Previously this was not correctly implemented. Each funclet may need its own terminate block, so this changes the `terminate_block` into a `terminate_blocks` `IndexVec` which can have a terminate_block for each funclet. We key on the first basic block of the funclet -- in particular, this is the start block for the old case of the top level terminate function.

I also fixed the `terminate` handler to not be invoked when a foreign exception is raised, mimicking the behavior from msvc. On wasm, in order to avoid generating a `catch_all` we need to call `llvm.wasm.get.exception` and `llvm.wasm.get.ehselector`.
2026-02-25 21:42:53 -05:00
Hood Chatham acbfd79acf Fix: On wasm targets, call panic_in_cleanup if panic occurs in cleanup
Previously this was not correctly implemented. Each funclet may need its own terminate
block, so this changes the `terminate_block` into a `terminate_blocks` `IndexVec` which
can have a terminate_block for each funclet. We key on the first basic block of the
funclet -- in particular, this is the start block for the old case of the top level
terminate function.

Rather than using a catchswitch/catchpad pair, I used a cleanuppad. The reason for the
pair is to avoid catching foreign exceptions on MSVC. On wasm, it seems that the
catchswitch/catchpad pair is optimized back into a single cleanuppad and a catch_all
instruction is emitted which will catch foreign exceptions. Because the new logic is
only used on wasm, it seemed better to take the simpler approach seeing as they do the
same thing.
2026-02-24 17:47:27 +01:00
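The per-funclet terminate-block change above can be sketched as a lazily filled map keyed by each funclet's first basic block. This is a simplified stand-in (plain `HashMap` and `usize` block ids instead of rustc's `IndexVec` and `BasicBlock`; all names hypothetical), not the compiler's actual code:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for rustc's BasicBlock index type.
type BasicBlock = usize;

/// Sketch: instead of one cached `terminate_block`, keep one per funclet,
/// keyed by the funclet's first basic block. The old single-block case
/// corresponds to keying on the function's start block.
struct FunctionCx {
    terminate_blocks: HashMap<BasicBlock, BasicBlock>,
    next_block: BasicBlock,
}

impl FunctionCx {
    fn new() -> Self {
        FunctionCx { terminate_blocks: HashMap::new(), next_block: 100 }
    }

    /// Get (or lazily create) the terminate block for the funclet
    /// starting at `funclet_start`.
    fn terminate_block(&mut self, funclet_start: BasicBlock) -> BasicBlock {
        if let Some(&bb) = self.terminate_blocks.get(&funclet_start) {
            return bb;
        }
        let bb = self.next_block;
        self.next_block += 1;
        self.terminate_blocks.insert(funclet_start, bb);
        bb
    }
}

fn main() {
    let mut fx = FunctionCx::new();
    let a = fx.terminate_block(0); // top-level funclet (the old single case)
    let b = fx.terminate_block(7); // a cleanup funclet gets its own block
    assert_eq!(a, fx.terminate_block(0)); // cached on repeated lookup
    assert_ne!(a, b);
    println!("ok");
}
```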
bjorn3 f03581a12b Introduce FunctionSignature associated type for BackendTypes
In Cranelift the regular Type enum can't represent function signatures.
Function pointers are represented as plain pointer-sized integers.
2026-02-24 15:40:43 +00:00
bjorn3 e94aaf136d Reorder associated types in BackendTypes to be a bit more logical 2026-02-24 10:39:03 +00:00
bjorn3 e33e56225c Merge typeid_metadata into type_checked_load
This allows removing the Metadata associated type from BackendTypes.
2026-02-24 10:36:50 +00:00
Guillaume Gomez 6ea369cfee Merge commit '70ae207ff5dcd67931c0e67cb62ebceec75a7436' into subtree-update_cg_gcc_2026-02-14 2026-02-14 16:45:12 +01:00
bjorn3 9bcd6ed4c9 Partially inline get_fn_addr/get_fn in codegen_llvm_intrinsic_call
This moves all LLVM intrinsic handling out of the regular call path for
cg_gcc and makes it easier to hook into this code for future cg_llvm
changes.
2025-12-27 17:46:26 +00:00
bjorn3 c506e23426 Pass Function to Builder::function_call in cg_gcc 2025-12-27 17:46:26 +00:00
bjorn3 b40eae863a Don't assume get_fn is only called from codegen_mir in cg_gcc 2025-12-27 17:46:26 +00:00
David Wood a56b1b9283 codegen: implement repr(scalable)
Introduces `BackendRepr::ScalableVector` corresponding to scalable
vector types annotated with `repr(scalable)` which lowers to a scalable
vector type in LLVM.

Co-authored-by: Jamie Cunliffe <Jamie.Cunliffe@arm.com>
2025-12-16 11:00:12 +00:00
Guillaume Gomez 4501327be4 Fix clippy lint in cg_gcc 2025-12-09 16:26:10 +01:00
AudaciousAxiom d3653e9f59 Remove an unnecessary unwrap in rustc_codegen_gcc 2025-11-29 10:34:07 +01:00
Guillaume Gomez 9fce61b21b Merge commit '55171cfa8b53970e06dcd2f5f2e2b8d82eeb0885' into subtree-update_cg_gcc_2025-11-26 2025-11-26 19:41:43 +01:00
bors 2286e5d224 Auto merge of #148481 - GuillaumeGomez:subtree-update_cg_gcc_2025-11-04, r=GuillaumeGomez
Sync rustc_codegen_gcc subtree

cc `@antoyo`

r? ghost
2025-11-13 18:00:02 +00:00
Jeremy Fitzhardinge 5f29f11a4d Add -Zannotate-moves for profiler visibility of move/copy operations
This implements a new unstable compiler flag `-Zannotate-moves` that makes
move and copy operations visible in profilers by creating synthetic debug
information. This is achieved with zero runtime cost by manipulating debug
info scopes to make moves/copies appear as calls to `compiler_move<T, SIZE>`
and `compiler_copy<T, SIZE>` marker functions in profiling tools.

This allows developers to identify expensive move/copy operations in their
code using standard profiling tools, without requiring specialized tooling
or runtime instrumentation.

The implementation works at codegen time. When processing MIR operands
(`Operand::Move` and `Operand::Copy`), the codegen creates an `OperandRef`
with an optional `move_annotation` field containing an `Instance` of the
appropriate profiling marker function. When storing the operand,
`store_with_annotation()` wraps the store operation in a synthetic debug
scope that makes it appear inlined from the marker.

Two marker functions (`compiler_move` and `compiler_copy`) are defined
in `library/core/src/profiling.rs`. These are never actually called -
they exist solely as debug info anchors.

Operations are only annotated if the type:
   - Meets the size threshold (default: 65 bytes, configurable via
     `-Zannotate-moves=SIZE`)
   - Has a non-scalar backend representation (scalars use registers,
     not memcpy)

This has a very small size impact on object file size. With the default
limit it's well under 0.1%, and even with a very small limit of 8 bytes
it's still ~1.5%. This could be enabled by default.
2025-11-06 15:39:45 -08:00
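The annotation criteria listed above reduce to a small predicate. A minimal sketch (hypothetical helper; the real check lives in codegen and consults the type's layout rather than taking booleans):

```rust
/// Sketch of the -Zannotate-moves decision: annotate an operand only if it
/// meets the size threshold and has a non-scalar backend representation.
fn should_annotate(size_in_bytes: u64, is_scalar: bool, threshold: u64) -> bool {
    // Scalars move through registers, not memcpy, so they are never annotated.
    !is_scalar && size_in_bytes >= threshold
}

fn main() {
    let threshold = 65; // default threshold from -Zannotate-moves
    assert!(!should_annotate(64, false, threshold)); // below threshold
    assert!(should_annotate(128, false, threshold)); // large aggregate: annotate
    assert!(!should_annotate(128, true, threshold)); // scalar: stays in registers
    println!("ok");
}
```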
Guillaume Gomez ca3e2ea0c2 Merge commit 'e785c50dad2944c7b4fb2d929de531f859db1e99' into subtree-update_cg_gcc_2025-11-04 2025-11-04 16:01:49 +01:00
Karan Janthe 664e83b3e7 added typetree support for memcpy 2025-09-19 04:02:20 +00:00
Trevor Gross 6fa6a854cd Rollup merge of #144192 - RalfJung:atomicrmw-ptr, r=nikic
atomicrmw on pointers: move integer-pointer cast hacks into backend

Conceptually, we want to have atomic operations on pointers of the form `fn atomic_add(ptr: *mut T, offset: usize, ...)`. However, LLVM does not directly support such operations (https://github.com/llvm/llvm-project/issues/120837), so we have to cast the `offset` to a pointer somewhere.

This PR moves that hack into the LLVM backend, so that the standard library, intrinsic, and Miri all work with the conceptual operation we actually want. Hopefully, one day LLVM will gain a way to represent these operations without integer-pointer casts, and then the hack will disappear entirely.

Cc ```@nikic``` -- this is the best we can do right now, right?
Fixes https://github.com/rust-lang/rust/issues/134617
2025-08-08 14:22:44 -05:00
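The integer-pointer cast hack described above can be illustrated at the library level. This is a sketch of the workaround, not the compiler's code: since LLVM's `atomicrmw add` has no pointer-typed form, the pointer value is smuggled through an integer atomic (names hypothetical):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Conceptually we want `atomic_add(ptr: *mut T, offset: usize)`.
/// The hack: perform the add on the pointer's address as an integer,
/// which is what the backend now does internally on the caller's behalf.
fn atomic_byte_add(slot: &AtomicUsize, offset: usize) -> usize {
    // Returns the old address, like a fetch_add on the pointer itself.
    slot.fetch_add(offset, Ordering::SeqCst)
}

fn main() {
    // Using a plain integer address value to stand in for a pointer.
    let slot = AtomicUsize::new(0x1000);
    let old = atomic_byte_add(&slot, 8);
    assert_eq!(old, 0x1000);
    assert_eq!(slot.load(Ordering::SeqCst), 0x1008);
    println!("ok");
}
```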
Guillaume Gomez 21bd67796e Merge commit '482e8540a1b757ed7bccc2041c5400f051fdb01e' into subtree-update_cg_gcc_2025-08-04 2025-08-04 10:49:43 +02:00
Joel Wejdenstål a448837045 Implement support for explicit tail calls in the MIR block builders and the LLVM codegen backend. 2025-07-26 01:02:29 +02:00
Ralf Jung de1b999ff6 atomicrmw on pointers: move integer-pointer cast hacks into backend 2025-07-23 08:32:55 +02:00
Guillaume Gomez 66017df336 Merge commit 'f682d09eefc6700b9e5851ef193847959acf4fac' into subtree-update_cg_gcc_2025-07-18 2025-07-18 18:31:20 +02:00
mejrs 49421d1fa3 Remove support for dynamic allocas 2025-07-07 23:04:06 +02:00
Guillaume Gomez 666934ac54 Merge commit '4b5c44b14166083eef8d71f15f5ea1f53fc976a0' into subtree-update_cg_gcc_2025-06-30 2025-06-30 16:12:42 +02:00
Guillaume Gomez 3dcbc1e5bc Remove () returned value 2025-06-29 00:20:15 +02:00
Guillaume Gomez a420d39afb Fix signature of filter_landing_pad 2025-06-29 00:00:58 +02:00
Guillaume Gomez 13d0ef30c9 Remove unwanted semi-colon in rustc_codegen_gcc 2025-06-28 23:50:00 +02:00
Guillaume Gomez f8491b11f3 Merge commit 'b7091eca6d8eb0fe88b58cc9a7aec405d8de5b85' into subtree-update_cg_gcc_2025-06-28 2025-06-28 23:37:08 +02:00
Mark Rousskov a46ef2d01e Remove dead instructions in terminate blocks 2025-06-22 11:38:47 -04:00
Guillaume Gomez c48d8d4d80 Merge commit 'fda0bb9588912a3e0606e880ca9f6e913cf8a5a4' into subtree-update_cg_gcc_2025-06-18 2025-06-18 15:11:44 +02:00
León Orell Valerian Liehr d6dc9656ea Rollup merge of #133952 - bjorn3:remove_wasm_legacy_abi, r=alexcrichton
Remove wasm legacy abi

Closes https://github.com/rust-lang/rust/issues/122532
Closes https://github.com/rust-lang/rust/issues/138762
Fixes https://github.com/rust-lang/rust/issues/71871
https://github.com/rust-lang/rust/issues/88152
Fixes https://github.com/rust-lang/rust/issues/115666
Fixes https://github.com/rust-lang/rust/issues/129486
2025-06-15 23:51:53 +02:00
bjorn3 3e944fa391 Remove all support for wasm's legacy ABI 2025-06-14 09:57:06 +00:00
sayantn d56fcd968d Simplify implementation of Rust intrinsics by using type parameters in the cache 2025-06-12 00:32:42 +05:30
Ralf Jung a387c86a92 get rid of rustc_codegen_ssa::common::AtomicOrdering 2025-05-28 22:57:55 +02:00
Guillaume Gomez 1c4ab86955 Merge commit '6ba33f5e1189a5ae58fb96ce3546e76b13d090f5' into subtree-update_cg_gcc_2025-05-14 2025-05-14 13:51:02 +02:00
Ralf Jung 79dfd0a472 remove 'unordered' atomic intrinsics 2025-05-09 17:39:52 +02:00
Matthias Krüger 555df301f8 Rollup merge of #134232 - bjorn3:naked_asm_improvements, r=wesleywiser
Share the naked asm impl between cg_ssa and cg_clif

This was introduced in https://github.com/rust-lang/rust/pull/128004.
2025-04-30 17:27:57 +02:00
Guillaume Gomez e4ea67b3d7 Merge commit 'db1a31c243a649e1fe20f5466ba181da5be35c14' into subtree-update_cg_gcc_2025-04-18 2025-04-18 21:20:11 +02:00
bjorn3 421f22e8bf Pass &mut self to codegen_global_asm 2025-04-14 09:38:04 +00:00
Thalia Archibald 38fad984c6 compiler: Use size_of from the prelude instead of imported
Use `std::mem::{size_of, size_of_val, align_of, align_of_val}` from the
prelude instead of importing or qualifying them.

These functions were added to all preludes in Rust 1.80.
2025-03-07 13:37:04 -08:00
Scott McMurray 6f9cfd694d Rework OperandRef::extract_field to stop calling to_immediate_scalar on things which are already immediates
That means it stops trying to truncate things that are already `i1`s.
2025-02-19 12:03:40 -08:00
Scott McMurray 511bf307f0 Emit trunc nuw for unchecked shifts and to_immediate_scalar
- For shifts this shrinks the IR by no longer needing an `assume` while still providing the UB information
- Having this on the `i8`→`i1` truncations will hopefully help with some places that have to load `i8`s or pass those in LLVM structs without range information
2025-02-19 11:36:52 -08:00
Scott McMurray 9ad6839f7a Set both nuw and nsw in slice size calculation
There's an old note in the code to do this, and now that LLVM-C has an API for it, we might as well.
2025-02-13 21:26:48 -08:00
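The reason both flags are sound on the slice-size multiply: Rust guarantees a slice's total byte size fits in `isize::MAX`, so `len * elem_size` can overflow neither as unsigned (`nuw`) nor as signed (`nsw`) arithmetic. A sketch of that invariant (hypothetical helper, not the compiler's code):

```rust
/// Compute a slice's byte size, asserting the invariant that justifies
/// emitting both `nuw` and `nsw` on the multiply in codegen.
fn slice_size(len: usize, elem_size: usize) -> usize {
    let size = len
        .checked_mul(elem_size)
        .expect("valid slices never overflow");
    // Rust's allocation rules cap total size at isize::MAX, which is why
    // the signed no-wrap flag is also legal.
    assert!(size <= isize::MAX as usize);
    size
}

fn main() {
    assert_eq!(slice_size(10, 8), 80);
    assert_eq!(slice_size(0, 8), 0);
    println!("ok");
}
```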
bjorn3 1fcae03369 Rustfmt 2025-02-08 22:12:13 +00:00
Jubilee Young e42cb3bedc cg_gcc: Directly use rustc_abi instead of reexports 2025-02-04 22:31:56 -08:00
Antoni Boucher 06f0a9bc78 Merge commit '59a81c2ca1edc88ad3ac4b27a8e03977ffb8e73a' into subtree-update_cg_gcc_2025_01_12 2025-01-13 10:53:58 -05:00
lcnr 9cba14b95b use TypingEnv when no infcx is available
The behavior of the type system depends not only on the current
assumptions, but also on the current phase of the compiler. This is
mostly necessary because we need to decide whether and how to reveal
opaque types. We track this via the `TypingMode`.
2024-11-18 10:38:56 +01:00