Commit Graph

863 Commits

Author SHA1 Message Date
Markus Westerlind eb7ed0c917 perf: Lazily receive the Rollback argument in rollback_to 2020-05-05 11:24:36 +02:00
Markus Westerlind 0c5d833812 Move projection_cache into the combined undo log 2020-05-05 11:24:23 +02:00
Markus Westerlind c50fc6e113 Allow SnapshotMap to have a separate undo_log 2020-05-05 11:24:22 +02:00
Markus Westerlind caacdd2024 Move region_constraint to the unified undo log 2020-05-05 11:23:54 +02:00
Markus Westerlind 1506b1fc6a perf: Reduce snapshot/rollback overhead
By merging the undo_log of all structures that are part of the snapshot,
creating a snapshot becomes much cheaper. Since snapshots with no or
few changes are so frequent, this ends up mattering more than the slight
overhead of dispatching on the variants that map to each field.
2020-05-05 10:03:13 +02:00
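
The merged-undo-log idea above can be illustrated with a small, self-contained sketch. All names here (`UndoLog`, `CombinedLog`, the variant payloads) are illustrative stand-ins, not the actual rustc_infer types:

```rust
// One enum variant per structure participating in the snapshot; the
// payloads stand in for whatever each structure needs to undo a change.
#[allow(dead_code)]
enum UndoLog {
    TypeVariable(u32),
    RegionConstraint(u32),
    Projection(u32),
}

struct CombinedLog {
    log: Vec<UndoLog>,
}

struct Snapshot {
    len: usize,
}

impl CombinedLog {
    // Starting a snapshot is just recording the log length: no
    // per-structure bookkeeping, which is why cheap/empty snapshots win.
    fn start_snapshot(&self) -> Snapshot {
        Snapshot { len: self.log.len() }
    }

    // Rollback pays a small dispatch cost per recorded change, but a
    // snapshot with few changes rolls back almost for free.
    fn rollback_to(&mut self, snapshot: Snapshot) {
        while self.log.len() > snapshot.len {
            match self.log.pop().unwrap() {
                UndoLog::TypeVariable(_) => { /* undo in the type-variable table */ }
                UndoLog::RegionConstraint(_) => { /* undo in the region table */ }
                UndoLog::Projection(_) => { /* undo in the projection cache */ }
            }
        }
    }
}

fn main() {
    let mut combined = CombinedLog { log: Vec::new() };
    let snapshot = combined.start_snapshot();
    combined.log.push(UndoLog::TypeVariable(0));
    combined.rollback_to(snapshot);
    assert!(combined.log.is_empty());
}
```
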
Tshepang Lekhonkhobe 3be52b5941 fix rustdoc warnings 2020-05-02 10:41:04 +02:00
Dylan DPC 09f3c908bb Rollup merge of #70950 - nikomatsakis:leak-check-nll-2, r=matthewjasper
extend NLL checker to understand `'empty` combined with universes

This PR extends the NLL region checker to understand `'empty` combined with universes. In particular, it means that the NLL region checker no longer considers `exists<R2> { forall<R1> { R1: R2 } }` to be provable. This is work towards https://github.com/rust-lang/rust/issues/59490, but we're not all the way there. One thing in particular it does not address is error messages.

The modifications to the NLL region inference code turned out to be simpler than expected. The main change is to require that if `R1: R2` then `universe(R1) <= universe(R2)`.

This constraint follows from the region lattice (shown below), because we assume then that `R2` is "at least" `empty(Universe(R2))`, and hence if `R1: R2` (i.e., `R1 >= R2` on the lattice) then `R1` must be in some universe that can name `'empty(Universe(R2))`, which requires that `Universe(R1) <= Universe(R2)`.

```
static ----------+-----...------+       (greatest)
|                |              |
early-bound and  |              |
free regions     |              |
|                |              |
scope regions    |              |
|                |              |
empty(root)   placeholder(U1)   |
|            /                  |
|           /         placeholder(Un)
empty(U1) --         /
|                   /
...                /
|                 /
empty(Un) --------                      (smallest)
```

I also made what turned out to be a somewhat unrelated change to add a special region to represent `'empty(U0)`, which we use (somewhat hackily) to indicate well-formedness checks in some parts of the compiler. This fixes #68550.

I did some investigation into fixing the error message situation. That's a bit trickier: the existing "nice region error" code around placeholders relies on having better error tracing than NLL currently provides, so that it knows (e.g.) that the constraint arose from applying a trait impl and things like that. I was hoping *not* to do such fine-grained tracing in NLL, and it seems like we...largely...got away with that. I'm not sure yet if we'll have to add more tracing information or if there is some sort of alternative.

It's worth pointing out though that I've now kind of shifted my opinion on whose job it should be to enforce lifetimes: I tend to think we ought to be moving back towards *something like* the leak-check (just not the one we *had*). If we took that approach, it would actually resolve this aspect of the error message problem, because we would be resolving 'higher-ranked errors' in the trait solver itself, and hence we wouldn't have to thread as much causal information back to the region checker. I think it would also help us with removing the leak check while not breaking some of the existing crates out there.

Regardless, I think it's worth landing this change, because it was relatively simple and it aligns the set of programs that NLL accepts with those that are accepted by the main region checker, and hence should at least *help* us in migration (though I guess we still also have to resolve the existing crates that rely on leak check for coherence).

r? @matthewjasper
2020-04-30 20:15:20 +02:00
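
A minimal, self-contained sketch of the universe check described above, with universes modelled as plain integers (U0 as the root); this is illustrative, not the rustc_infer implementation:

```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Universe(u32);

// `R1: R2` requires that R1's universe can name `empty(Universe(R2))`,
// i.e. universe(R1) <= universe(R2).
fn outlives_satisfiable(universe_r1: Universe, universe_r2: Universe) -> bool {
    universe_r1 <= universe_r2
}

fn main() {
    let root = Universe(0); // the universe of the existential R2
    let u1 = Universe(1);   // the universe of the placeholder R1

    // exists<R2> { forall<R1> { R1: R2 } } is no longer provable:
    assert!(!outlives_satisfiable(u1, root));
    // ...while the reverse direction remains fine:
    assert!(outlives_satisfiable(root, u1));
}
```
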
Alex Aktsipetrov 357f4ce431 Replace thread_local with generator resume arguments in box_region. 2020-04-25 18:19:27 +02:00
Dylan DPC 16be619c6a Rollup merge of #71369 - ctaggart:wasm32_profiling, r=ecstatic-morse
allow wasm32 compilation of librustc_data_structures/profiling.rs

I'm trying to use rustfmt from a wasm app. I ran into this compilation problem https://github.com/rust-lang/rustfmt/issues/4132 and, after investigating, it looked like it only required adjusting a few cfgs. I based it on how measureme added support in https://github.com/rust-lang/measureme/pull/43.

My testing on my MacBook was just that librustc_data_structures now builds with both:
- cargo build
- cargo build --target wasm32-unknown-unknown
2020-04-22 23:19:22 +02:00
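
As a hedged sketch of the kind of `cfg` gating involved (the sink types here are stand-ins, not the actual profiling.rs items; assumes the cfg-if crate as a dependency):

```rust
use cfg_if::cfg_if;

#[allow(dead_code)]
struct FileSerializationSink; // stand-in: plain file I/O, available everywhere
#[allow(dead_code)]
struct MmapSerializationSink; // stand-in: mmap-based, unavailable on wasm32/wasi

cfg_if! {
    if #[cfg(any(target_arch = "wasm32", target_os = "wasi"))] {
        type DefaultSink = FileSerializationSink;
    } else {
        type DefaultSink = MmapSerializationSink;
    }
}

fn main() {
    println!("{}", std::any::type_name::<DefaultSink>());
}
```
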
Cameron Taggart 51b194f09a remove some extra } 2020-04-22 09:18:54 -06:00
Cameron Taggart 02241db720 suggested rearrangement of the cfg if statements
Co-Authored-By: ecstatic-morse <ecstaticmorse@gmail.com>
2020-04-22 09:12:44 -06:00
Cameron Taggart d5963ed0c4 accept cfg_if suggestion
Co-Authored-By: bjorn3 <bjorn3@users.noreply.github.com>
2020-04-21 16:36:08 -06:00
Cameron Taggart f72de476b7 use cfg_if! and use FileSerializationSink for wasi 2020-04-21 13:07:05 -06:00
Niko Matsakis cb9458d3ff sccs are computed in dependency order
We don't need the `scc_dependency_order` vector, `all_sccs` is already
in dependency order.
2020-04-21 08:57:14 +00:00
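
A hedged sketch of the invariant relied on here: with Tarjan-style SCC numbering, every edge goes from a higher-numbered SCC to a lower-numbered one, so iterating indices in ascending order visits each SCC after all of its successors (the names and data below are illustrative, not the rustc_data_structures API):

```rust
fn main() {
    // successors[i] lists the SCCs that SCC i depends on; Tarjan-style
    // numbering guarantees they all have indices smaller than i.
    let successors: Vec<Vec<usize>> = vec![vec![], vec![0], vec![0, 1]];

    // A single forward pass suffices: by the time we reach `scc`, every
    // dependency already has its final value.
    let mut depth = vec![0u32; successors.len()];
    for scc in 0..successors.len() {
        depth[scc] = 1 + successors[scc].iter().map(|&s| depth[s]).max().unwrap_or(0);
    }
    assert_eq!(depth, vec![1, 2, 3]);
}
```
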
Cameron Taggart 6fb524a455 ./x.py fmt 2020-04-20 20:47:27 -06:00
Cameron Taggart df3776bc0f allow wasm32 compilation of librustc_data_structures/profiling.rs 2020-04-20 18:09:11 -06:00
Shotaro Yamada fae4e2a155 Remove unused ToHex/FromHex trait 2020-04-20 17:59:27 +09:00
Yuki Okushi 58ad251ea8 Move MapInPlace to rustc_data_structures 2020-04-18 13:02:33 +09:00
Ana-Maria Mihalache 8f081d5b2b rustc_target::abi: add Primitive variant to FieldsShape. 2020-04-16 15:15:51 +00:00
Luca Barbieri 45ede927fb Depend on libc from crates.io 2020-04-11 11:07:04 -04:00
Dylan MacKenzie 0fc0f34ae4 Use tri-color search for unconditional recursion lint 2020-04-09 21:07:48 -07:00
Linus Färnstrand fcf45999f7 Stop importing int/float modules in librustc_* 2020-04-05 11:22:01 +02:00
Mazdak Farrokhzad 7710f2dd5c rustc -> rustc_middle part 1 2020-03-30 07:02:56 +02:00
bors 0a2df62073 Auto merge of #69916 - oli-obk:mir_bless, r=eddyb
Enable blessing of mir opt tests

cc @rust-lang/wg-mir-opt
cc @RalfJung

Long overdue, but now you can finally just add a

```rust
// EMIT_MIR rustc.function_name.MirPassName.before.mir
```

(or `after.mir` since most of the time you want to know the MIR after a pass). A `--bless` invocation will automatically create the files for you.

I suggest we do this for all mir opt tests that have all of the MIR in their source anyway.

If you use `rustc.function.MirPass.diff`, you only get the diff that the MIR pass causes on the MIR.

Fixes #67865
2020-03-27 12:58:34 +00:00
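
For illustration, a blessed mir-opt test might look something like this (the pass and function names are hypothetical, not taken from the PR):

```rust
// EMIT_MIR rustc.main.ConstProp.diff
fn main() {
    // A constant-propagation pass would fold this to `4`, and a --bless
    // run would (re)generate the expected .diff file alongside the test.
    let x = 2 + 2;
    let _ = x;
}
```
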
Oliver Scherer c9a5a03ffd Enable --blessing of MIR dumps 2020-03-26 15:26:33 +01:00
Mateusz Mikuła f5e702df0e Upgrade rustc and bootstrap dependencies 2020-03-26 14:11:23 +01:00
Mazdak Farrokhzad c984a96189 Rollup merge of #70269 - matthiaskrgr:clippy_closures, r=Dylan-DPC
remove redundant closures (clippy::redundant_closure)
2020-03-23 04:26:15 +01:00
Matthias Krüger 263cbd1bbe remove redundant closures (clippy::redundant_closure) 2020-03-22 12:43:19 +01:00
Thomas Bächler c8140a88f6 Return NonZeroU64 from ThreadId::as_u64.
As discussed in #67939, this allows turning Option<ThreadId> into Option<NonZeroU64>, which
can then be stored inside an AtomicU64.
2020-03-21 19:48:23 +01:00
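
A minimal sketch of the pattern this enables, relying on the guarantee that Option<NonZeroU64> has the same layout as u64, with 0 encoding None (the helper names are illustrative):

```rust
use std::num::NonZeroU64;
use std::sync::atomic::{AtomicU64, Ordering};

fn store_id(slot: &AtomicU64, id: Option<NonZeroU64>) {
    // 0 is never a valid NonZeroU64, so it can safely encode None.
    slot.store(id.map_or(0, NonZeroU64::get), Ordering::Relaxed);
}

fn load_id(slot: &AtomicU64) -> Option<NonZeroU64> {
    NonZeroU64::new(slot.load(Ordering::Relaxed))
}

fn main() {
    assert_eq!(std::mem::size_of::<Option<NonZeroU64>>(), 8);
    let slot = AtomicU64::new(0);
    assert_eq!(load_id(&slot), None);
    store_id(&slot, NonZeroU64::new(42));
    assert_eq!(load_id(&slot), NonZeroU64::new(42));
}
```
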
Matthias Krüger ad00e91887 remove redundant returns (clippy::needless_return) 2020-03-20 20:23:03 +01:00
Jonas Schievink f53f9a88f1 Bump the bootstrap compiler 2020-03-15 19:43:25 +01:00
Dylan DPC bf6e715fa0 Rollup merge of #69967 - mark-i-m:rinfctx, r=matthewjasper
Remove a few `Rc`s from RegionInferenceCtxt

fixes https://github.com/rust-lang/rust/issues/55853

r? @matthewjasper
2020-03-15 02:44:18 +01:00
Mark Mansi a58b17f2b5 update rustdocs for frozen 2020-03-13 13:36:16 -05:00
Mark Mansi da4e33a9e6 move frozen to rustc_data_structures 2020-03-13 13:28:25 -05:00
Matthias Krüger 7b1b08cfee remove lifetimes that can be elided (clippy::needless_lifetimes) 2020-03-12 20:03:09 +01:00
Matthias Krüger 136ad015b6 fix various typos 2020-03-06 15:19:31 +01:00
Matthias Krüger 9523c89f18 use is_empty() instead of len() == x to determine if structs are empty. 2020-02-28 15:16:27 +01:00
Yuki Okushi 7824a9d47d Rollup merge of #69500 - cuviper:par_for_each_in-item, r=Mark-Simulacrum
Simplify the signature of par_for_each_in

Given `T: IntoIterator`/`IntoParallelIterator`, `T::Item` is
unambiguous, so we don't need the explicit trait casting.
2020-02-27 14:38:09 +09:00
Josh Stone 3d47ebeb0e Simplify the signature of par_for_each_in
Given `T: IntoIterator`/`IntoParallelIterator`, `T::Item` is
unambiguous, so we don't need the explicit trait casting.
2020-02-26 15:08:21 -08:00
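
The simplification can be seen in a sequential sketch (the signature below is illustrative, not the actual rustc_data_structures one): because `T: IntoIterator` fixes `T::Item`, the closure's argument type needs no extra annotation or casting.

```rust
fn for_each_in<T: IntoIterator>(t: T, mut op: impl FnMut(T::Item)) {
    // T::Item is determined entirely by T, so callers never spell it out.
    for item in t {
        op(item);
    }
}

fn main() {
    let mut sum = 0;
    for_each_in(vec![1, 2, 3], |x| sum += x);
    assert_eq!(sum, 6);
}
```
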
Matthias Krüger 1a6b930eeb clarify operator precedence 2020-02-26 12:43:37 +01:00
Matthias Krüger 5ae4500eff remove redundant clones in librustc_mir_build and librustc_data_structures 2020-02-24 14:56:29 +01:00
bors 03d2f5cd6c Auto merge of #69332 - nnethercote:revert-u8to64_le-changes, r=michaelwoerister
Revert `u8to64_le` changes from #68914.

`SipHasher128`'s `u8to64_le` function was simplified in #68914.
Unfortunately, the new version is slower, because it introduces `memcpy`
calls with non-statically-known lengths.

This commit reverts the change, and adds an explanatory comment (which
is also added to `libcore/hash/sip.rs`). This barely affects
`SipHasher128`'s speed because it doesn't use `u8to64_le` much, but it
does result in `SipHasher128` once again being consistent with
`libcore/hash/sip.rs`.

r? @michaelwoerister
2020-02-22 07:26:58 +00:00
Nicholas Nethercote 100ff5a256 Revert u8to64_le changes from #68914.
`SipHasher128`'s `u8to64_le` function was simplified in #68914.
Unfortunately, the new version is slower, because it introduces `memcpy`
calls with non-statically-known lengths.

This commit reverts the change, and adds an explanatory comment (which
is also added to `libcore/hash/sip.rs`). This barely affects
`SipHasher128`'s speed because it doesn't use `u8to64_le` much, but it
does result in `SipHasher128` once again being consistent with
`libcore/hash/sip.rs`.
2020-02-21 10:11:35 +11:00
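
For illustration, here is roughly why the "simpler" version was slower; this sketch is not the rustc code, just a demonstration of a runtime-length copy that compiles to a `memcpy` call the optimizer cannot unroll:

```rust
fn u8to64_le_dynamic(buf: &[u8]) -> u64 {
    debug_assert!(buf.len() <= 8);
    let mut out = [0u8; 8];
    // `buf.len()` is not statically known, so this becomes a memcpy call
    // rather than a few fixed-width loads.
    out[..buf.len()].copy_from_slice(buf);
    u64::from_le_bytes(out)
}

fn main() {
    assert_eq!(u8to64_le_dynamic(&[0x01, 0x02, 0x03]), 0x0003_0201);
}
```
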
bors d3ebd592d0 Auto merge of #69072 - ecstatic-morse:associated-items, r=petrochenkov
O(log n) lookup of associated items by name

Resolves #68957, in which compile time is quadratic in the number of associated items. This PR makes name lookup use binary search instead of a linear scan to improve its asymptotic performance. As a result, the pathological case from that issue now runs in 8 seconds on my local machine, as opposed to many minutes on the current stable.

Currently, method resolution must do a linear scan through all associated items of a type to find one with a certain name. This PR changes the result of the `associated_items` query to a data structure that preserves the definition order of associated items (which is used, e.g., for the layout of trait object vtables) while adding an index of those items sorted by (unhygienic) name. When doing name lookup, we first find all items with the same `Symbol` using binary search, then run hygienic comparison to find the one we are looking for. Ideally, this would be implemented using an insertion-order preserving, hash-based multi-map, but one is not readily available.

Someone who is more familiar with identifier hygiene could probably make this better by auditing the uses of the `AssociatedItems` interface. My goal was to preserve the current behavior exactly, even if it seemed strange (I left at least one FIXME to this effect). For example, some places use comparison with `ident.modern()` and some places use `tcx.hygienic_eq` which requires the `DefId` of the containing `impl`. I don't know whether those approaches are equivalent or which one should be preferred.
2020-02-20 22:44:01 +00:00
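
A hedged sketch of the data structure described above: items stay in definition order, while a parallel index sorted by name gives O(log n) lookup; hygienic comparison would then run on the handful of candidates. Names and types are illustrative, not the rustc API:

```rust
struct AssociatedItems {
    in_definition_order: Vec<(String, u32)>, // (name, DefId stand-in)
    by_name: Vec<usize>,                     // indices into the above, sorted by name
}

impl AssociatedItems {
    fn new(items: Vec<(String, u32)>) -> Self {
        let mut by_name: Vec<usize> = (0..items.len()).collect();
        by_name.sort_by(|&a, &b| items[a].0.cmp(&items[b].0));
        AssociatedItems { in_definition_order: items, by_name }
    }

    // Two binary searches bracket the run of items sharing `name`.
    fn indices_with_name(&self, name: &str) -> &[usize] {
        let lo = self.by_name.partition_point(|&i| self.in_definition_order[i].0.as_str() < name);
        let hi = self.by_name.partition_point(|&i| self.in_definition_order[i].0.as_str() <= name);
        &self.by_name[lo..hi]
    }
}

fn main() {
    let items = AssociatedItems::new(vec![
        ("len".into(), 0),
        ("iter".into(), 1),
        ("len".into(), 2), // same name, different hygiene in the real compiler
    ]);
    assert_eq!(items.indices_with_name("len").len(), 2);
    assert_eq!(items.indices_with_name("iter"), [1]);
}
```
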
Dylan MacKenzie ea3c9d27cf Implement an insertion-order preserving, efficient multi-map 2020-02-19 10:51:40 -08:00
John Kåre Alsaker a8522256c5 Tune inlining 2020-02-19 16:03:21 +01:00
Yuki Okushi 2c5bdee9e4 Rollup merge of #68475 - Aaron1011:fix/forest-caching, r=nikomatsakis
Use a `ParamEnvAnd<Predicate>` for caching in `ObligationForest`

Previously, we used a plain `Predicate` to cache results (e.g. successes
and failures) in ObligationForest. However, fulfillment depends on the
precise `ParamEnv` used, so this is unsound in general.

This commit changes the impl of `ForestObligation` for
`PendingPredicateObligation` to use `ParamEnvAnd<Predicate>` instead of
`Predicate` for the associated type. The associated type and method are
renamed from 'predicate' to 'cache_key' to reflect the fact that the type is
no longer just a predicate.
2020-02-15 07:17:45 +09:00
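
A hedged sketch of the shape of this fix, with stand-in types for `ParamEnv` and `Predicate` (only the names `ForestObligation`, `PendingPredicateObligation`, `ParamEnvAnd`, and `cache_key` come from the message above; the bodies are illustrative):

```rust
use std::collections::HashSet;
use std::hash::Hash;

#[derive(Clone, PartialEq, Eq, Hash)]
struct ParamEnvAnd<T> {
    param_env: u32, // stand-in for a real ParamEnv
    value: T,
}

trait ForestObligation {
    type CacheKey: Clone + Eq + Hash;
    fn as_cache_key(&self) -> Self::CacheKey;
}

struct PendingPredicateObligation {
    param_env: u32,
    predicate: String, // stand-in for a real Predicate
}

impl ForestObligation for PendingPredicateObligation {
    // Keyed by the (ParamEnv, Predicate) pair, not the predicate alone.
    type CacheKey = ParamEnvAnd<String>;
    fn as_cache_key(&self) -> Self::CacheKey {
        ParamEnvAnd { param_env: self.param_env, value: self.predicate.clone() }
    }
}

fn main() {
    let mut done: HashSet<ParamEnvAnd<String>> = HashSet::new();
    let a = PendingPredicateObligation { param_env: 0, predicate: "T: Send".into() };
    done.insert(a.as_cache_key());

    // The same predicate under a different ParamEnv is a distinct entry,
    // which is exactly what a Predicate-only key got wrong.
    let b = PendingPredicateObligation { param_env: 1, predicate: "T: Send".into() };
    assert!(!done.contains(&b.as_cache_key()));
}
```
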
bors 21ed50522d Auto merge of #68693 - Zoxc:query-no-arc, r=michaelwoerister
Construct query job latches on-demand

r? @michaelwoerister
2020-02-14 01:37:50 +00:00
Andreas Jonson cec0ed0219 add selfprofiling for new llvm passmanager 2020-02-13 08:02:18 +01:00
Dylan DPC f2d829ce6a Rollup merge of #68914 - nnethercote:speed-up-SipHasher128, r=michaelwoerister
Speed up `SipHasher128`.

The current code in `SipHasher128::short_write` is inefficient. It uses
`u8to64_le` (which is complex and slow) to extract just the right number of
bytes of the input into a u64 and pad the result with zeroes. It then
left-shifts that value in order to bitwise-OR it with `self.tail`.

For example, imagine we have a u32 input `0xIIHH_GGFF` and only need three bytes
to fill up `self.tail`. The current code uses `u8to64_le` to construct
`0x0000_0000_00HH_GGFF`, which is just `0xIIHH_GGFF` with the `0xII` removed and
zero-extended to a u64. The code then left-shifts that value by five bytes --
discarding the `0x00` byte that replaced the `0xII` byte! -- to give
`0xHHGG_FF00_0000_0000`. It then ORs that value with `self.tail`.

There's a much simpler way to do it: zero-extend to u64 first, then left shift.
E.g. `0xIIHH_GGFF` is zero-extended to `0x0000_0000_IIHH_GGFF`, and then
left-shifted to `0xHHGG_FF00_0000_0000`. We don't have to take time to exclude
the unneeded `0xII` byte, because it just gets shifted out anyway! It also avoids
multiple occurrences of `unsafe`.

There's a similar story with the setting of `self.tail` at the method's end.
The current code uses `u8to64_le` to extract the remaining part of the input,
but the same effect can be achieved more quickly with a right shift on the
zero-extended input.

This commit changes `SipHasher128` to use the simpler shift-based approach. The
code is also smaller, which means that `short_write` is now inlined where
previously it wasn't, which makes things faster again. This gives big
speed-ups for all incremental builds, especially "baseline" incremental
builds.

r? @michaelwoerister
2020-02-12 14:21:06 +01:00
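
To make the arithmetic concrete, here is a self-contained sketch of the shift-based approach using the example bytes from the message above (this is illustrative, not the actual `short_write`):

```rust
fn main() {
    let input: u32 = 0xDDCC_BBAA; // plays the role of 0xIIHH_GGFF
    let needed_bytes = 3;         // bytes still needed to fill `self.tail`

    // Zero-extend first, then left-shift: the unneeded high byte (0xDD)
    // simply falls off the top of the u64 during the shift.
    let x = input as u64;                       // 0x0000_0000_DDCC_BBAA
    let merged = x << (8 * (8 - needed_bytes)); // like 0xHHGG_FF00_0000_0000
    assert_eq!(merged, 0xCCBB_AA00_0000_0000);

    // The leftover input for the new `self.tail` is just a right shift
    // of the same zero-extended value -- no u8to64_le needed.
    let leftover = x >> (8 * needed_bytes);
    assert_eq!(leftover, 0xDD);
}
```
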