Commit Graph

40 Commits

Author SHA1 Message Date
Manish Goregaokar 4ef6847d4d Use memchr for [i8]::contains as well 2017-12-31 20:35:39 +05:30
Manish Goregaokar f8f28886e0 Use memchr in [u8]::contains 2017-12-13 09:11:42 -06:00
Manish Goregaokar 2bf0df777b Move rust memchr impl to libcore 2017-12-13 01:15:18 -06:00
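The three memchr entries above route byte searches in `contains` through a dedicated memchr implementation rather than a naive per-element loop. A rough sketch of the idea, using the external `memchr` crate purely for illustration (the in-tree libcore implementation is its own code):

```rust
// Illustrative sketch only: a single-byte `contains` check expressed as a
// memchr-style search. Assumes the external `memchr` crate as a dependency.
fn slice_contains_byte(haystack: &[u8], needle: u8) -> bool {
    // `memchr::memchr` returns the index of the first matching byte, if any.
    memchr::memchr(needle, haystack).is_some()
}
```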
bors b32267f2c1 Auto merge of #45595 - scottmcm:iter-try-fold, r=dtolnay
Short-circuiting internal iteration with Iterator::try_fold & try_rfold

These are the core methods in terms of which the other methods (`fold`, `all`, `any`, `find`, `position`, `nth`, ...) can be implemented, allowing Iterator implementors to get the full goodness of internal iteration by only overriding one method (per direction).

Based on the `Try` trait, so it works with both `Result` and `Option` (🎉 https://github.com/rust-lang/rust/pull/42526).  The `try_fold` rustdoc examples use `Option` and the `try_rfold` ones use `Result`.

AKA continuing in the vein of PRs https://github.com/rust-lang/rust/pull/44682 & https://github.com/rust-lang/rust/pull/44856 for more of `Iterator`.

New bench following the pattern from the latter of those:
```
test iter::bench_take_while_chain_ref_sum          ... bench:   1,130,843 ns/iter (+/- 25,110)
test iter::bench_take_while_chain_sum              ... bench:     362,530 ns/iter (+/- 391)
```

I also ran the benches without the `fold` & `rfold` overrides to test their new default impls, with basically no change.  I left them there, though, to take advantage of existing overrides and because `AlwaysOk` has some sub-optimality due to https://github.com/rust-lang/rust/issues/43278 (which 45225 should fix).

If you're wondering why there are three type parameters, see issue https://github.com/rust-lang/rust/issues/45462

Thanks to @bluss for the [original IRLO thread](https://internals.rust-lang.org/t/pre-rfc-fold-ok-is-composable-internal-iteration/4434) and the rfold PR, and to @cuviper for adding so many folds, [encouraging me](https://github.com/rust-lang/rust/pull/45379#issuecomment-339424670) to make this PR, and finding a catastrophic bug in pre-review.
2017-11-17 07:43:08 +00:00
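To make the "one method per direction" claim above concrete, here is a minimal sketch (not the libcore code) of how `all` and `fold` can be written on top of `try_fold`:

```rust
// Sketch only: other consumers expressed in terms of `try_fold`.
fn all_via_try_fold<I, F>(iter: &mut I, mut f: F) -> bool
where
    I: Iterator,
    F: FnMut(I::Item) -> bool,
{
    // Short-circuits: returning `None` stops the internal iteration early.
    iter.try_fold((), |(), x| if f(x) { Some(()) } else { None })
        .is_some()
}

fn fold_via_try_fold<I, B, F>(iter: &mut I, init: B, mut f: F) -> B
where
    I: Iterator,
    F: FnMut(B, I::Item) -> B,
{
    // With an always-`Some` wrapper, `try_fold` degenerates into plain `fold`.
    iter.try_fold(init, |acc, x| Some(f(acc, x))).unwrap()
}
```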
bors 24bb4d1e75 Auto merge of #45333 - alkis:master, r=bluss
Improve SliceExt::binary_search performance

Improve the performance of binary_search by reducing the number of unpredictable conditional branches in the loop. In addition, improve the benchmarks to test performance in the L1, L2, and L3 caches on sorted arrays with or without dups.

Before:

```
test slice::binary_search_l1                               ... bench:          48 ns/iter (+/- 1)
test slice::binary_search_l2                               ... bench:          63 ns/iter (+/- 0)
test slice::binary_search_l3                               ... bench:         152 ns/iter (+/- 12)
test slice::binary_search_l1_with_dups                     ... bench:          36 ns/iter (+/- 0)
test slice::binary_search_l2_with_dups                     ... bench:          64 ns/iter (+/- 1)
test slice::binary_search_l3_with_dups                     ... bench:         153 ns/iter (+/- 6)
```

After:

```
test slice::binary_search_l1                               ... bench:          15 ns/iter (+/- 0)
test slice::binary_search_l2                               ... bench:          23 ns/iter (+/- 0)
test slice::binary_search_l3                               ... bench:         100 ns/iter (+/- 17)
test slice::binary_search_l1_with_dups                     ... bench:          15 ns/iter (+/- 0)
test slice::binary_search_l2_with_dups                     ... bench:          23 ns/iter (+/- 0)
test slice::binary_search_l3_with_dups                     ... bench:          98 ns/iter (+/- 14)
```
2017-11-11 18:17:14 +00:00
Alkis Evlogimenos 2ca111b6b9 Improve the performance of binary_search by reducing the number of
unpredictable conditional branches in the loop. In addition, improve the
benchmarks to test performance in the L1, L2, and L3 caches on sorted arrays
with or without dups.

Before:

```
test slice::binary_search_l1                               ... bench:  48 ns/iter (+/- 1)
test slice::binary_search_l2                               ... bench:  63 ns/iter (+/- 0)
test slice::binary_search_l3                               ... bench: 152 ns/iter (+/- 12)
test slice::binary_search_l1_with_dups                     ... bench:  36 ns/iter (+/- 0)
test slice::binary_search_l2_with_dups                     ... bench:  64 ns/iter (+/- 1)
test slice::binary_search_l3_with_dups                     ... bench: 153 ns/iter (+/- 6)
```

After:

```
test slice::binary_search_l1                               ... bench:  15 ns/iter (+/- 0)
test slice::binary_search_l2                               ... bench:  23 ns/iter (+/- 0)
test slice::binary_search_l3                               ... bench: 100 ns/iter (+/- 17)
test slice::binary_search_l1_with_dups                     ... bench:  15 ns/iter (+/- 0)
test slice::binary_search_l2_with_dups                     ... bench:  23 ns/iter (+/- 0)
test slice::binary_search_l3_with_dups                     ... bench:  98 ns/iter (+/- 14)
```
2017-11-11 16:00:26 +01:00
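A minimal sketch of the branch-reduction idea described in the two entries above (not the actual libcore code): keep a base index and a halving size so the loop body contains a single comparison that the compiler can turn into a conditional move, and check for equality only once at the end.

```rust
// Sketch only: binary search with no data-dependent early-exit branch in the
// loop; returns Ok(index) for a match or Err(insertion_point) otherwise.
fn binary_search_branchless<T: Ord>(s: &[T], x: &T) -> Result<usize, usize> {
    let mut base = 0usize;
    let mut size = s.len();
    while size > 1 {
        let half = size / 2;
        let mid = base + half;
        // Typically compiled to a conditional move rather than a branch.
        if s[mid] <= *x {
            base = mid;
        }
        size -= half;
    }
    if !s.is_empty() && s[base] == *x {
        Ok(base)
    } else {
        // Insertion point is one past `base` if the element there is smaller.
        Err(base + (!s.is_empty() && s[base] < *x) as usize)
    }
}
```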
Badel2 4bd6be9dc6 Inclusive range updated to ..= syntax 2017-11-06 13:43:59 +01:00
whitequark 1cc88be2eb De-stabilize core::slice::{from_ref, from_ref_mut}. 2017-11-01 22:21:29 +00:00
Scott McMurray eef4d42a3f Fundamental internal iteration with try_fold
This is the core method in terms of which the other methods (fold, all, any, find, position, nth, ...) can be implemented, allowing Iterator implementors to get the full goodness of internal iteration by only overriding one method (per direction).
2017-10-29 15:45:20 -07:00
whitequark 8431811728 Bring back slice::ref_slice as slice::from_ref.
These functions were deprecated and removed in 1.5, but such simple
functionality shouldn't require using unsafe code, and it isn't
cluttering libstd too much.
2017-10-23 22:53:31 +00:00
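The point of the entry above is that turning a reference into a one-element slice needs `unsafe` internally, so it belongs in the library rather than in user code. Roughly, such a function looks like this (a sketch, not necessarily the exact libcore source):

```rust
use std::slice;

// A reference is a valid one-element slice, but saying so requires
// `from_raw_parts`, hence the unsafe block lives in the library.
fn one_element_slice<T>(item: &T) -> &[T] {
    unsafe { slice::from_raw_parts(item, 1) }
}
```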
Niv Kaminer ff99111f48 address some FIXMEs whose associated issues were marked as closed
remove FIXME(#13101) since `assert_receiver_is_total_eq` stays.
remove FIXME(#19649) now that stability markers render.
remove FIXME(#13642) now that the benchmarks have been moved.
remove FIXME(#6220) now that floating points can be formatted.
remove FIXME(#18248) and write tests for `Rc<str>` and `Rc<[u8]>`
remove reference to irrelevant issues in FIXME(#1697, #2178...)
update FIXME(#5516) to point to getopts issue 7
update FIXME(#7771) to point to RFC 628
update FIXME(#19839) to point to issue 26925
2017-09-30 11:33:47 +03:00
Badel2 4737c5a068 Substitute ... with the expanded form
RangeInclusive { start, end }; this way we suppress the warnings about `...` in expressions being deprecated until `..=` is available in the compiler
2017-09-22 22:05:18 +02:00
Alex Burka e64efc91f4 Add support for ..= syntax
Add ..= to the parser

Add ..= to libproc_macro

Add ..= to ICH

Highlight ..= in rustdoc

Update impl Debug for RangeInclusive to ..=

Replace `...` with `..=` in range docs

Make the dotdoteq warning point to the `...`

Add warning for ... in expressions

Updated more tests to the ..= syntax

Updated even more tests to the ..= syntax

Updated the inclusive_range entry in unstable book
2017-09-22 22:05:18 +02:00
Chris Stankus e15a07ad64 Remove Borrow bound from SliceExt::binary_search 2017-08-30 12:02:25 -05:00
Scott McMurray c4cb2d1f2e Add [T]::swap_with_slice
The safe version of a method from ptr, like [T]::copy_from_slice
2017-08-21 22:20:00 -07:00
Zack M. Davis 1b6c9605e4 use field init shorthand EVERYWHERE
Like #43008 (f668999), but _much more aggressive_.
2017-08-15 15:29:17 -07:00
Alex Crichton 53d8b1d051 std: Cut down #[inline] annotations where not necessary
This PR cuts down on a large number of `#[inline(always)]` and `#[inline]`
annotations in libcore for various core functions. The `#[inline(always)]`
annotation is almost never needed and is detrimental to debug build times as it
forces LLVM to perform inlining when it otherwise wouldn't need to in debug
builds. Additionally, `#[inline]` is an unnecessary annotation on almost all
generic functions because the function will already be monomorphized into other
codegen units and otherwise rarely needs the extra "help" from us to tell LLVM
to inline something.

Overall this PR cut the compile time of a [microbenchmark][1] by 30% from 1s to
0.7s.

[1]: https://gist.github.com/alexcrichton/a7d70319a45aa60cf36a6a7bf540dd3a
2017-07-20 12:01:32 -07:00
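To make the monomorphization point above concrete: a generic function is instantiated in the crate that calls it, so LLVM already sees its body there, while a small non-generic function only becomes a cross-crate inlining candidate (without LTO) when marked `#[inline]`. A sketch:

```rust
// Generic: monomorphized into the caller's codegen unit anyway, so an
// `#[inline]` attribute usually adds nothing.
pub fn first_or<T: Copy>(s: &[T], default: T) -> T {
    s.first().copied().unwrap_or(default)
}

// Non-generic: without `#[inline]` (or LTO), other crates only see a call.
#[inline]
pub fn is_even(x: u32) -> bool {
    x % 2 == 0
}
```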
Stjepan Glavina bfbe4039f8 Fix tidy errors 2017-07-02 11:16:37 +02:00
Alex Crichton be7ebdd512 Bump version and stage0 compiler 2017-06-19 22:25:05 -07:00
bors 558cd1e393 Auto merge of #41670 - scottmcm:slice-rotate, r=alexcrichton
Add an in-place rotate method for slices to libcore

A helpful primitive for moving chunks of data around inside a slice.

For example, if you have a range selected and are drag-and-dropping it somewhere else (Example from [Sean Parent's talk](https://youtu.be/qH6sSOr-yk8?t=560)).

(If this should be an RFC instead of a PR, please let me know.)

Edit: changed example
2017-06-02 07:51:20 +00:00
Mark Simulacrum 00c87a6486 Rollup merge of #42134 - scottmcm:rangeinclusive-struct, r=aturon
Make RangeInclusive just a two-field struct

Not being an enum improves ergonomics and consistency, especially since the NonEmpty variant wasn't prevented from being empty. It can still be iterated without an extra "done" bit by making the range have !(start <= end), which is even possible without changing the Step trait.

Implements merged https://github.com/rust-lang/rfcs/pull/1980; tracking issue https://github.com/rust-lang/rust/issues/28237.

This is definitely a breaking change to anything consuming `RangeInclusive` directly (not as an Iterator) or constructing it without using the sugar.  Is there some change that would make sense before this so compilation failures could be compatibly fixed ahead of time?

r? @aturon (as FCP proposer on the RFC)
2017-05-24 19:50:01 -06:00
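A minimal sketch of the "no done bit" trick described above (not the real libcore iterator): after yielding the final element, the range is mutated so that `start <= end` no longer holds, which marks it exhausted without an extra field.

```rust
// Sketch only: an inclusive integer range that signals exhaustion by
// violating `start <= end` instead of carrying a separate `done` flag.
struct Inclusive {
    start: u32,
    end: u32,
}

impl Iterator for Inclusive {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.start > self.end {
            return None; // exhausted, or empty to begin with
        }
        let item = self.start;
        if self.start == self.end {
            // Last element: make the range empty instead of overflowing.
            self.start = 1;
            self.end = 0;
        } else {
            self.start += 1;
        }
        Some(item)
    }
}
```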
Scott McMurray 094d61f079 Stop returning k from [T]::rotate 2017-05-21 03:05:19 -07:00
Scott McMurray a92ad5e52a Update slice_rotate to a real tracking number 2017-05-21 01:55:43 -07:00
Scott McMurray c05676b97f Add an in-place rotate method for slices to libcore
A helpful primitive for moving chunks of data around inside a slice.
In particular, adding elements to the end of a Vec then moving them
somewhere else, as a way to do efficient multiple-insert.  (There's
drain for efficient block-remove, but no easy way to block-insert.)

Talk with another example: <https://youtu.be/qH6sSOr-yk8?t=560>
2017-05-21 01:55:43 -07:00
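The rotate method described above has since reached stable Rust as `rotate_left`/`rotate_right`; a sketch of the multiple-insert use case from the commit message, using today's names:

```rust
fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    // Block-insert: push the new elements at the end...
    v.extend_from_slice(&[8, 9]);
    // ...then rotate the tail so they land before index 2.
    v[2..].rotate_right(2);
    assert_eq!(v, [1, 2, 8, 9, 3, 4, 5]);
}
```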
Scott McMurray f166bd9857 Make RangeInclusive just a two-field struct
Not being an enum improves ergonomics, especially since the NonEmpty variant could be empty. It can still be iterated without an extra "done" bit by making the range have !(start <= end), which is even possible without changing the Step trait.

Implements RFC 1980
2017-05-21 01:48:03 -07:00
Oliver Middleton 2f703e4304 Correct some stability versions
These were found by running tidy on stable versions of rust and finding
features stabilised with the wrong version numbers.
2017-05-20 08:38:39 +01:00
Scott McMurray 1f891d11f5 Improve implementation approach comments in [T]::reverse() 2017-05-05 18:54:47 -07:00
Scott McMurray e8fad325fe Make [u8]::reverse() 5x faster
Since LLVM doesn't vectorize the loop for us, do unaligned reads
of a larger type and use LLVM's bswap intrinsic to do the
reversing of the actual bytes.  cfg!-restricted to x86 and
x86_64, as I assume it wouldn't help on things like ARMv5.

Also makes [u16]::reverse() a more modest 1.5x faster by
loading/storing u32 and swapping the u16s with ROT16.

Thank you ptr::*_unaligned for making this easy :)
2017-05-04 20:28:34 -07:00
Henri Sivonen e36f59e1a2 Explain why zero-length slices require a non-null pointer 2017-04-28 12:25:02 +03:00
Ulrik Sverdrup 5d2f270395 slice: Implement .rfind() for slice iterators Iter and IterMut
Just like the forward-direction `find`, implement `rfind` explicitly
2017-04-08 03:45:48 +02:00
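A brief usage sketch of the method added above: `rfind` runs the search from the back of the iterator, so it reports the last matching element:

```rust
fn main() {
    let v = [1, 2, 3, 2, 1];
    // Searching from the back finds the *last* 2, at index 3.
    assert_eq!(v.iter().rposition(|&x| x == 2), Some(3));
    assert_eq!(v.iter().rfind(|&&x| x == 2), Some(&2));
}
```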
Ariel Ben-Yehuda d8b61091f6 Rollup merge of #41065 - jorendorff:slice-rsplit-41020, r=alexcrichton
[T]::rsplit() and rsplit_mut(), #41020
2017-04-05 23:01:13 +00:00
Ariel Ben-Yehuda 9d074473da Rollup merge of #40943 - Amanieu:offset_to, r=alexcrichton
Add ptr::offset_to

This PR adds a method to calculate the signed distance (in number of elements) between two pointers. The resulting value can then be passed to `offset` to get one pointer from the other. This is similar to pointer subtraction in C/C++.

There are 2 special cases:

- If the distance is not a multiple of the element size, then the result is rounded towards zero. (In C/C++ this is UB.)
- ZSTs return `None`, while normal types return `Some(isize)`. This forces the user to handle the ZST case in unsafe code. (C/C++ doesn't have ZSTs.)
2017-04-05 23:01:08 +00:00
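The method described above was an unstable API (later superseded in the standard library by `offset_from`); a rough model of the semantics the PR describes, written here as a free function rather than the real API:

```rust
use std::mem::size_of;

// Model only: element distance between two pointers, rounded toward zero,
// with `None` for zero-sized types where "number of elements" is undefined.
fn element_distance<T>(from: *const T, to: *const T) -> Option<isize> {
    let size = size_of::<T>();
    if size == 0 {
        None
    } else {
        // Integer division rounds toward zero, matching the behaviour above.
        Some((to as isize - from as isize) / size as isize)
    }
}
```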
Jason Orendorff a45fedfa38 simplify implementation of [T]::splitn and friends #41020 2017-04-04 13:40:56 -05:00
Jason Orendorff 2e3f0d8451 add [T]::rsplit() and rsplit_mut() #41020 2017-04-04 13:40:26 -05:00
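A quick usage sketch of the reverse splitter added above: like `split`, but the pieces come out back to front:

```rust
fn main() {
    let v = [1, 2, 0, 3, 4, 0, 5];
    let pieces: Vec<&[i32]> = v.rsplit(|&x| x == 0).collect();
    // Same pieces as `split`, yielded starting from the end of the slice.
    assert_eq!(pieces, [&[5][..], &[3, 4][..], &[1, 2][..]]);
}
```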
Amanieu d'Antras 7b89bd7cca Add ptr::offset_to 2017-04-03 01:36:56 +01:00
bors a9329d3aa3 Auto merge of #40737 - nagisa:safe-slicing-strs, r=BurntSushi
Checked slicing for strings

cc https://github.com/rust-lang/rust/issues/39932
2017-03-31 11:13:20 +00:00
Simonas Kazlauskas 2f0dd63bbe Checked (and unchecked) slicing for strings?
What is this magic‽
2017-03-22 18:43:01 +02:00
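The two entries above add non-panicking slicing for `str`; a quick usage sketch with what is now the stable `str::get`:

```rust
fn main() {
    let s = "héllo";
    // In bounds and on a char boundary: Some(&str).
    assert_eq!(s.get(0..1), Some("h"));
    // Byte index 2 falls inside the two-byte 'é': None instead of a panic.
    assert!(s.get(0..2).is_none());
    // Out of bounds: also None.
    assert!(s.get(0..100).is_none());
}
```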
Stjepan Glavina d6da1d9b46 Various fixes to wording consistency in the docs 2017-03-22 17:19:52 +01:00
Stjepan Glavina a718051f63 Unit test heapsort 2017-03-21 20:46:20 +01:00
Stjepan Glavina f1913e2a30 Implement feature sort_unstable 2017-03-21 20:46:20 +01:00
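A quick usage sketch of the feature implemented above: `sort_unstable` sorts in place without allocating, at the cost of not preserving the relative order of equal elements:

```rust
fn main() {
    let mut v = [5, 4, 1, 3, 2];
    // In place, no allocation, not stable with respect to equal elements.
    v.sort_unstable();
    assert_eq!(v, [1, 2, 3, 4, 5]);
}
```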