
Add Iterator::array_chunks method #87776

Closed · wants to merge 8 commits

Conversation

@johnschug (Author)

This adds an array_chunks (and array_rchunks) method to the iterator trait similar to the method with the same name on slices. Issue #81615 seems to indicate this would be useful.
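For illustration, a sketch of how the proposed adapter might be used (assuming the iter_array_chunks feature gate used in this PR; the exact signature could differ from what finally lands):

```rust
#![feature(iter_array_chunks)]

fn main() {
    let mut chunks = (0..7).array_chunks::<3>();
    assert_eq!(chunks.next(), Some([0, 1, 2]));
    assert_eq!(chunks.next(), Some([3, 4, 5]));
    // One element (6) is left over -- not enough for a full chunk.
    assert_eq!(chunks.next(), None);
}
```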

@rust-highfive (Collaborator)

Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @joshtriplett (or someone else) soon.

Please see the contribution instructions for more information.

rust-highfive added the S-waiting-on-review label · Aug 4, 2021
library/core/src/iter/adapters/array_chunks.rs (outdated)
self.init = 0;
// SAFETY: This is safe: `MaybeUninit<T>` is guaranteed to have the same layout
// as `T` and the entire array has just been initialized.
unsafe { Some(mem::transmute_copy(&self.buffer)) }
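For context outside the diff, the pattern under review looks roughly like this self-contained sketch (collect_pair, the u32 item type, and the fixed chunk size are illustrative, not the PR's actual code):

```rust
use std::mem::{self, MaybeUninit};

// Fill a MaybeUninit buffer element by element, then transmute_copy it
// out as a fully initialized array.
fn collect_pair(iter: &mut impl Iterator<Item = u32>) -> Option<[u32; 2]> {
    let mut buffer = [MaybeUninit::<u32>::uninit(); 2];
    for slot in &mut buffer {
        // For Copy items nothing needs dropping on the early return;
        // the real adapter tracks initialization with a guard instead.
        slot.write(iter.next()?);
    }
    // SAFETY: `MaybeUninit<T>` has the same layout as `T`, and both
    // elements of `buffer` were just initialized.
    unsafe { Some(mem::transmute_copy(&buffer)) }
}
```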
Member

Reusing memory (going through self) and then transmute-copying it to return it seems like it would optimize badly. Benchmarks exercising this adapter together with a for-in loop as baseline would help.

@johnschug (Author)

I have added some benchmarks comparing it against a loop over a mapped iterator producing arrays, and against the ArrayChunks iterator for slices.

test iter::bench_array_map                                      ... bench:     437,783 ns/iter (+/- 13,664)
test iter::bench_iter_array_chunks                              ... bench:   2,684,281 ns/iter (+/- 206,116)
test iter::bench_iter_array_chunks_fold                         ... bench:   2,217,095 ns/iter (+/- 127,330)
test iter::bench_slice_array_chunks                             ... bench:     335,405 ns/iter (+/- 99,203)


@johnschug (Author)

I managed to improve the performance by quite a bit. Now it only takes twice as long as passing an array through an iterator pipeline. I also removed the transmute_copy (slight increase in performance).

test iter::bench_array_map                                      ... bench:     412,536 ns/iter (+/- 19,784)
test iter::bench_iter_array_chunks_fold                         ... bench:     856,663 ns/iter (+/- 67,238)
test iter::bench_iter_array_chunks_loop                         ... bench:   1,678,215 ns/iter (+/- 78,573)
test iter::bench_iter_array_chunks_ref_fold                     ... bench:   1,511,018 ns/iter (+/- 86,835)
test iter::bench_slice_array_chunks                             ... bench:     337,726 ns/iter (+/- 83,062)

johnschug changed the title from “Add Iterator::{array_chunks, array_rchunks} methods” to “Add Iterator::array_chunks method” · Aug 5, 2021
/// [`array_rchunks`]: Iterator::array_rchunks
#[unstable(feature = "iter_array_chunks", issue = "none")]
#[derive(Debug)]
pub struct ArrayRChunks<I: DoubleEndedIterator, const N: usize> {


Is array_rchunks actually redundant? Seems to me it's distinct from both .rev().array_chunks() (because the elements within the chunks are in the original order) and .array_chunks().rev() (when .remainder().len() != 0).
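To make the three orderings concrete, here is a sketch using the stable slice-chunking methods as stand-ins for the iterator adapters (expected remainders noted in comments):

```rust
fn main() {
    let v = [1, 2, 3, 4, 5];

    // array_rchunks: chunks taken from the back, elements in original order.
    let rchunks: Vec<Vec<i32>> = v.rchunks_exact(2).map(|c| c.to_vec()).collect();
    assert_eq!(rchunks, [vec![4, 5], vec![2, 3]]); // remainder: [1]

    // .rev().array_chunks(): elements within each chunk come out reversed.
    let rev: Vec<i32> = v.iter().rev().copied().collect();
    let rev_chunks: Vec<Vec<i32>> = rev.chunks_exact(2).map(|c| c.to_vec()).collect();
    assert_eq!(rev_chunks, [vec![5, 4], vec![3, 2]]); // remainder: [1]

    // .array_chunks().rev(): chunks stay front-aligned; only their order flips.
    let chunks_rev: Vec<Vec<i32>> = v.chunks_exact(2).rev().map(|c| c.to_vec()).collect();
    assert_eq!(chunks_rev, [vec![3, 4], vec![1, 2]]); // remainder: [5]
}
```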

@johnschug (Author) · Aug 6, 2021

It's mostly redundant; with the exception of the remainder, it's equivalent to:
.rev().array_chunks().map(|mut arr| { arr.reverse(); arr })

If it still seems useful, I can re-include it. Also, ArrayChunks doesn't currently implement DoubleEndedIterator; it could, but it might produce surprising results if advanced from both directions. If it were implemented, though, I think it could produce the same results as ArrayRChunks, including the remainder.
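Continuing with the slice stand-ins from the sketch above, reversing each chunk of the reversed input does recover the array_rchunks ordering:

```rust
fn main() {
    let v = [1, 2, 3, 4, 5];
    let rev: Vec<i32> = v.iter().rev().copied().collect();
    let fixed: Vec<Vec<i32>> = rev
        .chunks_exact(2)
        .map(|chunk| {
            let mut arr = chunk.to_vec();
            arr.reverse();
            arr
        })
        .collect();
    // Matches the array_rchunks ordering: chunks from the back,
    // elements within each chunk in original order.
    assert_eq!(fixed, [vec![4, 5], vec![2, 3]]); // remainder: [1]
}
```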

@johnschug (Author)

I have now added a DoubleEndedIterator implementation that seems to produce the same results as array_rchunks would. It's not quite as efficient as it could be, though, to keep complexity down and share more code with the forward implementation.

@the8472 (Member) left a comment

This is starting to look like it contains half of an implementation of the fabled ArrayVec, and as the FIXME notes, it also overlaps with some of the array code.

[Two resolved review threads on library/core/src/iter/adapters/array_chunks.rs (outdated)]
@johnschug (Author)

I have made ArrayChunks unconditionally fused now, and added a DoubleEndedIterator implementation (with benchmarks) to replace ArrayRChunks. I have also unified the Guard implementation with the one in array.

@johnschug (Author)

> This is starting to look like it contains half of an implementation of the fabled ArrayVec, and as the FIXME notes, it also overlaps with some of the array code.

Yeah, ArrayVec would definitely make the implementation a lot simpler, but I think it would make it slower too. For whatever reason, using an array nested inside a struct with a Drop implementation seems to optimize really poorly. That was the major difference between the first and second set of benchmarks (removing the transmute_copy also shaved about 100μs off).
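For reference, a minimal sketch of the guard pattern under discussion, modeled loosely on the helper in core::array (the names Guard and fill are illustrative): the guard only borrows the buffer, so the array itself is never wrapped in a type with a Drop implementation.

```rust
use std::mem::{self, MaybeUninit};

// The guard borrows the buffer and records how many leading elements
// are initialized; on unwind it drops exactly those elements.
struct Guard<'a, T, const N: usize> {
    buffer: &'a mut [MaybeUninit<T>; N],
    init: usize,
}

impl<T, const N: usize> Drop for Guard<'_, T, N> {
    fn drop(&mut self) {
        for elem in &mut self.buffer[..self.init] {
            // SAFETY: exactly `self.init` leading elements were initialized.
            unsafe { elem.assume_init_drop() };
        }
    }
}

fn fill<T, const N: usize>(mut produce: impl FnMut() -> T) -> [T; N] {
    // An array of uninitialized MaybeUninit slots is itself a valid value.
    let mut buffer: [MaybeUninit<T>; N] = unsafe { MaybeUninit::uninit().assume_init() };
    let mut guard = Guard { buffer: &mut buffer, init: 0 };
    while guard.init < N {
        guard.buffer[guard.init].write(produce());
        guard.init += 1;
    }
    // Fully initialized: defuse the guard, then read the array out.
    mem::forget(guard);
    // SAFETY: all N elements were initialized above, and
    // `MaybeUninit<T>` has the same layout as `T`.
    unsafe { mem::transmute_copy(&buffer) }
}

fn main() {
    let mut n = 0u32;
    let arr: [u32; 4] = fill(|| { n += 1; n });
    assert_eq!(arr, [1, 2, 3, 4]);
}
```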

jackh726 added the T-libs-api label · Aug 22, 2021
@johnschug (Author)

Are there any other changes this needs? Should I open a tracking issue for this?

JohnCSimon added and removed the S-waiting-on-review label · Sep 13, 2021
@JohnCSimon (Member) · Oct 3, 2021

> Are there any other changes this needs? Should I open a tracking issue for this?

@joshtriplett

@bors (Contributor) · Oct 9, 2021

☔ The latest upstream changes (presumably #89703) made this pull request unmergeable. Please resolve the merge conflicts.

johnschug closed this · Oct 15, 2021
@the8472 (Member) · Oct 30, 2021

Was this closed due to issues with the code or just lack of time? That would be relevant if someone wants to pick it up.

@JohnCSimon (Member)

> Was this closed due to issues with the code or just lack of time? That would be relevant if someone wants to pick it up.

@johnschug ?

@johnschug (Author)

@the8472 It was a bit of both. I think someone else is working on a partial implementation of ArrayVec for core and on consolidating the existing wrappers, although I can't seem to find the post again. This PR overlaps quite a bit with that work, and with such an implementation this function would be trivial to add (without unsafe code).

Labels
S-waiting-on-review · T-libs-api
8 participants