
[Enhancement]Replace numpy ascontiguousarray with torch contiguous to speed-up #2604

Merged
merged 5 commits into from
Feb 15, 2023
Conversation

csatsurnh
Collaborator

@csatsurnh csatsurnh commented Feb 15, 2023

Motivation

Original motivation was after MMDetection PR #9533

Several experiments showed that if an ndarray is already contiguous, numpy.transpose followed by torch.Tensor.contiguous performs better; if it is not, numpy.ascontiguousarray combined with numpy.transpose is faster.

Modification

Replace numpy.ascontiguousarray with torch.contiguous in PackSegInputs
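The branching described above can be sketched as follows. This is a minimal illustration of the idea, not the exact code merged into PackSegInputs; the helper name `image_to_chw_tensor` is hypothetical.

```python
import numpy as np
import torch


def image_to_chw_tensor(img: np.ndarray) -> torch.Tensor:
    """Convert an HWC image array to a contiguous CHW tensor.

    Hypothetical helper sketching the PR's strategy: pick the
    contiguity path based on whether the input is already C-contiguous.
    """
    if img.flags.c_contiguous:
        # Contiguous input: numpy.transpose only creates a view, so defer
        # the actual copy to torch's contiguous(), which is faster here.
        return torch.from_numpy(img.transpose(2, 0, 1)).contiguous()
    # Non-contiguous input: let numpy produce the contiguous copy up front.
    return torch.from_numpy(np.ascontiguousarray(img.transpose(2, 0, 1)))
```

Either branch yields the same contiguous CHW tensor; only the point at which the memory copy happens differs.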

@CLAassistant

CLAassistant commented Feb 15, 2023

CLA assistant check
All committers have signed the CLA.

@csatsurnh csatsurnh changed the base branch from 1.x to dev-1.x February 15, 2023 04:03
@codecov

codecov bot commented Feb 15, 2023

Codecov Report

Base: 83.25% // Head: 83.35% // Increases project coverage by +0.09% 🎉

Coverage data is based on head (73cf60d) compared to base (7ac0888).
Patch coverage: 100.00% of modified lines in pull request are covered.

❗ Current head 73cf60d differs from pull request most recent head 53afded. Consider uploading reports for the commit 53afded to get more accurate results

Additional details and impacted files
@@             Coverage Diff             @@
##           dev-1.x    #2604      +/-   ##
===========================================
+ Coverage    83.25%   83.35%   +0.09%     
===========================================
  Files          145      145              
  Lines         8505     8508       +3     
  Branches      1273     1274       +1     
===========================================
+ Hits          7081     7092      +11     
+ Misses        1213     1202      -11     
- Partials       211      214       +3     
Flag Coverage Δ
unittests 83.35% <100.00%> (+0.09%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmseg/datasets/transforms/formatting.py 89.74% <100.00%> (+0.85%) ⬆️
mmseg/datasets/transforms/transforms.py 90.53% <0.00%> (+1.03%) ⬆️



@MeowZheng MeowZheng changed the title Replace numpy ascontiguousarray with torch contiguous to speed-up [Enhancement]Replace numpy ascontiguousarray with torch contiguous to speed-up Feb 15, 2023
@MeowZheng MeowZheng merged commit 2e27f8b into open-mmlab:dev-1.x Feb 15, 2023
@csatsurnh csatsurnh deleted the ascontiguousarray-contiguous branch March 13, 2023 03:25
aravind-h-v pushed a commit to aravind-h-v/mmsegmentation that referenced this pull request Mar 27, 2023
nahidnazifi87 pushed a commit to nahidnazifi87/mmsegmentation_playground that referenced this pull request Apr 5, 2024
… speed-up (open-mmlab#2604)
Co-authored-by: MeowZheng <[email protected]>
3 participants