Pre-story to this question is the following operation within the quantile_function helper for the 1D solvers: link. Transpose is only well-defined for 2D PyTorch tensors: for 1D tensors it is a no-op, and for 3D and higher it is deprecated (and will be removed in a future release). Would it be fair to restrict quantile_function to 1D and 2D tensors (e.g., by raising a ValueError)? In the tests it is never called with wider tensors.
The same question goes for the searchsorted implementation in the backend.py module for backends other than Torch and TF, for example this one. The implementation seems to support only the 1D and 2D cases; should we raise a ValueError if anything else is given?
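For illustration, a NumPy-based sketch of such a searchsorted wrapper with an explicit guard might look like this (the row-by-row emulation and the exact signature are assumptions, not the actual backend.py code):

```python
import numpy as np

def searchsorted_2d(a, v, side="left"):
    # np.searchsorted only accepts a 1D sorted array, so the 2D case is
    # emulated row by row; anything wider is rejected explicitly instead
    # of failing with an unrelated error deeper in NumPy.
    if a.ndim == 1:
        return np.searchsorted(a, v, side=side)
    if a.ndim == 2:
        if a.shape[0] != v.shape[0]:
            raise ValueError("a and v must have the same number of rows")
        return np.array(
            [np.searchsorted(a[i], v[i], side=side) for i in range(a.shape[0])]
        )
    raise ValueError(
        f"searchsorted only supports 1D or 2D inputs, got {a.ndim}D"
    )
```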
Also, maybe it makes sense to define quantile as a backend function with a default implementation, so that a different implementation can be provided for Torch and the ad-hoc conditioning on the backend name can be avoided.
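The pattern could look roughly like this (class and method names are hypothetical; shown with a NumPy default and a Torch override to illustrate the dispatch-by-subclass idea):

```python
import numpy as np

class Backend:
    # Hypothetical sketch of the proposal: `quantile` as a backend method
    # with a generic default, so callers never branch on the backend name.
    def quantile(self, a, q, axis=None):
        return np.quantile(a, q, axis=axis)

class TorchBackend(Backend):
    def quantile(self, a, q, axis=None):
        import torch
        # torch.quantile takes `dim` instead of `axis` and wants q as a
        # tensor of the same dtype as the input.
        return torch.quantile(a, torch.as_tensor(q, dtype=a.dtype), dim=axis)
```

Callers then just invoke `nx.quantile(...)` on whichever backend instance they hold, and the Torch-specific handling stays inside the Torch backend class.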
Happy to make a PR if this is the right course of action.