[feature] Optimize does not work on Colorant{T,1} #12

Open
kunzaatko opened this issue Apr 3, 2022 · 3 comments

kunzaatko commented Apr 3, 2022

When fitting an image of a bead, the image is most commonly represented as an array of colorants, and for this use case in particular most likely some subtype of AbstractGray. But when trying to fit a Gray array, it throws:

ERROR: MethodError: no method matching DiffResults.DiffResult(::Gray{Float64}, ::Vector{Float64})
Closest candidates are:
DiffResults.DiffResult(::Union{Number, AbstractArray}, ::Union{Number, AbstractArray}...) at ~/.julia/packages/DiffResults/wASAy/src/DiffResults.jl:52
Stacktrace:
[1] (::NLSolversBase.var"#14#18"{Gray{Float64}, PSFModels.var"#_loss#42"{NamedTuple{(), Tuple{}}, typeof(abs2), Float64, typeof(gaussian), Matrix{Gray{Float64}}, Tuple{Int64, Int64}, Tuple{Int64, Int64}, CartesianIndices{2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, NTuple{4, Symbol}}, ForwardDiff.GradientConfig{ForwardDiff.Tag{PSFModels.var"#_loss#42"{NamedTuple{(), Tuple{}}, typeof(abs2), Float64, typeof(gaussian), Matrix{Gray{Float64}}, Tuple{Int64, Int64}, Tuple{Int64, Int64}, CartesianIndices{2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, NTuple{4, Symbol}}, Gray{Float64}}, Gray{Float64}, 5, Vector{ForwardDiff.Dual{ForwardDiff.Tag{PSFModels.var"#_loss#42"{NamedTuple{(), Tuple{}}, typeof(abs2), Float64, typeof(gaussian), Matrix{Gray{Float64}}, Tuple{Int64, Int64}, Tuple{Int64, Int64}, CartesianIndices{2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, NTuple{4, Symbol}}, Gray{Float64}}, Gray{Float64}, 5}}}})(out::Vector{Float64}, x::Vector{Gray{Float64}})
@ NLSolversBase ~/.julia/packages/NLSolversBase/cfJrN/src/objective_types/oncedifferentiable.jl:69
[2] value_gradient!!(obj::NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Gray{Float64}}}, x::Vector{Gray{Float64}})
@ NLSolversBase ~/.julia/packages/NLSolversBase/cfJrN/src/interface.jl:82
[3] initial_state(method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, options::Optim.Options{Float64, Nothing}, d::NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Gray{Float64}}}, initial_x::Vector{Gray{Float64}})
@ Optim ~/.julia/packages/Optim/wFOeG/src/multivariate/solvers/first_order/l_bfgs.jl:164
[4] optimize(d::NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Gray{Float64}}}, initial_x::Vector{Gray{Float64}}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, options::Optim.Options{Float64, Nothing})
@ Optim ~/.julia/packages/Optim/wFOeG/src/multivariate/optimize/optimize.jl:36
[5] optimize(f::Function, initial_x::Vector{Gray{Float64}}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, options::Optim.Options{Float64, Nothing}; inplace::Bool, autodiff::Symbol)
@ Optim ~/.julia/packages/Optim/wFOeG/src/multivariate/optimize/interface.jl:142
[6] fit(model::typeof(gaussian), params::NamedTuple{(:x, :y, :fwhm, :amp), Tuple{Float64, Float64, Tuple{Int64, Int64}, Int64}}, image::Matrix{Gray{Float64}}, inds::Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}; func_kwargs::NamedTuple{(), Tuple{}}, loss::typeof(abs2), alg::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, maxfwhm::Float64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ PSFModels ~/.julia/packages/PSFModels/G3WuK/src/fitting.jl:99
[7] fit (repeats 2 times)
@ ~/.julia/packages/PSFModels/G3WuK/src/fitting.jl:76 [inlined]
[8] top-level scope
@ REPL[167]:1
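
For reference, a minimal reproduction looks something like this (a hypothetical stand-in for the real bead image; gaussian is the PSFModels model being fit):

using PSFModels: fit, gaussian
using Colors # provides Gray

bead_view = Gray{Float64}.(rand(16, 16)) # stand-in for the real bead image
fit(gaussian, (x=7.5, y=7.5, fwhm=(3, 3), amp=1), bead_view) # throws the MethodError above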

Could there be a definition such as:

function fit(..., image::AbstractArray{T}, ...) where {T<:AbstractGray}
    fit(..., Real.(image), ...)
end

Do you think it is a good idea?
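
Concretely, that could look like the sketch below (assuming fit's positional arguments are (model, params, image, inds), as in the stack trace above):

using ColorTypes: AbstractGray, gray

function fit(model, params, image::AbstractArray{T}, inds; kwargs...) where {T<:AbstractGray}
    # gray.(image) extracts the numeric channel from each pixel, so the
    # call re-dispatches to the existing method for Real eltypes
    return fit(model, params, gray.(image), inds; kwargs...)
end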

mileslucas (Member) commented Apr 4, 2022

This may just be a limitation of ForwardDiff.jl.

Could you try

fit(...; autodiff=:finite)

which should be passed through to Optim.optimize?

My hesitation with defining that method is that I would then have to depend on a color-types package just for the single method dispatch, when it seems simpler for the end-user to convert themselves.
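
For example, something like (hypothetical variable names):

img_real = Float64.(bead_view) # Matrix{Gray{Float64}} -> Matrix{Float64}
PSFModels.fit(gaussian, (x = 7.5, y = 7.5), img_real)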

kunzaatko (Author)

Yeah... the same problem happens with that:

PSFModels.fit(gaussian, (x = 7.5, y = 7.5), bead_view; autodiff=:finite)

ERROR: MethodError: no method matching default_relstep(::Val{:central}, ::Type{Gray{Float64}})
Closest candidates are:
default_relstep(::Type, ::Any) at ~/.julia/packages/FiniteDiff/OGdW5/src/epsilons.jl:25
default_relstep(::Val{fdtype}, ::Type{T}) where {fdtype, T<:Number} at ~/.julia/packages/FiniteDiff/OGdW5/src/epsilons.jl:26
Stacktrace:
[1] finite_difference_gradient!(df::Vector{Float64}, f::Function, x::Vector{Gray{Float64}}, cache::FiniteDiff.GradientCache{Nothing, Nothing, Nothing, Vector{Gray{Float64}}, Val{:central}(), Float64, Val{true}()})
@ FiniteDiff ~/.julia/packages/FiniteDiff/OGdW5/src/gradients.jl:138
[2] (::NLSolversBase.var"#g!#15"{PSFModels.var"#_loss#42"{NamedTuple{(), Tuple{}}, typeof(abs2), Float64, typeof(gaussian), Matrix{Gray{Float64}}, Tuple{Int64, Int64}, Tuple{Int64, Int64}, CartesianIndices{2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, Tuple{Symbol, Symbol}}, FiniteDiff.GradientCache{Nothing, Nothing, Nothing, Vector{Gray{Float64}}, Val{:central}(), Float64, Val{true}()}})(storage::Vector{Float64}, x::Vector{Gray{Float64}})
@ NLSolversBase ~/.julia/packages/NLSolversBase/cfJrN/src/objective_types/oncedifferentiable.jl:57
[3] (::NLSolversBase.var"#fg!#16"{PSFModels.var"#_loss#42"{NamedTuple{(), Tuple{}}, typeof(abs2), Float64, typeof(gaussian), Matrix{Gray{Float64}}, Tuple{Int64, Int64}, Tuple{Int64, Int64}, CartesianIndices{2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, Tuple{Symbol, Symbol}}})(storage::Vector{Float64}, x::Vector{Gray{Float64}})
@ NLSolversBase ~/.julia/packages/NLSolversBase/cfJrN/src/objective_types/oncedifferentiable.jl:61
[4] value_gradient!!(obj::NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Gray{Float64}}}, x::Vector{Gray{Float64}})
@ NLSolversBase ~/.julia/packages/NLSolversBase/cfJrN/src/interface.jl:82
[5] initial_state(method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, options::Optim.Options{Float64, Nothing}, d::NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Gray{Float64}}}, initial_x::Vector{Gray{Float64}})
@ Optim ~/.julia/packages/Optim/wFOeG/src/multivariate/solvers/first_order/l_bfgs.jl:164
[6] optimize(d::NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Gray{Float64}}}, initial_x::Vector{Gray{Float64}}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, options::Optim.Options{Float64, Nothing})
@ Optim ~/.julia/packages/Optim/wFOeG/src/multivariate/optimize/optimize.jl:36
[7] optimize(f::Function, initial_x::Vector{Gray{Float64}}, method::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, options::Optim.Options{Float64, Nothing}; inplace::Bool, autodiff::Symbol)
@ Optim ~/.julia/packages/Optim/wFOeG/src/multivariate/optimize/interface.jl:142
[8] fit(model::typeof(gaussian), params::NamedTuple{(:x, :y), Tuple{Float64, Float64}}, image::Matrix{Gray{Float64}}, inds::Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}; func_kwargs::NamedTuple{(), Tuple{}}, loss::typeof(abs2), alg::Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Optim.var"#19#21"}, maxfwhm::Float64, kwargs::Base.Pairs{Symbol, Symbol, Tuple{Symbol}, NamedTuple{(:autodiff,), Tuple{Symbol}}})
@ PSFModels ~/.julia/packages/PSFModels/G3WuK/src/fitting.jl:99
[9] top-level scope
@ REPL[28]:1

kunzaatko (Author) commented Apr 6, 2022

Maybe you do not have to pull in the dependency at all: if the eltype is anything not <: Real, it could just be blindly converted. Like so:

function fit(..., image::AbstractArray{T}, ...) where {T<:Real}
    # standard implementation
end

fit(..., image::AbstractArray{T}, ...) where {T} = fit(..., Real.(image), ...)

Then multiple dispatch will select the first method when it is more specific, and the second for any other eltype...
But I do not know whether any types other than numeric ones are supported by the autodiff packages.
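
For illustration, a concrete version of that fallback might be the following sketch (it assumes fit's positional arguments are (model, params, image, inds), and that the eltype's own package, e.g. ColorTypes for Gray, provides the Real constructor used by Real.(image)):

function fit(model, params, image::AbstractArray{T}, inds; kwargs...) where {T<:Real}
    # the existing implementation stays here unchanged
end

# Fallback for any other eltype: convert blindly and re-dispatch.
# PSFModels needs no new dependency, since the conversion method comes
# from whatever package defines the element type.
function fit(model, params, image::AbstractArray, inds; kwargs...)
    return fit(model, params, Real.(image), inds; kwargs...)
end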
