
Conversation

@mateuszbaran (Member) commented Nov 28, 2025

This is a WIP PR that applies the GCP (generalized Cauchy point) idea to L-BFGS, and possibly some other solvers, so that they can be used in the presence of box constraints on the Euclidean part of the manifold. The constraints are handled similarly to L-BFGS-B (although not all of the Euclidean tricks are applied).

TODO:

  • a full example of working with a box domain
  • code coverage

The implementation was prepared in collaboration with @paprzybysz.
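
As a rough illustration of the underlying idea: the generalized Cauchy point construction starts from a projected-gradient step onto the box. A minimal sketch in plain Julia follows; all names are illustrative and not part of this PR's API.

    # Take a gradient step of length t on the Euclidean part, then project
    # each coordinate back into its box [lower[i], upper[i]].
    function box_projected_step(x, g, t, lower, upper)
        return clamp.(x .- t .* g, lower, upper)
    end

    # Example: the second coordinate hits its upper bound and stays active.
    box_projected_step([0.5, 2.0], [1.0, -3.0], 0.1, [0.0, 0.0], [1.0, 1.0])
    # -> [0.4, 1.0]

In L-BFGS-B the GCP is then found by minimizing a quadratic model along this projected path; per the description above, this PR carries the same idea over to the Euclidean part of the manifold.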

@mateuszbaran added the enhancement and WIP (Work in Progress) labels on Nov 28, 2025

codecov bot commented Nov 28, 2025

Codecov Report

❌ Patch coverage is 98.20628% with 8 lines in your changes missing coverage. Please review.
✅ Project coverage is 99.92%. Comparing base (38cd7de) to head (7bd9c3b).
⚠️ Report is 1 commit behind head on master.

Files with missing lines          Patch %   Missing lines
src/plans/stepsize/stepsize.jl    86.36%    3 ⚠️
src/solvers/quasi_Newton.jl       94.54%    3 ⚠️
src/plans/box_plan.jl             99.26%    2 ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##            master     #554      +/-   ##
===========================================
- Coverage   100.00%   99.92%   -0.08%     
===========================================
  Files           91       92       +1     
  Lines         9975    10401     +426     
===========================================
+ Hits          9975    10393     +418     
- Misses           0        8       +8     

@kellertuer (Member)

The name is still a bit clumsy, but I do not have much of a better idea.
I think, though, that we usually put the change after the noun (cf. StopWhenGradientChangeLess), so could we switch that here as well and write CostChangeLess instead of ChangeCostLess?
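
For context, a criterion with that name would check roughly the following; this is a hypothetical sketch in plain Julia, not Manopt.jl's actual implementation.

    # Stop once the absolute change in cost between iterations drops
    # below a threshold; struct and function names are illustrative.
    mutable struct CostChangeLessSketch
        threshold::Float64
        last_cost::Float64
    end

    function should_stop!(c::CostChangeLessSketch, cost)
        stop = abs(cost - c.last_cost) < c.threshold
        c.last_cost = cost
        return stop
    end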

@mateuszbaran (Member, Author)

Sure, I've changed the name.

@mateuszbaran (Member, Author)

Documenter.jl is failing with a very weird error :/

ERROR: LoadError: AssertionError: normpath(entry.path) == normpath(path)

@kellertuer (Member)

It switched from 12.2 to 12.3, so maybe it is just a caching issue.

@mateuszbaran (Member, Author)

Hm, ARC is now failing with an out-of-memory exception:

A few solver runs: Error During Test at /Users/runner/work/Manopt.jl/Manopt.jl/test/solvers/test_adaptive_regularization_with_cubics.jl:144
  Got exception outside of a @test
  OutOfMemoryError()
  Stacktrace:
    [1] umferror(status::Int32)
      @ SparseArrays.UMFPACK ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/SparseArrays/src/solvers/umfpack.jl:113
    [2] macro expansion
      @ ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/SparseArrays/src/solvers/umfpack.jl:624 [inlined]
    [3] macro expansion
      @ ./lock.jl:376 [inlined]
    [4] umfpack_numeric!(U::SparseArrays.UMFPACK.UmfpackLU{Float64, Int64}; reuse_numeric::Bool, q::Nothing)
      @ SparseArrays.UMFPACK ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/SparseArrays/src/solvers/umfpack.jl:615
    [5] umfpack_numeric!
      @ ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/SparseArrays/src/solvers/umfpack.jl:614 [inlined]
    [6] #lu#9
      @ ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/SparseArrays/src/solvers/umfpack.jl:373 [inlined]
    [7] lu
      @ ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/SparseArrays/src/solvers/umfpack.jl:369 [inlined]
    [8] \(A::Hermitian{Float64, SparseArrays.SparseMatrixCSC{Float64, Int64}}, B::Vector{Float64})
      @ SparseArrays.CHOLMOD ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/SparseArrays/src/solvers/cholmod.jl:1953
    [9] \(A::SparseArrays.SparseMatrixCSC{Float64, Int64}, B::Vector{Float64})
      @ SparseArrays ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/SparseArrays/src/linalg.jl:2023
   [10] min_cubic_Newton!(mp::DefaultManoptProblem{Fiber{ℝ, TangentSpaceType, Grassmann{ℝ, ManifoldsBase.TypeParameter{Tuple{8, 3}}}, Matrix{Float64}}, AdaptiveRegularizationWithCubicsModelObjective{AllocatingEvaluation, ManifoldHessianObjective{AllocatingEvaluation, var"#f#f##34"{Symmetric{Float64, Matrix{Float64}}}, var"#grad_f#grad_f##18"{Symmetric{Float64, Matrix{Float64}}}, var"#Hess_f#Hess_f##4"{Symmetric{Float64, Matrix{Float64}}}, Manopt.var"#544#545"}, Float64}}, ls::LanczosState{Matrix{Float64}, Float64, StopWhenAny{Tuple{StopAfterIteration, StopWhenFirstOrderProgress{Float64}}}, StopAfterIteration, Vector{Matrix{Float64}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Float64}}, k::Int64)
      @ Manopt ~/work/Manopt.jl/Manopt.jl/src/solvers/Lanczos.jl:211
   [11] step_solver!(dmp::DefaultManoptProblem{Fiber{ℝ, TangentSpaceType, Grassmann{ℝ, ManifoldsBase.TypeParameter{Tuple{8, 3}}}, Matrix{Float64}}, AdaptiveRegularizationWithCubicsModelObjective{AllocatingEvaluation, ManifoldHessianObjective{AllocatingEvaluation, var"#f#f##34"{Symmetric{Float64, Matrix{Float64}}}, var"#grad_f#grad_f##18"{Symmetric{Float64, Matrix{Float64}}}, var"#Hess_f#Hess_f##4"{Symmetric{Float64, Matrix{Float64}}}, Manopt.var"#544#545"}, Float64}}, ls::LanczosState{Matrix{Float64}, Float64, StopWhenAny{Tuple{StopAfterIteration, StopWhenFirstOrderProgress{Float64}}}, StopAfterIteration, Vector{Matrix{Float64}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Float64}}, k::Int64)
      @ Manopt ~/work/Manopt.jl/Manopt.jl/src/solvers/Lanczos.jl:180
   [12] solve!(p::DefaultManoptProblem{Fiber{ℝ, TangentSpaceType, Grassmann{ℝ, ManifoldsBase.TypeParameter{Tuple{8, 3}}}, Matrix{Float64}}, AdaptiveRegularizationWithCubicsModelObjective{AllocatingEvaluation, ManifoldHessianObjective{AllocatingEvaluation, var"#f#f##34"{Symmetric{Float64, Matrix{Float64}}}, var"#grad_f#grad_f##18"{Symmetric{Float64, Matrix{Float64}}}, var"#Hess_f#Hess_f##4"{Symmetric{Float64, Matrix{Float64}}}, Manopt.var"#544#545"}, Float64}}, s::LanczosState{Matrix{Float64}, Float64, StopWhenAny{Tuple{StopAfterIteration, StopWhenFirstOrderProgress{Float64}}}, StopAfterIteration, Vector{Matrix{Float64}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Float64}})
      @ Manopt ~/work/Manopt.jl/Manopt.jl/src/solvers/solver.jl:162
   [13] solve_arc_subproblem!
      @ ~/work/Manopt.jl/Manopt.jl/src/solvers/adaptive_regularization_with_cubics.jl:485 [inlined]
   [14] step_solver!(dmp::DefaultManoptProblem{Grassmann{ℝ, ManifoldsBase.TypeParameter{Tuple{8, 3}}}, ManifoldHessianObjective{AllocatingEvaluation, var"#f#f##34"{Symmetric{Float64, Matrix{Float64}}}, var"#grad_f#grad_f##18"{Symmetric{Float64, Matrix{Float64}}}, var"#Hess_f#Hess_f##4"{Symmetric{Float64, Matrix{Float64}}}, Manopt.var"#544#545"}}, arcs::AdaptiveRegularizationState{Matrix{Float64}, Matrix{Float64}, DefaultManoptProblem{Fiber{ℝ, TangentSpaceType, Grassmann{ℝ, ManifoldsBase.TypeParameter{Tuple{8, 3}}}, Matrix{Float64}}, AdaptiveRegularizationWithCubicsModelObjective{AllocatingEvaluation, ManifoldHessianObjective{AllocatingEvaluation, var"#f#f##34"{Symmetric{Float64, Matrix{Float64}}}, var"#grad_f#grad_f##18"{Symmetric{Float64, Matrix{Float64}}}, var"#Hess_f#Hess_f##4"{Symmetric{Float64, Matrix{Float64}}}, Manopt.var"#544#545"}, Float64}}, LanczosState{Matrix{Float64}, Float64, StopWhenAny{Tuple{StopAfterIteration, StopWhenFirstOrderProgress{Float64}}}, StopAfterIteration, Vector{Matrix{Float64}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Float64}}, StopWhenAny{Tuple{StopAfterIteration, StopWhenGradientNormLess{typeof(norm), Float64, Missing}, StopWhenAllLanczosVectorsUsed}}, Float64, PolarRetraction}, k::Int64)
      @ Manopt ~/work/Manopt.jl/Manopt.jl/src/solvers/adaptive_regularization_with_cubics.jl:457
   [15] solve!(p::DefaultManoptProblem{Grassmann{ℝ, ManifoldsBase.TypeParameter{Tuple{8, 3}}}, ManifoldHessianObjective{AllocatingEvaluation, var"#f#f##34"{Symmetric{Float64, Matrix{Float64}}}, var"#grad_f#grad_f##18"{Symmetric{Float64, Matrix{Float64}}}, var"#Hess_f#Hess_f##4"{Symmetric{Float64, Matrix{Float64}}}, Manopt.var"#544#545"}}, s::AdaptiveRegularizationState{Matrix{Float64}, Matrix{Float64}, DefaultManoptProblem{Fiber{ℝ, TangentSpaceType, Grassmann{ℝ, ManifoldsBase.TypeParameter{Tuple{8, 3}}}, Matrix{Float64}}, AdaptiveRegularizationWithCubicsModelObjective{AllocatingEvaluation, ManifoldHessianObjective{AllocatingEvaluation, var"#f#f##34"{Symmetric{Float64, Matrix{Float64}}}, var"#grad_f#grad_f##18"{Symmetric{Float64, Matrix{Float64}}}, var"#Hess_f#Hess_f##4"{Symmetric{Float64, Matrix{Float64}}}, Manopt.var"#544#545"}, Float64}}, LanczosState{Matrix{Float64}, Float64, StopWhenAny{Tuple{StopAfterIteration, StopWhenFirstOrderProgress{Float64}}}, StopAfterIteration, Vector{Matrix{Float64}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Float64}}, StopWhenAny{Tuple{StopAfterIteration, StopWhenGradientNormLess{typeof(norm), Float64, Missing}, StopWhenAllLanczosVectorsUsed}}, Float64, PolarRetraction})
      @ Manopt ~/work/Manopt.jl/Manopt.jl/src/solvers/solver.jl:162
   [16] adaptive_regularization_with_cubics!(M::Grassmann{ℝ, ManifoldsBase.TypeParameter{Tuple{8, 3}}}, mho::ManifoldHessianObjective{AllocatingEvaluation, var"#f#f##34"{Symmetric{Float64, Matrix{Float64}}}, var"#grad_f#grad_f##18"{Symmetric{Float64, Matrix{Float64}}}, var"#Hess_f#Hess_f##4"{Symmetric{Float64, Matrix{Float64}}}, Manopt.var"#544#545"}, p::Matrix{Float64}; debug::Vector{Any}, evaluation::AllocatingEvaluation, initial_tangent_vector::Matrix{Float64}, maxIterLanczos::Int64, objective_type::Symbol, ρ_regularization::Float64, retraction_method::PolarRetraction, σmin::Float64, σ::Float64, η1::Float64, η2::Float64, γ1::Float64, γ2::Float64, θ::Float64, sub_kwargs::@NamedTuple{}, sub_stopping_criterion::StopWhenAny{Tuple{StopAfterIteration, StopWhenFirstOrderProgress{Float64}}}, sub_state::LanczosState{Matrix{Float64}, Float64, StopWhenAny{Tuple{StopAfterIteration, StopWhenFirstOrderProgress{Float64}}}, StopAfterIteration, Vector{Matrix{Float64}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Float64}}, sub_objective::Nothing, sub_problem::Nothing, stopping_criterion::StopWhenAny{Tuple{StopAfterIteration, StopWhenGradientNormLess{typeof(norm), Float64, Missing}, StopWhenAllLanczosVectorsUsed}}, kwargs::@Kwargs{})
      @ Manopt ~/work/Manopt.jl/Manopt.jl/src/solvers/adaptive_regularization_with_cubics.jl:436
   [17] adaptive_regularization_with_cubics!(M::Grassmann{ℝ, ManifoldsBase.TypeParameter{Tuple{8, 3}}}, f::Function, grad_f::Function, Hess_f::var"#Hess_f#Hess_f##4"{Symmetric{Float64, Matrix{Float64}}}, p::Matrix{Float64}; evaluation::AllocatingEvaluation, kwargs::@Kwargs{θ::Float64, σ::Float64, retraction_method::PolarRetraction})
      @ Manopt ~/work/Manopt.jl/Manopt.jl/src/solvers/adaptive_regularization_with_cubics.jl:355
   [18] top-level scope
      @ ~/work/Manopt.jl/Manopt.jl/test/solvers/test_adaptive_regularization_with_cubics.jl:5
   [19] macro expansion
      @ ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/Test/src/Test.jl:1776 [inlined]
   [20] macro expansion
      @ ~/work/Manopt.jl/Manopt.jl/test/solvers/test_adaptive_regularization_with_cubics.jl:145 [inlined]
   [21] macro expansion
      @ ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/Test/src/Test.jl:1776 [inlined]
   [22] macro expansion
      @ ~/work/Manopt.jl/Manopt.jl/test/solvers/test_adaptive_regularization_with_cubics.jl:181 [inlined]
   [23] include(mapexpr::Function, mod::Module, _path::String)
      @ Base ./Base.jl:307
   [24] top-level scope
      @ ~/work/Manopt.jl/Manopt.jl/test/runtests.jl:4
   [25] macro expansion
      @ ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/Test/src/Test.jl:1776 [inlined]
   [26] macro expansion
      @ ~/work/Manopt.jl/Manopt.jl/test/runtests.jl:44 [inlined]
   [27] macro expansion
      @ ~/hostedtoolcache/julia/1.12.3/x64/share/julia/stdlib/v1.12/Test/src/Test.jl:1776 [inlined]
   [28] macro expansion
      @ ~/work/Manopt.jl/Manopt.jl/test/runtests.jl:44 [inlined]
   [29] include(mapexpr::Function, mod::Module, _path::String)
      @ Base ./Base.jl:307
   [30] top-level scope
      @ none:6
   [31] eval(m::Module, e::Any)
      @ Core ./boot.jl:489
   [32] exec_options(opts::Base.JLOptions)
      @ Base ./client.jl:283
   [33] _start()
      @ Base ./client.jl:550

And the Documenter.jl error persists :/.

@kellertuer (Member)

The Documenter error you found ;) I actually asked about that on Slack.

For ARC I have no idea; I also have not yet fully checked how many things you changed here that might have side effects.

@mateuszbaran (Member, Author)

Yes, I've noticed you have a workaround in your PR.

The ARC error disappeared... for now, at least. And I hope I've resolved all unintended side effects of my changes.


Labels

enhancement, WIP (Work in Progress)

Development

Successfully merging this pull request may close the following issue:

Allow specifying a small positive constant as the lower limit of y-s products in QN
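
For background, the linked issue concerns the curvature condition in quasi-Newton updates: the BFGS approximation stays positive definite only if the inner product of the gradient difference y and the step s is positive, so a small positive lower limit can be used to skip degenerate updates. A generic sketch in Julia (not this PR's code; the parameter name is illustrative):

    using LinearAlgebra

    # Accept the quasi-Newton update only if <y, s> exceeds a small
    # positive constant ys_min, guarding against near-zero curvature.
    curvature_ok(y, s; ys_min=1e-10) = dot(y, s) >= ys_min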
