diff --git a/doc/docs/NLopt_Algorithms.md b/doc/docs/NLopt_Algorithms.md
index dc760bcf..149ffc82 100644
--- a/doc/docs/NLopt_Algorithms.md
+++ b/doc/docs/NLopt_Algorithms.md
@@ -309,7 +309,7 @@ At each point **x**, MMA forms a local approximation using the gradient of *f* a
 
 (If you contact [Professor Svanberg](https://people.kth.se/~krille/), he has been willing in the past to graciously provide you with his original code, albeit under restrictions on commercial use or redistribution. The MMA implementation in NLopt, however, is completely independent of Svanberg's, whose code we have not examined; any bugs are my own, of course.)
 
-I also implemented another CCSA algorithm from the same paper, `NLOPT_LD_CCSAQ`: instead of constructing local MMA approximations, it constructs simple quadratic approximations (or rather, affine approximations plus a quadratic penalty term to stay conservative). This is the ccsa_quadratic code. It seems to have similar convergence rates to MMA for most problems, which is not surprising as they are both essentially similar. However, for the quadratic variant I implemented the possibility of [preconditioning](NLopt_Reference.md#preconditioning-with-approximate-hessians): including a user-supplied Hessian approximation in the local model. It is easy to incorporate this into the proof in Svanberg's paper, and to show that global convergence is still guaranteed as long as the user's "Hessian" is positive semidefinite, and it practice it can greatly improve convergence if the preconditioner is a good approximation for the real Hessian (at least for the eigenvectors of the largest eigenvalues).
+I also implemented another CCSA algorithm from the same paper, `NLOPT_LD_CCSAQ`: instead of constructing local MMA approximations, it constructs simple quadratic approximations (or rather, affine approximations plus a quadratic penalty term to stay conservative). This is the ccsa_quadratic code. It seems to have similar convergence rates to MMA for most problems, which is not surprising as they are both essentially similar. However, for the quadratic variant I implemented the possibility of [preconditioning](NLopt_Reference.md#preconditioning-with-approximate-hessians): including a user-supplied Hessian approximation in the local model. It is easy to incorporate this into the proof in Svanberg's paper, and to show that global convergence is still guaranteed as long as the user's "Hessian" is positive semidefinite, and in practice it can greatly improve convergence if the preconditioner is a good approximation for the real Hessian (at least for the eigenvectors of the largest eigenvalues).
 
 The `NLOPT_LD_MMA` and `NLOPT_LD_CCSAQ` algorithms support the following internal parameters, which can be specified using the [`nlopt_set_param` API](NLopt_Reference#algorithm-specific-parameters):
 
diff --git a/doc/docs/NLopt_Reference.md b/doc/docs/NLopt_Reference.md
index d7204978..238b1487 100644
--- a/doc/docs/NLopt_Reference.md
+++ b/doc/docs/NLopt_Reference.md
@@ -243,7 +243,7 @@ nlopt_result nlopt_set_x_weights1(nlopt_opt opt, const double w);
 nlopt_result nlopt_get_x_weights(const nlopt_opt opt, double *w);
 ```
 
-Set/get the weights used when the computing L₁ norm for the `xtol_rel` stopping criterion above, where `*w` must point to an array of length equal to the number of optimization parameters in `opt`. `nlopt_set_x_weights1` can be used to set all of the weights to the same value `w`. Also passing NULL to `nlopt_set_xtol_abs` allows to unset all the weights. The weights default to `1`, but non-constant weights can be used to handle situations where the different parameters `x` have different units or importance, for example.
+Set/get the weights used when computing the L₁ norm for the `xtol_rel` stopping criterion above, where `*w` must point to an array of length equal to the number of optimization parameters in `opt`. `nlopt_set_x_weights1` can be used to set all of the weights to the same value `w`. Passing NULL to `nlopt_set_x_weights` unsets all the weights. The weights default to `1`, but non-constant weights can be used to handle situations where the different parameters `x` have different units or importance, for example.
 
 ```c
 nlopt_result nlopt_set_xtol_abs(nlopt_opt opt, const double *tol);
@@ -414,7 +414,7 @@ NLOPT_FORCED_STOP = -5
 Halted because of a [forced termination](#forced-termination): the user called `nlopt_force_stop(opt)` on the optimization’s `nlopt_opt` object `opt` from the user’s objective function or constraints.
 
-An string with further details about the error is available through `nlopt_get_errmsg` if an error is set:
+A string with further details about the error is available through `nlopt_get_errmsg` if an error is set:
 
 ```c
 const char * nlopt_get_errmsg(nlopt_opt opt);
 ```
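For readers of the CCSAQ paragraph touched in the first hunk, here is a minimal sketch of how a user-supplied Hessian approximation can be attached through NLopt's preconditioning hook, `nlopt_set_precond_min_objective`. The toy quadratic objective, the coefficient array `c`, and the diagonal "Hessian" are invented for illustration; only the NLopt calls themselves are the library's API.

```c
#include <nlopt.h>

/* Illustrative objective: f(x) = sum_i c_i * x_i^2, with gradient. */
static double myfunc(unsigned n, const double *x, double *grad, void *data)
{
    const double *c = (const double *) data;
    double f = 0.0;
    for (unsigned i = 0; i < n; ++i) {
        if (grad) grad[i] = 2.0 * c[i] * x[i];
        f += c[i] * x[i] * x[i];
    }
    return f;
}

/* Preconditioner: compute vpre = H(x) v for an approximate Hessian H.
   Here H = diag(2*c), exact for the toy objective above; CCSAQ's
   convergence guarantee only needs H to be positive semidefinite. */
static void mypre(unsigned n, const double *x, const double *v,
                  double *vpre, void *data)
{
    const double *c = (const double *) data;
    (void) x; /* H happens to be constant for this objective */
    for (unsigned i = 0; i < n; ++i)
        vpre[i] = 2.0 * c[i] * v[i];
}

int main(void)
{
    double c[2] = {1.0, 1000.0};  /* badly scaled coefficients */
    double x[2] = {1.0, 1.0};     /* starting point */
    double minf;
    nlopt_opt opt = nlopt_create(NLOPT_LD_CCSAQ, 2);
    nlopt_set_precond_min_objective(opt, myfunc, mypre, c);
    nlopt_set_xtol_rel(opt, 1e-8);
    nlopt_optimize(opt, x, &minf);
    nlopt_destroy(opt);
    return 0;
}
```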
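As a usage sketch for the `nlopt_set_x_weights` passage corrected in the second hunk (the weight values and the two-parameter setup are illustrative assumptions, not from the docs):

```c
#include <nlopt.h>

/* Sketch: weight the xtol_rel test when parameters live on very
   different scales, e.g. x[0] of order 1 and x[1] of order 1000. */
void configure_xtol(nlopt_opt opt)
{
    double w[2] = {1.0, 1e-3};   /* illustrative per-parameter weights */
    nlopt_set_x_weights(opt, w); /* weighted L1 norm in the xtol_rel test */
    nlopt_set_xtol_rel(opt, 1e-6);
    /* nlopt_set_x_weights(opt, NULL) would unset the weights again, and
       nlopt_set_x_weights1(opt, 1.0) restores the uniform default. */
}
```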
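And a small sketch of the `nlopt_get_errmsg` usage described in the last hunk; the wrapper function is made up, and it assumes `nlopt_get_errmsg` returns NULL when no message is set, as the "if an error is set" wording suggests:

```c
#include <stdio.h>
#include <nlopt.h>

/* Sketch: surface NLopt's error detail after a failed run. */
void report_failure(nlopt_opt opt, nlopt_result res)
{
    if (res < 0) { /* negative nlopt_result codes indicate errors */
        const char *msg = nlopt_get_errmsg(opt); /* NULL if none is set */
        fprintf(stderr, "nlopt failed (code %d): %s\n",
                (int) res, msg ? msg : "no further details");
    }
}
```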