/*************************************************************************
CLASSIC LEVENBERG-MARQUARDT METHOD FOR NON-LINEAR OPTIMIZATION
DESCRIPTION:
This function is used to find the minimum of a function represented as a
sum of squares:
F(x) = f[0]^2(x[0],...,x[n-1]) + ... + f[m-1]^2(x[0],...,x[n-1])
using the value of F(), the function vector f[] and the Jacobian of f[].
The classic Levenberg-Marquardt method is used.
REQUIREMENTS:
This algorithm will request the following information during its operation:
* function value F at given point X
* function vector f[] and Jacobian of f[] (simultaneously) at given point
There are several overloaded versions of MinLMOptimize() function which
correspond to different LM-like optimization algorithms provided by this
unit. You should choose the version which accepts func() and jac() function
pointers. The first pointer is used to calculate F at a given point, the
second one calculates f[] and the Jacobian df[i]/dx[j].
You can try to initialize the MinLMState structure with the FJ function and
then use an incorrect version of MinLMOptimize() (for example, a version
which works with a general form function and does not provide the Jacobian),
but this will lead to an exception being thrown after the first attempt to
calculate the Jacobian.
USAGE:
1. User initializes algorithm state with MinLMCreateFJ() call
2. User tunes solver parameters with MinLMSetCond(), MinLMSetStpMax() and
other functions
3. User calls MinLMOptimize() function which takes algorithm state and
pointers (delegates, etc.) to callback functions.
4. User calls MinLMResults() to get solution
5. Optionally, user may call MinLMRestartFrom() to solve another problem
with same N/M but another starting point and/or another function.
MinLMRestartFrom() allows you to reuse an already initialized structure.
INPUT PARAMETERS:
N - dimension, N>=1
* if given, only leading N elements of X are used
* if not given, automatically determined from size of X
M - number of functions f[i]
X - initial solution, array[0..N-1]
OUTPUT PARAMETERS:
State - structure which stores algorithm state
See also MinLMIteration, MinLMResults.
NOTES:
1. you may tune stopping conditions with MinLMSetCond() function
2. if the target function contains exp() or other fast-growing functions,
and the optimization algorithm makes steps so large that they overflow,
use the MinLMSetStpMax() function to bound the algorithm's steps.
-- ALGLIB --
Copyright 30.03.2009 by Bochkanov Sergey
*************************************************************************/
public static void minlmcreatefj(int n,
int m,
double[] x,
minlmstate state)
{
int i_ = 0;
ap.assert(n>=1, "MinLMCreateFJ: N<1!");
ap.assert(m>=1, "MinLMCreateFJ: M<1!");
ap.assert(ap.len(x)>=n, "MinLMCreateFJ: Length(X)<N!");
ap.assert(apserv.isfinitevector(x, n), "MinLMCreateFJ: X contains infinite or NaN values!");
//
// prepare internal structures
//
lmprepare(n, m, true, state);
//
// initialize, check parameters
//
minlmsetcond(state, 0, 0, 0, 0);
minlmsetxrep(state, false);
minlmsetstpmax(state, 0);
state.n = n;
state.m = m;
state.flags = 0;
state.usermode = lmmodefj;
state.wrongparams = false;
if( n<1 )
{
state.wrongparams = true;
return;
}
for(i_=0; i_<=n-1;i_++)
{
state.x[i_] = x[i_];
}
//......... part of the code omitted here .........
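The sum-of-squares form above fixes how F and its gradient relate to the residual vector f[] and its Jacobian: F = f·f and grad F = 2·Jᵀf. A minimal numpy sketch (Python used here for brevity; `sum_of_squares` is an illustrative helper, not part of the ALGLIB API):

```python
import numpy as np

def sum_of_squares(f, jac, x):
    """Evaluate F(x) = sum_i f_i(x)^2 and its gradient 2*J^T*f.

    f(x) returns the m residuals, jac(x) the (m, n) Jacobian.
    """
    fi = np.asarray(f(x), dtype=float)
    J = np.asarray(jac(x), dtype=float)
    F = float(fi @ fi)          # sum of squared residuals
    g = 2.0 * J.T @ fi          # gradient of F
    return F, g
```

For residuals f = (x0-1, x1-2) at the origin this gives F = 5 and gradient (-2, -4), matching the analytic formulas.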
/*************************************************************************
This is obsolete function.
Since ALGLIB 3.3 it is equivalent to MinLMCreateFJ().
-- ALGLIB --
Copyright 30.03.2009 by Bochkanov Sergey
*************************************************************************/
public static void minlmcreatefgj(int n,
int m,
double[] x,
minlmstate state)
{
minlmcreatefj(n, m, x, state);
}
/*************************************************************************
This subroutine turns on verification of the user-supplied analytic
gradient:
* user calls this subroutine before optimization begins
* MinLMOptimize() is called
* prior to actual optimization, for each function Fi and each component
X[j] of the parameters being optimized, the algorithm performs the following steps:
* two trial steps are made to X[j]-TestStep*S[j] and X[j]+TestStep*S[j],
where X[j] is j-th parameter and S[j] is a scale of j-th parameter
* if needed, steps are bounded with respect to constraints on X[]
* Fi(X) is evaluated at these trial points
* we perform one more evaluation in the middle point of the interval
* we build cubic model using function values and derivatives at trial
points and we compare its prediction with actual value in the middle
point
* in case difference between prediction and actual value is higher than
some predetermined threshold, algorithm stops with completion code -7;
Rep.VarIdx is set to index of the parameter with incorrect derivative,
Rep.FuncIdx is set to index of the function.
* after verification is over, algorithm proceeds to the actual optimization.
NOTE 1: verification needs N (parameters count) Jacobian evaluations. It
is very costly and you should use it only for low dimensional
problems, when you want to be sure that you've correctly
calculated analytic derivatives. You should not use it in the
production code (unless you want to check derivatives provided
by some third party).
NOTE 2: you should carefully choose TestStep. Value which is too large
(so large that function behaviour is significantly non-cubic) will
lead to false alarms. You may use different step for different
parameters by means of setting scale with MinLMSetScale().
NOTE 3: this function may lead to false positives. In case it reports that
I-th derivative was calculated incorrectly, you may decrease test
step and try one more time - maybe your function changes too
sharply and your step is too large for such a rapidly changing
function.
INPUT PARAMETERS:
State - structure used to store algorithm state
TestStep - verification step:
* TestStep=0 turns verification off
* TestStep>0 activates verification
-- ALGLIB --
Copyright 15.06.2012 by Bochkanov Sergey
*************************************************************************/
public static void minlmsetgradientcheck(minlmstate state,
double teststep)
{
alglib.ap.assert(math.isfinite(teststep), "MinLMSetGradientCheck: TestStep contains NaN or Infinite");
alglib.ap.assert((double)(teststep)>=(double)(0), "MinLMSetGradientCheck: invalid argument TestStep(TestStep<0)");
state.teststep = teststep;
}
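The cubic-model test described above can be sketched in a few lines: evaluate f and the user-supplied derivative at the two trial points, build a cubic Hermite interpolant, and compare its midpoint prediction with the actual value. A hypothetical Python sketch of the idea (not ALGLIB's internal routine):

```python
def check_derivative(f, df, x, test_step):
    """Relative mismatch between a cubic-model midpoint prediction and f(x).

    f: function of one variable; df: the derivative under test.
    The Hermite cubic built from (f, df) at x-h and x+h predicts the
    midpoint value as (f(a)+f(b))/2 + (b-a)*(df(a)-df(b))/8.
    """
    a, b = x - test_step, x + test_step
    fa, fb = f(a), f(b)
    da, db = df(a), df(b)
    prediction = 0.5 * (fa + fb) + (b - a) * (da - db) / 8.0
    actual = f(x)
    return abs(prediction - actual) / max(abs(actual), 1.0)
```

For a cubic such as f(x) = x³ with the correct derivative the prediction is exact; a wrong derivative produces a large mismatch, which is what triggers completion code -7.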
/*************************************************************************
Levenberg-Marquardt algorithm results
INPUT PARAMETERS:
State - algorithm state
OUTPUT PARAMETERS:
X - array[0..N-1], solution
Rep - optimization report; includes termination codes and
additional information. Termination codes are listed below,
see comments for this structure for more info.
Termination code is stored in rep.terminationtype field:
* -7 derivative correctness check failed;
see rep.wrongnum, rep.wrongi, rep.wrongj for
more information.
* -3 constraints are inconsistent
* 1 relative function improvement is no more than
EpsF.
* 2 relative step is no more than EpsX.
* 4 gradient is no more than EpsG.
* 5 MaxIts steps were taken
* 7 stopping conditions are too stringent,
further improvement is impossible
* 8 terminated by user who called minlmrequesttermination().
X contains point which was "current accepted" when
termination request was submitted.
-- ALGLIB --
Copyright 10.03.2009 by Bochkanov Sergey
*************************************************************************/
public static void minlmresults(minlmstate state,
ref double[] x,
minlmreport rep)
{
x = new double[0];
minlmresultsbuf(state, ref x, rep);
}
/*************************************************************************
This subroutine restarts LM algorithm from new point. All optimization
parameters are left unchanged.
This function allows you to solve multiple optimization problems (which
must have the same number of dimensions) without object reallocation penalty.
INPUT PARAMETERS:
State - structure used for reverse communication previously
allocated with MinLMCreateXXX call.
X - new starting point.
-- ALGLIB --
Copyright 30.07.2010 by Bochkanov Sergey
*************************************************************************/
public static void minlmrestartfrom(minlmstate state,
double[] x)
{
int i_ = 0;
alglib.ap.assert(alglib.ap.len(x)>=state.n, "MinLMRestartFrom: Length(X)<N!");
alglib.ap.assert(apserv.isfinitevector(x, state.n), "MinLMRestartFrom: X contains infinite or NaN values!");
for(i_=0; i_<=state.n-1;i_++)
{
state.xbase[i_] = x[i_];
}
state.rstate.ia = new int[4+1];
state.rstate.ba = new bool[0+1];
state.rstate.ra = new double[2+1];
state.rstate.stage = -1;
clearrequestfields(state);
}
/*************************************************************************
This function sets maximum step length
INPUT PARAMETERS:
State - structure which stores algorithm state between calls and
which is used for reverse communication. Must be
initialized with MinLMCreate???()
StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't
want to limit step length.
Use this subroutine when you optimize a target function which contains exp()
or other fast-growing functions, and the optimization algorithm makes steps
so large that they lead to overflow. This function allows us to reject
steps that are too large (and which therefore expose us to possible
overflow) without actually calculating the function value at x+stp*d.
NOTE: non-zero StpMax leads to moderate performance degradation because
intermediate step of preconditioned L-BFGS optimization is incompatible
with limits on step size.
-- ALGLIB --
Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
public static void minlmsetstpmax(ref minlmstate state,
double stpmax)
{
System.Diagnostics.Debug.Assert((double)(stpmax)>=(double)(0), "MinLMSetStpMax: StpMax<0!");
state.stpmax = stpmax;
}
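The effect of StpMax is simply to shrink the step length so that ||stp*d|| never exceeds the bound. A sketch of that clamping logic (illustrative only; ALGLIB performs this trimming inside its line search):

```python
import numpy as np

def clamp_step(d, stp, stpmax):
    """Limit the step length so that ||stp*d|| <= stpmax.

    stpmax == 0 means "no limit", matching the convention above.
    """
    if stpmax <= 0.0:
        return stp
    norm = np.linalg.norm(d)
    if norm == 0.0 or stp * norm <= stpmax:
        return stp
    return stpmax / norm
```

With d = (3, 4) (norm 5) and stpmax = 1, a proposed stp = 2 is reduced to 0.2 so the actual step has length exactly 1.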
/*************************************************************************
This function is used to change acceleration settings
You can choose between three acceleration strategies:
* AccType=0, no acceleration.
* AccType=1, secant updates are used to update the quadratic model after
each iteration. After a fixed number of iterations (or after model
breakdown) we recalculate the quadratic model using the analytic Jacobian
or finite differences. The number of secant-based iterations depends on the
optimization settings: about 3 iterations when we have an analytic
Jacobian, up to 2*N iterations when we use finite differences to calculate
the Jacobian.
* AccType=2 is accepted for backward compatibility and is treated as
AccType=0 (the code below maps it to 0).
AccType=1 is recommended when the cost of Jacobian calculation is
prohibitively high (several Mx1 function vector calculations followed by
several NxN Cholesky factorizations are faster than calculation of one MxN
Jacobian). It should also be used when we have no Jacobian, because a
finite difference approximation takes too much time to compute.
The table below lists the optimization protocols (protocol XYZ corresponds
to MinLMCreateXYZ) and the acceleration types they support (and use by
default).
ACCELERATION TYPES SUPPORTED BY OPTIMIZATION PROTOCOLS:
protocol 0 1 comment
V + +
VJ + +
FGH +
DEFAULT VALUES:
protocol 0 1 comment
V x without acceleration it is so slooooooooow
VJ x
FGH x
NOTE: this function should be called before optimization. Attempt to call
it during algorithm iterations may result in unexpected behavior.
NOTE: attempt to call this function with unsupported protocol/acceleration
combination will result in exception being thrown.
-- ALGLIB --
Copyright 14.10.2010 by Bochkanov Sergey
*************************************************************************/
public static void minlmsetacctype(minlmstate state,
int acctype)
{
alglib.ap.assert((acctype==0 || acctype==1) || acctype==2, "MinLMSetAccType: incorrect AccType!");
if( acctype==2 )
{
acctype = 0;
}
if( acctype==0 )
{
state.maxmodelage = 0;
state.makeadditers = false;
return;
}
if( acctype==1 )
{
alglib.ap.assert(state.hasfi, "MinLMSetAccType: AccType=1 is incompatible with current protocol!");
if( state.algomode==0 )
{
state.maxmodelage = 2*state.n;
}
else
{
state.maxmodelage = smallmodelage;
}
state.makeadditers = false;
return;
}
}
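The "secant updates" behind AccType=1 are rank-one corrections that make the model Jacobian consistent with the most recent step, without recomputing the full Jacobian. A standard Broyden-style sketch of such an update (assumed illustrative form, not ALGLIB's exact internal formula):

```python
import numpy as np

def broyden_update(J, dx, df):
    """Rank-one secant update so that the new model satisfies J_new @ dx = df.

    J: current (m, n) model Jacobian; dx: last step; df: observed change
    in the residual vector over that step.
    """
    dx = np.asarray(dx, dtype=float)
    df = np.asarray(df, dtype=float)
    return J + np.outer(df - J @ dx, dx) / (dx @ dx)
```

The secant condition J_new @ dx = df holds exactly after the update, which is why a handful of such cheap updates can stand in for a full Jacobian recalculation.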
/*************************************************************************
CLASSIC LEVENBERG-MARQUARDT METHOD FOR NON-LINEAR OPTIMIZATION
Optimization using the Jacobian matrix. Algorithm: the classic
Levenberg-Marquardt method.
Function F is represented as sum of squares:
F = f[0]^2(x[0],...,x[n-1]) + ... + f[m-1]^2(x[0],...,x[n-1])
EXAMPLE
See HTML-documentation.
INPUT PARAMETERS:
N - dimension, N>=1
M - number of functions f[i]
X - initial solution, array[0..N-1]
OUTPUT PARAMETERS:
State - structure which stores algorithm state between subsequent
calls of MinLMIteration. Used for reverse communication.
This structure should be passed to MinLMIteration subroutine.
See also MinLMIteration, MinLMResults.
NOTES:
1. you may tune stopping conditions with MinLMSetCond() function
2. if the target function contains exp() or other fast-growing functions,
and the optimization algorithm makes steps so large that they overflow,
use the MinLMSetStpMax() function to bound the algorithm's steps.
-- ALGLIB --
Copyright 30.03.2009 by Bochkanov Sergey
*************************************************************************/
public static void minlmcreatefj(int n,
int m,
ref double[] x,
ref minlmstate state)
{
int i_ = 0;
//
// Prepare RComm
//
state.rstate.ia = new int[3+1];
state.rstate.ba = new bool[0+1];
state.rstate.ra = new double[7+1];
state.rstate.stage = -1;
//
// prepare internal structures
//
lmprepare(n, m, true, ref state);
//
// initialize, check parameters
//
minlmsetcond(ref state, 0, 0, 0, 0);
minlmsetxrep(ref state, false);
minlmsetstpmax(ref state, 0);
state.n = n;
state.m = m;
state.flags = 0;
state.usermode = lmmodefj;
state.wrongparams = false;
if( n<1 )
{
state.wrongparams = true;
return;
}
for(i_=0; i_<=n-1;i_++)
{
state.x[i_] = x[i_];
}
}
/*************************************************************************
This function sets stopping conditions for Levenberg-Marquardt optimization
algorithm.
INPUT PARAMETERS:
State - structure which stores algorithm state between calls and
which is used for reverse communication. Must be initialized
with MinLMCreate???()
EpsG - >=0
The subroutine finishes its work if the condition
||G||<EpsG is satisfied, where ||.|| means the Euclidean norm,
G - gradient.
EpsF - >=0
The subroutine finishes its work if on k+1-th iteration
the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1}
is satisfied.
EpsX - >=0
The subroutine finishes its work if on k+1-th iteration
the condition |X(k+1)-X(k)| <= EpsX is fulfilled.
MaxIts - maximum number of iterations. If MaxIts=0, the number of
iterations is unlimited. Only Levenberg-Marquardt
iterations are counted (L-BFGS/CG iterations are NOT
counted because their cost is very low compared to that of
LM).
Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to
automatic stopping criterion selection (small EpsX).
-- ALGLIB --
Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
public static void minlmsetcond(ref minlmstate state,
double epsg,
double epsf,
double epsx,
int maxits)
{
System.Diagnostics.Debug.Assert((double)(epsg)>=(double)(0), "MinLMSetCond: negative EpsG!");
System.Diagnostics.Debug.Assert((double)(epsf)>=(double)(0), "MinLMSetCond: negative EpsF!");
System.Diagnostics.Debug.Assert((double)(epsx)>=(double)(0), "MinLMSetCond: negative EpsX!");
System.Diagnostics.Debug.Assert(maxits>=0, "MinLMSetCond: negative MaxIts!");
if( (double)(epsg)==(double)(0) && (double)(epsf)==(double)(0) && (double)(epsx)==(double)(0) && maxits==0 )
{
epsx = 1.0E-6;
}
state.epsg = epsg;
state.epsf = epsf;
state.epsx = epsx;
state.maxits = maxits;
}
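The four stopping conditions documented above can be expressed as one small check, returning the same termination codes that MinLMResults() reports. An illustrative sketch (the real solver applies these tests inside its iteration loop):

```python
def check_stopping(gnorm, fprev, fcur, stepnorm, iters,
                   epsg, epsf, epsx, maxits):
    """Return an ALGLIB-style termination code, or 0 to continue.

    gnorm: gradient norm; fprev/fcur: F on consecutive iterations;
    stepnorm: |X(k+1)-X(k)|; iters: iterations taken so far.
    """
    if epsg > 0 and gnorm <= epsg:
        return 4                    # gradient small enough
    if epsf > 0 and abs(fcur - fprev) <= epsf * max(abs(fprev), abs(fcur), 1.0):
        return 1                    # relative function improvement small
    if epsx > 0 and stepnorm <= epsx:
        return 2                    # step small enough
    if maxits > 0 and iters >= maxits:
        return 5                    # iteration budget exhausted
    return 0
```

Note the EpsF test is relative (scaled by max{|F(k)|, |F(k+1)|, 1}), while the EpsX test is on the raw step; passing all zeros corresponds to the automatic EpsX=1.0E-6 default chosen by the code above.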
/*************************************************************************
Levenberg-Marquardt algorithm results
Called after MinLMIteration returned False.
Input parameters:
State - algorithm state (used by MinLMIteration).
Output parameters:
X - array[0..N-1], solution
Rep - optimization report:
* Rep.TerminationType completion code:
* -1 incorrect parameters were specified
* 1 relative function improvement is no more than
EpsF.
* 2 relative step is no more than EpsX.
* 4 gradient is no more than EpsG.
* 5 MaxIts steps were taken
* 7 stopping conditions are too stringent,
further improvement is impossible
* Rep.IterationsCount contains iterations count
* Rep.NFunc - number of function calculations
* Rep.NJac - number of Jacobi matrix calculations
* Rep.NGrad - number of gradient calculations
* Rep.NHess - number of Hessian calculations
* Rep.NCholesky - number of Cholesky decomposition calculations
-- ALGLIB --
Copyright 10.03.2009 by Bochkanov Sergey
*************************************************************************/
public static void minlmresults(ref minlmstate state,
ref double[] x,
ref minlmreport rep)
{
int i_ = 0;
x = new double[state.n-1+1];
for(i_=0; i_<=state.n-1;i_++)
{
x[i_] = state.x[i_];
}
rep.iterationscount = state.repiterationscount;
rep.terminationtype = state.repterminationtype;
rep.nfunc = state.repnfunc;
rep.njac = state.repnjac;
rep.ngrad = state.repngrad;
rep.nhess = state.repnhess;
rep.ncholesky = state.repncholesky;
}
/*************************************************************************
Prepare internal structures (except for RComm).
Note: M must be zero for FGH mode, non-zero for FJ/FGJ mode.
*************************************************************************/
private static void lmprepare(int n,
int m,
bool havegrad,
minlmstate state)
{
if( n<=0 || m<0 )
{
return;
}
if( havegrad )
{
state.g = new double[n-1+1];
}
if( m!=0 )
{
state.j = new double[m-1+1, n-1+1];
state.fi = new double[m-1+1];
state.h = new double[0+1, 0+1];
}
else
{
state.j = new double[0+1, 0+1];
state.fi = new double[0+1];
state.h = new double[n-1+1, n-1+1];
}
state.x = new double[n-1+1];
state.rawmodel = new double[n-1+1, n-1+1];
state.model = new double[n-1+1, n-1+1];
state.xbase = new double[n-1+1];
state.xprec = new double[n-1+1];
state.gbase = new double[n-1+1];
state.xdir = new double[n-1+1];
state.xprev = new double[n-1+1];
state.work = new double[Math.Max(n, m)+1];
}
/*************************************************************************
This function sets scaling coefficients for LM optimizer.
ALGLIB optimizers use scaling matrices to test stopping conditions (step
size and gradient are scaled before comparison with tolerances). Scale of
the I-th variable is a translation invariant measure of:
a) "how large" the variable is
b) how large the step should be to make significant changes in the function
Generally, scale is NOT considered to be a form of preconditioner. But LM
optimizer is unique in that it uses scaling matrix both in the stopping
condition tests and as Marquardt damping factor.
Proper scaling is very important for the algorithm performance. It is less
important for the quality of results, but still has some influence (it is
easier to converge when variables are properly scaled, so premature
stopping is possible when very badly scaled variables are combined with
relaxed stopping conditions).
INPUT PARAMETERS:
State - structure which stores the algorithm state
S - array[N], non-zero scaling coefficients
S[i] may be negative, sign doesn't matter.
-- ALGLIB --
Copyright 14.01.2011 by Bochkanov Sergey
*************************************************************************/
public static void minlmsetscale(minlmstate state,
double[] s)
{
int i = 0;
alglib.ap.assert(alglib.ap.len(s)>=state.n, "MinLMSetScale: Length(S)<N");
for(i=0; i<=state.n-1; i++)
{
alglib.ap.assert(math.isfinite(s[i]), "MinLMSetScale: S contains infinite or NAN elements");
alglib.ap.assert((double)(s[i])!=(double)(0), "MinLMSetScale: S contains zero elements");
state.s[i] = Math.Abs(s[i]);
}
}
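Scaling enters the stopping tests by measuring steps in the rescaled variables dx[i]/S[i]. A one-function sketch of that scaled norm (illustrative, not the internal ALGLIB routine):

```python
import numpy as np

def scaled_step_norm(dx, s):
    """Norm of a step measured in scaled variables dx[i]/s[i].

    The sign of s[i] is ignored, matching the comment above.
    """
    s = np.abs(np.asarray(s, dtype=float))
    return float(np.linalg.norm(np.asarray(dx, dtype=float) / s))
```

With S = (1e-3, 1e3), a step of (1e-3, 1e3) in raw coordinates is a unit step in each scaled variable, so its scaled norm is sqrt(2); without scaling the same step would look enormous in one variable and negligible in the other.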
/*************************************************************************
One Levenberg-Marquardt iteration.
Called after initialization of the State structure with a MinLMXXX subroutine.
See HTML docs for examples.
Input parameters:
State - structure which stores algorithm state between subsequent
calls and which is used for reverse communication. Must be
initialized with MinLMXXX call first.
If the subroutine returns False, the iterative algorithm has converged.
If the subroutine returns True, then:
* if State.NeedF=True, - function value F at State.X[0..N-1]
is required
* if State.NeedFG=True - function value F and gradient G
are required
* if State.NeedFiJ=True - function vector f[i] and Jacobi matrix J
are required
* if State.NeedFGH=True - function value F, gradient G and Hessian H
are required
* if State.XUpdated=True - algorithm reports about new iteration,
State.X contains current point,
State.F contains function value.
One and only one of these fields can be set at a time.
Results are stored:
* function value - in MinLMState.F
* gradient - in MinLMState.G[0..N-1]
* Jacobi matrix - in MinLMState.J[0..M-1,0..N-1]
* Hessian - in MinLMState.H[0..N-1,0..N-1]
-- ALGLIB --
Copyright 10.03.2009 by Bochkanov Sergey
*************************************************************************/
public static bool minlmiteration(ref minlmstate state)
{
bool result = new bool();
int n = 0;
int m = 0;
int i = 0;
double stepnorm = 0;
bool spd = new bool();
double fbase = 0;
double fnew = 0;
double lambda = 0;
double nu = 0;
double lambdaup = 0;
double lambdadown = 0;
int lbfgsflags = 0;
double v = 0;
int i_ = 0;
//
// Reverse communication preparations
// I know it looks ugly, but it works the same way
// anywhere from C++ to Python.
//
// This code initializes locals by:
// * random values determined during code
// generation - on first subroutine call
// * values from previous call - on subsequent calls
//
if( state.rstate.stage>=0 )
{
n = state.rstate.ia[0];
m = state.rstate.ia[1];
i = state.rstate.ia[2];
lbfgsflags = state.rstate.ia[3];
spd = state.rstate.ba[0];
stepnorm = state.rstate.ra[0];
fbase = state.rstate.ra[1];
fnew = state.rstate.ra[2];
lambda = state.rstate.ra[3];
nu = state.rstate.ra[4];
lambdaup = state.rstate.ra[5];
lambdadown = state.rstate.ra[6];
v = state.rstate.ra[7];
}
else
{
n = -983;
m = -989;
i = -834;
lbfgsflags = 900;
spd = true;
stepnorm = 364;
fbase = 214;
fnew = -338;
lambda = -686;
nu = 912;
lambdaup = 585;
lambdadown = 497;
v = -271;
}
if( state.rstate.stage==0 )
{
//......... part of the code omitted here .........
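The reverse-communication pattern above (call iterate, inspect the Need* flags, supply the requested values, repeat until False) can be illustrated with a toy one-dimensional minimizer. This is a deliberately simplified sketch: ALGLIB resumes through rstate.stage, while this toy uses a Python attribute, and gradient descent stands in for the actual LM step:

```python
class RCommState:
    """Toy reverse-communication state: minimize F(x) by gradient descent."""
    def __init__(self, x0, step=0.1, maxits=200):
        self.x = float(x0)
        self.needfg = False     # True => caller must fill in self.f, self.g
        self.f = 0.0
        self.g = 0.0
        self._step, self._maxits = step, maxits
        self._its, self._phase = 0, 0

    def iterate(self):
        if self._phase == 1:
            # a just-completed evaluation is available in self.f, self.g
            self.x -= self._step * self.g
            self._its += 1
            if abs(self.g) < 1e-8 or self._its >= self._maxits:
                self.needfg = False
                return False    # converged: caller reads self.x
        self.needfg = True      # request F and G at the current self.x
        self._phase = 1
        return True

# Driver loop in the shape the comment above describes:
state = RCommState(5.0)
while state.iterate():
    if state.needfg:
        state.f = (state.x - 2.0) ** 2
        state.g = 2.0 * (state.x - 2.0)
```

The caller owns the objective; the state object only ever asks for values at points it chooses, which is what lets the same solver core work "anywhere from C++ to Python".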
/*************************************************************************
This function sets boundary constraints for LM optimizer
Boundary constraints are inactive by default (after initial creation).
They are preserved until explicitly turned off with another SetBC() call.
INPUT PARAMETERS:
State - structure which stores the algorithm state
BndL - lower bounds, array[N].
If some (all) variables are unbounded, you may specify a
very small number or -INF (the latter is recommended because
it allows the solver to use a better algorithm).
BndU - upper bounds, array[N].
If some (all) variables are unbounded, you may specify a
very large number or +INF (the latter is recommended because
it allows the solver to use a better algorithm).
NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case I-th
variable will be "frozen" at X[i]=BndL[i]=BndU[i].
NOTE 2: this solver has following useful properties:
* bound constraints are always satisfied exactly
* function is evaluated only INSIDE area specified by bound constraints
or at its boundary
-- ALGLIB --
Copyright 14.01.2011 by Bochkanov Sergey
*************************************************************************/
public static void minlmsetbc(minlmstate state,
double[] bndl,
double[] bndu)
{
int i = 0;
int n = 0;
n = state.n;
alglib.ap.assert(alglib.ap.len(bndl)>=n, "MinLMSetBC: Length(BndL)<N");
alglib.ap.assert(alglib.ap.len(bndu)>=n, "MinLMSetBC: Length(BndU)<N");
for(i=0; i<=n-1; i++)
{
alglib.ap.assert(math.isfinite(bndl[i]) || Double.IsNegativeInfinity(bndl[i]), "MinLMSetBC: BndL contains NAN or +INF");
alglib.ap.assert(math.isfinite(bndu[i]) || Double.IsPositiveInfinity(bndu[i]), "MinLMSetBC: BndU contains NAN or -INF");
state.bndl[i] = bndl[i];
state.havebndl[i] = math.isfinite(bndl[i]);
state.bndu[i] = bndu[i];
state.havebndu[i] = math.isfinite(bndu[i]);
}
}
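The "bound constraints are always satisfied exactly" property amounts to projecting every trial point onto the box [BndL, BndU], with infinite entries leaving that side unbounded. A sketch of the projection (illustrative; ALGLIB applies this inside its step logic):

```python
import numpy as np

def project_to_bounds(x, bndl, bndu):
    """Clip x into [bndl, bndu]; -inf/+inf entries leave a side unbounded."""
    return np.minimum(np.maximum(np.asarray(x, dtype=float), bndl), bndu)
```

A frozen variable (BndL[i] = BndU[i]) is handled by the same formula: the clip forces X[i] to that common value.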
/*************************************************************************
IMPROVED LEVENBERG-MARQUARDT METHOD FOR
NON-LINEAR LEAST SQUARES OPTIMIZATION
DESCRIPTION:
This function is used to find the minimum of a function represented as a
sum of squares:
F(x) = f[0]^2(x[0],...,x[n-1]) + ... + f[m-1]^2(x[0],...,x[n-1])
using the value of the function vector f[] and the Jacobian of f[].
REQUIREMENTS:
This algorithm will request the following information during its operation:
* function vector f[] at given point X
* function vector f[] and Jacobian of f[] (simultaneously) at given point
There are several overloaded versions of MinLMOptimize() function which
correspond to different LM-like optimization algorithms provided by this
unit. You should choose the version which accepts fvec() and jac() callbacks.
The first one is used to calculate f[] at a given point, the second one
calculates f[] and the Jacobian df[i]/dx[j].
You can try to initialize the MinLMState structure with the VJ function and
then use an incorrect version of MinLMOptimize() (for example, a version
which works with a general form function and does not provide the Jacobian),
but this will lead to an exception being thrown after the first attempt to
calculate the Jacobian.
USAGE:
1. User initializes algorithm state with MinLMCreateVJ() call
2. User tunes solver parameters with MinLMSetCond(), MinLMSetStpMax() and
other functions
3. User calls MinLMOptimize() function which takes algorithm state and
callback functions.
4. User calls MinLMResults() to get solution
5. Optionally, user may call MinLMRestartFrom() to solve another problem
with same N/M but another starting point and/or another function.
MinLMRestartFrom() allows you to reuse an already initialized structure.
INPUT PARAMETERS:
N - dimension, N>=1
* if given, only leading N elements of X are used
* if not given, automatically determined from size of X
M - number of functions f[i]
X - initial solution, array[0..N-1]
OUTPUT PARAMETERS:
State - structure which stores algorithm state
NOTES:
1. you may tune stopping conditions with MinLMSetCond() function
2. if the target function contains exp() or other fast-growing functions,
and the optimization algorithm makes steps so large that they overflow,
use the MinLMSetStpMax() function to bound the algorithm's steps.
-- ALGLIB --
Copyright 30.03.2009 by Bochkanov Sergey
*************************************************************************/
public static void minlmcreatevj(int n,
int m,
double[] x,
minlmstate state)
{
ap.assert(n>=1, "MinLMCreateVJ: N<1!");
ap.assert(m>=1, "MinLMCreateVJ: M<1!");
ap.assert(ap.len(x)>=n, "MinLMCreateVJ: Length(X)<N!");
ap.assert(apserv.isfinitevector(x, n), "MinLMCreateVJ: X contains infinite or NaN values!");
//
// initialize, check parameters
//
state.n = n;
state.m = m;
state.algomode = 1;
state.hasf = false;
state.hasfi = true;
state.hasg = false;
//
// second stage of initialization
//
lmprepare(n, m, false, state);
minlmsetacctype(state, 0);
minlmsetcond(state, 0, 0, 0, 0);
minlmsetxrep(state, false);
minlmsetstpmax(state, 0);
minlmrestartfrom(state, x);
}
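Putting the pieces together, the core of the Levenberg-Marquardt scheme described in these comments is: solve the damped normal equations (JᵀJ + λI)dx = -Jᵀf, accept the step if the sum of squares decreases (shrinking λ), otherwise increase λ and retry. A compact toy sketch of that loop (no scaling, bounds, or acceleration; not ALGLIB's implementation):

```python
import numpy as np

def minimize_lm(fvec, jac, x0, maxits=100, eps=1e-10):
    """Tiny Levenberg-Marquardt loop for F(x) = sum f_i(x)^2."""
    x = np.asarray(x0, dtype=float).copy()
    lam = 1e-3
    for _ in range(maxits):
        f, J = fvec(x), jac(x)
        g = J.T @ f                         # half-gradient of F
        if np.linalg.norm(g) < eps:
            break
        while True:
            dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
            xt = x + dx
            ft = fvec(xt)
            if ft @ ft < f @ f:
                x, lam = xt, lam * 0.5      # accept; trust the model more
                break
            lam *= 10.0                     # reject; damp harder
            if lam > 1e16:
                return x                    # no acceptable step found
    return x
```

For a linear residual vector the very first damped step lands essentially at the least-squares solution, which is why small λ behaves like Gauss-Newton and large λ like gradient descent.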